
Article Brief
Why this article matters
Securing agentic AI today means juggling latency-sensitive guardrails, stale threat models, and fragmented tooling—with no unified standard in sight. This post unpacks the A2AS paper's two key artifacts: the BASIC mental model (Boundary, Authentication, Secrecy, Integrity, Consent) and the open-source A2AS Python framework with its policy-markup system. You'll see how each maps to real attack classes (user→agent, agent→tool, agent→agent), honest limitations, and whether it's practical enough to adopt in your own stack.
Good morning, good afternoon, and good evening!
Today I want to share some thoughts on a recently published academic paper that, honestly, surprised me with its practical approach and its potential to simplify security in agentic AI applications.
If you work in AI Security, you'll surely relate to the challenges I mention here.
The problem: Security in GenAI is a mess
In the world of security for GenAI applications, we deal with:
- Increased latency from controls and validations, which can affect user experience and operational efficiency
- Guardrails that become outdated against new threats, since attack vectors constantly evolve and controls must be continuously updated
- Dependency on third-party libraries (and their bugs!), which introduces risks of external vulnerabilities and compatibility issues
- Complex ecosystems that are difficult to maintain, where each integration can be a point of failure or exposure
- Hyper-specific security solutions for each business unit or application, making it difficult to standardize and scale controls
All of this makes security a puzzle: expensive and hard to scale. Furthermore, the lack of universal protocols for secure communication between AI agents creates gaps that sophisticated attackers can exploit [1][2].
The paper: "A2AS: Agent-to-Agent Security for LLM-based Autonomous Agents"
This paper came out just hours ago (it's not on arXiv yet, but you can find it in the comments of the original post).
The research team proposes two things:
- A security model called BASIC
- An open source framework: A2AS
Why is it relevant?
For the first time, I see a proposal that aims to be the "HTTPS" of security in agentic AI systems. That is, a simple, modular, and easy-to-implement standard that doesn't depend on the application or the business, but rather protects the communication and integrity of agents and their tools.
The A2AS framework is designed to be interoperable across different platforms and vendors, allowing AI agents to collaborate and communicate securely, even in complex enterprise environments [3][4][5].
The BASIC model
BASIC is an acronym that summarizes the five security pillars for agentic systems:
- Boundary: Control of agent input and output boundaries. This includes explicitly defining which actions an agent may perform and which resources it may access, using behavior certificates and wrappers that validate inputs and outputs before they interact with the real world
- Authentication: Identity verification between agents and tools. Mechanisms such as JWT, OAuth 2.0, OpenID Connect, and RSA keys ensure that only authorized agents can communicate and execute tasks [1][5]
- Secrecy: Protection of sensitive information through encryption of data in transit and at rest, so that confidentiality holds even if communication is intercepted
- Integrity: Assurance that data has not been altered in transit, using digital signatures and cryptographic validation to detect any manipulation or corruption of information
- Consent: Control of permissions and authorized actions, implementing role-based access models and policies that define what each agent may do in each context
Each pillar has controls and implementation examples. For instance:
- For "Boundary", they propose wrappers that validate and filter agent inputs/outputs before they interact with the real world
- For "Authentication", the use of mutual authentication and dynamic credential management is recommended, with contextual validation and probabilistic identity scoring [1]
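As an illustration of the boundary idea, here is a minimal wrapper sketch. The blocked patterns and the `CALL:<name>` tool-call convention are my own assumptions for the example, not part of A2AS:

```python
import re

# Toy input-boundary rules (illustrative; a real deployment needs far more)
BLOCKED_PATTERNS = [
    re.compile(r"ignore (all|previous) instructions", re.IGNORECASE),
    re.compile(r"reveal (the )?system prompt", re.IGNORECASE),
]

def extract_tool_calls(output: str):
    # Toy parser: assumes the agent writes tool calls as CALL:<name>
    return re.findall(r"CALL:(\w+)", output)

def boundary_wrapper(agent_fn, allowed_tools):
    """Wrap an agent callable so inputs and outputs are validated
    before they touch the real world (hypothetical sketch)."""
    def wrapped(user_input: str) -> str:
        # Input boundary: reject inputs matching known injection patterns
        for pattern in BLOCKED_PATTERNS:
            if pattern.search(user_input):
                raise ValueError("input rejected by boundary policy")
        output = agent_fn(user_input)
        # Output boundary: only whitelisted tool invocations may pass
        for tool_call in extract_tool_calls(output):
            if tool_call not in allowed_tools:
                raise PermissionError(f"tool '{tool_call}' not permitted")
        return output
    return wrapped
```

The same wrapper shape works on both sides of the agent: one pass before the prompt reaches the model, one pass before any tool call leaves it.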

The A2AS framework
A2AS is a set of Python modules that implement controls for each pillar of BASIC.
Some notable modules:
- `a2as.boundary`: Defines and enforces limits on agent actions, restricting access to resources and functions based on behavior certificates
- `a2as.integrity`: Verifies that data has not been modified, using hashes and digital signatures to ensure information integrity
- `a2as.secrecy`: Handles encryption and protection of sensitive data, implementing advanced encryption algorithms and secure key management
- `a2as.auth`: Authentication and authorization between agents and tools, supporting multiple authentication schemes and role-based access control
The framework also includes modules for:
- Logging and telemetry
- Policy validation
- Automated behavior testing
This facilitates integration into AI development and deployment pipelines [4].
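To make the integrity pillar concrete, here is a standalone sketch using HMAC-SHA256 from Python's standard library. It approximates what a module like `a2as.integrity` might do, but it is not the framework's actual API:

```python
import hashlib
import hmac
import json

def sign_message(payload: dict, key: bytes) -> dict:
    """Attach an HMAC-SHA256 signature so the receiver can detect tampering."""
    body = json.dumps(payload, sort_keys=True).encode()
    return {
        "payload": payload,
        "signature": hmac.new(key, body, hashlib.sha256).hexdigest(),
    }

def verify_message(envelope: dict, key: bytes) -> bool:
    """Recompute the signature over the payload and compare in constant time."""
    body = json.dumps(envelope["payload"], sort_keys=True).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["signature"])
```

If a compromised agent alters even one field of the payload in transit, verification fails and the message can be dropped before it reaches a tool.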

Usage example
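The paper's exact API is not yet documented publicly, so here is a self-contained sketch of the kind of policy pipeline the BASIC pillars suggest. Every name below (`Policy`, `authorize`) is my own invention for illustration, not the framework's interface:

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    """Toy policy object mirroring three BASIC pillars (illustrative only)."""
    allowed_tools: set = field(default_factory=set)    # Boundary
    trusted_agents: set = field(default_factory=set)   # Authentication
    permissions: dict = field(default_factory=dict)    # Consent: agent -> actions

def authorize(policy: Policy, agent_id: str, tool: str, action: str) -> bool:
    """Run the checks in order; any single failure denies the request."""
    if agent_id not in policy.trusted_agents:                  # Authentication
        return False
    if tool not in policy.allowed_tools:                       # Boundary
        return False
    if action not in policy.permissions.get(agent_id, set()):  # Consent
        return False
    return True

policy = Policy(
    allowed_tools={"search", "calendar"},
    trusted_agents={"assistant-1"},
    permissions={"assistant-1": {"read"}},
)

print(authorize(policy, "assistant-1", "search", "read"))    # True
print(authorize(policy, "assistant-1", "search", "delete"))  # False: no consent
print(authorize(policy, "intruder", "search", "read"))       # False: not trusted
```

The point of the sketch is the ordering: a request must clear every pillar's check before execution, which is what makes the controls composable and reusable across applications.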
Attack cases it addresses
The paper describes real-world attacks such as:
- User-to-Agent: A malicious user attempts to manipulate the agent with prompts designed to bypass controls, for example by injecting hidden instructions or exploiting weaknesses in natural language processing.
- Agent-to-Tool: An agent attempts to exploit vulnerabilities in an external tool, such as misconfigured APIs or insecure dependencies, to gain unauthorized access or modify critical data.
- Agent-to-Agent: A compromised agent attempts to attack other agents in the system, whether through identity spoofing, message manipulation, or exploitation of insecure communication channels.
A2AS provides controls to mitigate these attack vectors in a centralized and reusable way.
Additionally, it includes mechanisms for:
- Rate limiting
- Anomaly detection
- Automatic isolation of suspicious agents
This enables rapid response to security incidents [1][2].
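Rate limiting, for example, is commonly implemented as a token bucket. This generic sketch illustrates the idea; it is not A2AS's actual mechanism:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: refills at `rate` tokens/sec up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Each agent gets its own bucket; bursts beyond capacity are denied
bucket = TokenBucket(rate=1.0, capacity=3)
results = [bucket.allow() for _ in range(5)]
```

An agent that suddenly exceeds its budget is a natural trigger for the anomaly-detection and isolation mechanisms mentioned above.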
Roadmap and limitations
The framework is still in development, but it already has functional modules and an active community.
Current limitations:
- Limited integration: Native integration with some agent orchestration frameworks is lacking, although adapters and plugins are being developed to facilitate interoperability
- Manual configuration: Some controls require manual configuration and fine-tuning based on each application's context, which can increase the initial complexity of adoption
- Partial coverage: It doesn't cover all possible attack vectors (but it does cover the most critical ones), and the team is working on expanding coverage and improving the adaptability of controls
- Performance: Performance may be affected in high-concurrency scenarios, so scalability testing is recommended before deploying to production
Future roadmap:
- Integration with monitoring systems
- Support for new encryption algorithms
- Creation of an automated test suite to validate agent security across different environments [4]
My personal conclusion
What excites me most about A2AS is that, at last, we have a common foundation for building security in agentic AI systems, without having to reinvent the wheel for every project.
The BASIC model is easy to understand and the A2AS framework is flexible enough to adapt to different scenarios. Moreover, the community behind A2AS is open to collaborations and contributions, which accelerates the evolution of the standard and its adoption in the industry.
Is it the definitive solution? No, but it's a huge step toward standardization and simplification of security in GenAI.
If you're interested in protecting your AI systems and participating in building a more secure ecosystem, I recommend exploring the framework and joining the discussion.
Interested in trying it out or joining the discussion on how useful this really is? Reach out to me on LinkedIn, email, or directly on YouTube and let's keep the conversation going! A big hug and thanks for reading this far!
Test Your Technical Knowledge
A2AS and BASIC Recap
1. What two main contributions does the post say the paper introduces?
2. Within the BASIC model, which pillar focuses on controlling what actions an agent can perform and which resources it can access?
3. According to the implementation example in the post, which A2AS module is used to verify that data has not been altered?
Footnotes
1. https://www.solo.io/blog/deep-dive-mcp-and-a2a-attack-vectors-for-ai-agents
2. https://developers.googleblog.com/en/a2a-a-new-era-of-agent-interoperability/
3. https://www.blott.com/blog/post/how-the-agent2agent-protocol-a2a-actually-works-a-technical-breakdown
4. https://live.paloaltonetworks.com/t5/community-blogs/safeguarding-ai-agents-an-in-depth-look-at-a2a-protocol-risks/ba-p/1235996
5. https://dev.to/czmilo/2025-complete-guide-agent2agent-a2a-protocol-the-new-standard-for-ai-agent-collaboration-1pph
AI Security Series
Part 3 of 5
1. Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection
2. DemonAgent Exposed: Understanding Multi-Backdoor Implantation Attacks on LLMs
3. A2AS: A New Standard for Security in Agentic AI Systems
4. MCP Security for Enterprise Organizations: Real-world experiences and advanced defense
5. Rules vs. Skills: Creating Secure AI Context in Engineering Teams