
Article Brief
Why this article matters
Mixing up Cursor Rules and Agent Skills isn't just a stylistic choice—it bloats context windows, degrades model reasoning, and creates real attack surfaces. This post draws the architectural boundary precisely: how each mechanism works under the hood, a decision framework your engineering team can use today, and the concrete security threats—CVE-documented rule poisoning, MCP config backdoors, supply chain prompt injection—that emerge the moment you treat these files as passive documentation.
AI Security Series
Part 3 of 3
1. A2AS: A New Standard for Security in Agentic AI Systems
2. MCP Security for Enterprise Organizations: Real-world experiences and advanced defense
3. Rules vs. Skills: Creating a secure AI Context in our organizations
Related reading

MCP Security for Enterprise Organizations: Real-world experiences and advanced defense
A personal reflection and technical analysis of the MCP protocol, from the challenge of presenting it to the community to real-world attack methods, MCP Server risks, and recommended defenses for organizations. Includes resources, papers, and key sites for modern research in AI agent security.

A2AS: A New Standard for Security in Agentic AI Systems
A reflection on and analysis of the A2AS paper, the BASIC model, and the A2AS framework, from the perspective of real-world challenges in controls and attack mitigation for AI Security and GenAI applications.

DemonAgent Exposed: Understanding Multi-Backdoor Implantation Attacks on LLMs
An analysis of the DemonAgent research paper, showing how attackers can implant multiple backdoors in LLM-based agents, and the technical mechanisms behind these attacks.

