New Vulnerability in GitHub Copilot and Cursor: How Hackers Can Weaponize Code Agents
Pillar researchers have uncovered an undocumented supply chain attack vector, "Rules File Backdoor," that exploits AI-powered code editors like GitHub Copilot and Cursor. Attackers embed malicious prompts within seemingly benign configuration files, specifically, rules files that guide AI behavior. By leveraging hidden Unicode characters and advanced evasion techniques, these prompts manipulate the AI to generate backdoored code. This compromised code can then propagate across projects, making this supply chain risk particularly dangerous. Even a simple HTML-only page generated by Cursor AI would contain a malicious script sourced from the adversary if affected. The most common infection vectors include malicious actors sharing "helpful" rule files in developer communities that unwitting developers then incorporate and also pull requests to popular open-source repositories that include poisoned rule files.
Both GitHub and Cursor have been notified, with the companies reiterating that users are responsible for reviewing AI-generated code. The attack's persistence through project forking further exacerbates the risk, potentially affecting numerous end-users and repositories. Another notable aspect of this attack that makes it particularly dangerous is that the AI assistant never mentions the addition of the script tag in its response to the developer. A 2024 GitHub survey found that nearly all enterprise developers (97%) are using Generative AI coding tools. These tools have essentially become mission-critical to software development operations across organizations.
The "Rules File Backdoor" attack can manifest in several dangerous ways:
Security Officer Comments:
This "Rules File Backdoor" attack highlights a critical vulnerability in the rapidly evolving attack surface of AI-assisted software development. The ability to inject malicious instructions into AI-generated code through manipulated rules files represents a significant shift in supply chain attacks. The use of zero-width characters and bidirectional text markers to conceal malicious prompts demonstrates a sophisticated understanding of how to bypass traditional code review mechanisms and leverage the AI's natural language processing capabilities against itself. The fact that the attack persists through project forking is particularly concerning, as it allows malicious code to spread silently and widely, potentially impacting an unprecedented amount of users.
This development occurs as the cybersecurity community identifies another supply chain attack affecting GitHub Actions that tricks users into unwittingly divulging CD/CI secrets like AWS access keys. The reliance on AI for code generation, despite offering significant productivity gains, also introduces new attack vectors that require careful scrutiny. The response from GitHub and Cursor, emphasizing user responsibility, underscores the need for robust security practices in AI-assisted development. Therefore, organizations must implement thorough code review processes, even for AI-generated code, and be vigilant against the introduction of seemingly benign configuration files from untrusted sources.
Suggested Corrections:
Technical Countermeasures:
Disclosure Timeline:
Cursor
GitHub
Link(s):
https://thehackernews.com/2025/03/new-rules-file-backdoor-attack-lets.html
https://www.pillar.security/blog/new-vulnerability-in-github-copilot-and-cursor-how-hackers-can-weaponize-code-agents