icon

Digital safety starts here for both commercial and personal Use...

Defend Your Business Against the Latest WNY Cyber Threats We offer Safe, Secure and Affordable Solutions for your Business and Personal Networks and Devices.



WNYCyber is there to help you to choose the best service providers in Western New York... We DO NOT provide the services ourselves, as we are Internet Programmers who have to deak daily with Cyber Threats... (Ugghhh)... So we know what it's like and what it takes to protect OUR and OUR CUSTOMERS DATA... We built this Website to help steer you to those that can give you the best service at realistic and non-inflated prices. We do charge or collect any fees.

New Vulnerability in GitHub Copilot and Cursor: How Hackers Can Weaponize Code Agents

Summary:
Pillar researchers have uncovered an undocumented supply chain attack vector, "Rules File Backdoor," that exploits AI-powered code editors like GitHub Copilot and Cursor. Attackers embed malicious prompts within seemingly benign configuration files, specifically, rules files that guide AI behavior. By leveraging hidden Unicode characters and advanced evasion techniques, these prompts manipulate the AI to generate backdoored code. This compromised code can then propagate across projects, making this supply chain risk particularly dangerous. Even a simple HTML-only page generated by Cursor AI would contain a malicious script sourced from the adversary if affected. The most common infection vectors include malicious actors sharing "helpful" rule files in developer communities that unwitting developers then incorporate and also pull requests to popular open-source repositories that include poisoned rule files.

Both GitHub and Cursor have been notified, with the companies reiterating that users are responsible for reviewing AI-generated code. The attack's persistence through project forking further exacerbates the risk, potentially affecting numerous end-users and repositories. Another notable aspect of this attack that makes it particularly dangerous is that the AI assistant never mentions the addition of the script tag in its response to the developer. A 2024 GitHub survey found that nearly all enterprise developers (97%) are using Generative AI coding tools. These tools have essentially become mission-critical to software development operations across organizations.

The "Rules File Backdoor" attack can manifest in several dangerous ways:
  • Overriding Security Controls: Injected malicious directives can override safe defaults, causing the AI to generate code that bypasses security checks or includes vulnerable constructs. In our example above, a seemingly innocuous HTML best practices rule was weaponized to insert a potentially malicious script tag.
  • Generating Vulnerable Code:By instructing the AI to incorporate backdoors or insecure practices, attackers can cause the AI to output code with embedded vulnerabilities. For example, a malicious rule might direct the AI to:
    • Prefer insecure cryptographic algorithms
    • Implement authentication checks with subtle bypasses
    • Disable input validation in specific contexts
  • Data Exfiltration: A well-crafted malicious rule could direct the AI to add code that leaks sensitive information. For instance, rules that instruct the AI to "follow best practices for debugging" might secretly direct it to add code that exfiltrates:
    • Environment variables
    • Database credentials
    • API keys
    • User data
  • Long-Term Persistence: Once a compromised rule file is incorporated into a project repository, it affects all future code generation. Even more concerning, these poisoned rules often survive project forking, creating a vector for supply chain attacks affecting downstream dependencies.
Security Officer Comments:
This "Rules File Backdoor" attack highlights a critical vulnerability in the rapidly evolving attack surface of AI-assisted software development. The ability to inject malicious instructions into AI-generated code through manipulated rules files represents a significant shift in supply chain attacks. The use of zero-width characters and bidirectional text markers to conceal malicious prompts demonstrates a sophisticated understanding of how to bypass traditional code review mechanisms and leverage the AI's natural language processing capabilities against itself. The fact that the attack persists through project forking is particularly concerning, as it allows malicious code to spread silently and widely, potentially impacting an unprecedented amount of users.

This development occurs as the cybersecurity community identifies another supply chain attack affecting GitHub Actions that tricks users into unwittingly divulging CD/CI secrets like AWS access keys. The reliance on AI for code generation, despite offering significant productivity gains, also introduces new attack vectors that require careful scrutiny. The response from GitHub and Cursor, emphasizing user responsibility, underscores the need for robust security practices in AI-assisted development. Therefore, organizations must implement thorough code review processes, even for AI-generated code, and be vigilant against the introduction of seemingly benign configuration files from untrusted sources.

Suggested Corrections:
Technical Countermeasures:
  • Audit Existing Rules: Review all rule files in your repositories for potential malicious instructions, focusing on invisible Unicode characters and unusual formatting.
  • Implement Validation Processes: Establish review procedures specifically for AI configuration files, treating them with the same scrutiny as executable code.
  • Deploy Detection Tools: Implement tools that can identify suspicious patterns in rule files and monitor AI-generated code for indicators of compromise.
  • Review AI-Generated Code: Pay special attention to unexpected additions like external resource references, unusual imports, or complex expressions.
Disclosure Timeline:
Cursor
  • February 26, 2025: Initial responsible disclosure to Cursor
  • February 27, 2025: Cursor replied that they are investigating the issue
  • March 6, 2025: Cursor replied and determined that this risk falls under the users' responsibility
  • March 7, 2025: Pillar provided more detailed information and demonstration of the vulnerability implications
  • March 8, 2025: Cursor maintained their initial position, stating it is not a vulnerability on their side
GitHub
  • March 12, 2025: Initial responsible disclosure to GitHub
  • March 12, 2025: GitHub replied and determined that users are responsible for reviewing and accepting suggestions generated by GitHub Copilot.
Link(s):
https://thehackernews.com/2025/03/new-rules-file-backdoor-attack-lets.html

https://www.pillar.security/blog/new-vulnerability-in-github-copilot-and-cursor-how-hackers-can-weaponize-code-agents