Current Cyber Threats

Reprompt Attack Hijacked Microsoft Copilot Sessions for Data Theft

Summary:
Varonis Threat Labs has disclosed a critical AI vulnerability dubbed "Reprompt," which allows threat actors to perform silent, one-click data exfiltration from Microsoft Copilot (Personal). The attack leverages the q URL parameter to inject malicious instructions (Parameter 2 Prompt or P2P injection) into a Copilot session. By using a "double-request" technique, attackers can bypass Copilot’s safety guardrails, which typically scrub sensitive data only during the initial request. Once the victim clicks a malicious link, the attacker can establish a "chain-request" flow where the AI continuously fetches new instructions from an attacker-controlled server. This allows the attacker to silently harvest personal information, such as user location, search history, and file summaries, even after the user has closed the Copilot tab. While Microsoft has patched this specific flaw, the research highlights a fundamental shift in how "one-click" compromises can be executed against AI assistants.


Security Officer Comments:
This research underscores a significant evolution in the social engineering and web-based threat landscape. Historically, a "malicious link" was designed to deliver malware or steal credentials via a spoofed login page. With Reprompt, the link leverages the trusted relationship and active session a user has with a legitimate AI service. For organizations in critical infrastructure or high-value sectors, this is particularly concerning because traditional client-side security tools (like EDR or standard web filters) may not flag these interactions; the traffic appears as legitimate communication between the user’s browser and a trusted Microsoft domain.

The impact of such an attack is multifaceted. Beyond the immediate loss of personal data, an attacker could use this method to perform internal reconnaissance by asking the AI to summarize recent documents or meetings, effectively using the AI as an "insider threat" tool. While Microsoft 365 Copilot (Enterprise) was reportedly not affected by this specific exploit, the blurring lines between personal and professional accounts, especially in "Bring Your Own AI" (BYOAI) environments, means that a compromise of a user's personal Copilot could still expose sensitive professional context or facilitate targeted spear-phishing based on the stolen data.


Suggested Corrections:


For vendors:

  • Treat URL and external inputs as untrusted: Apply validation and safety controls to all externally supplied input, including deep links and pre-filled prompts, throughout the entire execution flow.
  • Protect against prompt chaining: Ensure safeguards persist across repeated actions, follow-up requests, and regenerated outputs, not just the initial prompt.
  • Design for insider‑level risk: Assume AI assistants operate with trusted context and access. Enforce least privilege, auditing, and anomaly detection accordingly.

For Copilot Personal users:
  • Be cautious with links: Only click on links from trusted sources, especially if they open AI tools or pre-fill prompts for you.
  • Check for unusual behavior: If an AI tool suddenly asks for personal information or behaves unexpectedly, close the session and report it.
  • Review pre-filled prompts: Before running any prompt that appears automatically, take a moment to read it and ensure it looks safe.

Link(s):
https://www.bleepingcomputer.com/ne...et-hackers-hijack-microsoft-copilot-sessions/
https://www.varonis.com/blog/reprompt