It’s one thing when an employee clicks a phishing link. It’s another when your AI assistant executes a data exfiltration payload without any user interaction. That is the reality behind EchoLeak (CVE-2025-32711), a critical zero-click vulnerability in Microsoft 365 Copilot. It allows attackers to exploit how the assistant processes seemingly benign content, turning your AI into an unintentional insider threat.
How EchoLeak Works
The attack leverages a flaw in Copilot’s Retrieval-Augmented Generation (RAG) engine. Threat actors send an email containing hidden markdown prompts. These prompts bypass existing Microsoft classifiers, like Cross-Platform Injection Attack (XPIA), and are never seen by the user.
When the user later interacts with Copilot, the assistant fetches this hidden content as contextual data. It then executes embedded instructions that can exfiltrate sensitive information via image loads or external URL calls. No clicking. No approvals. Just invisible exploitation.
The Impact
Copilot’s scope includes email threads, SharePoint files, OneDrive data, Teams messages, and more. EchoLeak gave attackers indirect access to all of it.
This exploit is particularly dangerous because it breaks the traditional model of user-driven compromise. Security controls that depend on detecting user behavior, clicks, or endpoint activity are rendered ineffective. Microsoft patched the vulnerability server-side in May 2025, but the architectural risk remains.
Immediate Remediation Steps
Even though the vulnerability has been patched, remediation should not stop at waiting for vendor updates. Organizations must rethink how they manage and secure AI-powered tools. Here are critical actions to take:
- Audit Copilot Permissions
Review all the systems Copilot has access to. Limit its scope to only the data that is absolutely required for functionality. - Restrict Data Ingestion Sources
Configure RAG components to limit the ingestion of untrusted sources such as shared inboxes, external messages, or loosely permissioned SharePoint sites. - Filter Prompt Input and Output
Apply security controls that sanitize markdown content and remove embedded image or link references before they reach the assistant. - Harden Exfiltration Paths
Monitor outbound HTTP and DNS activity originating from systems interacting with Copilot. Identify and block any unexpected destinations, particularly those linked to automated responses. - Simulate Prompt-Based Attacks
Test your environment with red team scenarios that include LLM prompt injection and scope violations. Identify weaknesses before adversaries do.
How nGuard Helps
nGuard delivers a portfolio of services that directly align with both the prevention and remediation of LLM threats like EchoLeak.
- Security Assessments and Penetration Testing
nGuard identifies vulnerabilities in systems integrated with AI technologies like Microsoft Copilot. Our assessments focus on uncovering insecure data exposures, misconfigurations, and excessive permissions that increase risk. By analyzing access paths and system behaviors, we help ensure enterprise environments are resilient against AI-driven threats. - Vulnerability Management
Our team continuously monitors for emerging threats like EchoLeak and ensures that critical patches and configuration changes are implemented quickly. - Managed SIEM
nGuard provides comprehensive log collection and analysis across Microsoft 365 and cloud environments. By monitoring outbound network activity, authentication events, and data access patterns, we help organizations detect early signs of suspicious behavior that could signal AI-related exploitation or unauthorized data exposure.
Final Thoughts
EchoLeak represents more than just a critical bug. It is a turning point in how security teams must think about AI. When autonomous systems begin interacting with business-critical data, traditional perimeter defenses fall short.
Enterprises need visibility into how AI agents behave, what they can access, and how they can be manipulated. With services that span offensive testing, defensive monitoring, and long-term AI governance, nGuard helps you move beyond reactive security and into proactive control of your AI ecosystem.