Recent AI security incidents, including the Anthropic leak and Mercor AI supply chain attack, show that the biggest risks come from human error, insecure integrations, and software supply chain exposure.
AI security incidents are real and rising
The Anthropic leak and Mercor attack in April 2026 are early signals of a shift in enterprise risk. AI security failures are no longer theoretical. They are happening now, and they are exposing sensitive data, internal systems, and proprietary technology.
In one case, a supply chain attack exposed customer data through a compromised dependency. In another, human error led to the public exposure of source code tied to an AI system. Although these are quite different failures, they point to the same fundamental security truth.
AI security risk is not just about models. It is about how people interact with AI systems, how data moves through them, and how those systems connect to broader enterprise environments.
What happened in the Anthropic leak and Mercor AI supply chain attack
The Anthropic and Mercor incidents highlight how quickly AI-related risks can materialize in enterprise environments. While they had different root causes, both incidents exposed sensitive systems and reinforced a common issue: organizations often lack visibility and control over how AI systems are built, integrated, and used.
Anthropic leak
The Anthropic leak was actually two separate security lapses that occurred within days of each other. One involved publicly accessible internal files. The other exposed source code for Anthropic's Claude Code AI assistant due to a release packaging error.
- Cause: Release packaging error during deployment
- Exposure: Leaked source code and internal files
- Risk: Long-term visibility into system design and workflows
The issue stemmed not from a sophisticated attack, but from human error in how code was prepared and released.
Once source code is exposed, it cannot be contained or revoked. This creates lasting risk. Attackers can analyze the code, understand how the system works, and identify potential weaknesses in how data is processed or retained.
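Prevention here is largely mechanical. As a rough illustration (not a description of Anthropic's actual release pipeline), a CI step can inspect a built Python package before publication and block the release if it contains anything that should never ship. The archive handling and denylist patterns below are assumptions made for this sketch.

```python
import sys
import tarfile
import zipfile
from pathlib import Path

# Path fragments that should never appear in a public release artifact.
# Illustrative assumptions only, not a complete policy.
DENYLIST = (".env", "secrets", "internal/", ".pem", "id_rsa", "credentials")


def list_archive_members(artifact: Path) -> list[str]:
    """Return the file names inside a built wheel (.whl) or sdist (.tar.gz)."""
    if artifact.suffix == ".whl":
        with zipfile.ZipFile(artifact) as zf:
            return zf.namelist()
    with tarfile.open(artifact, "r:gz") as tf:
        return tf.getnames()


def check_artifact(artifact: Path) -> list[str]:
    """Return archive members that match any denylisted pattern."""
    return [
        name
        for name in list_archive_members(artifact)
        if any(pattern in name.lower() for pattern in DENYLIST)
    ]


if __name__ == "__main__":
    # Usage: python check_release.py dist/*.whl dist/*.tar.gz
    violations = {path: check_artifact(Path(path)) for path in sys.argv[1:]}
    violations = {path: hits for path, hits in violations.items() if hits}
    if violations:
        print("Release blocked; unexpected files found:", violations)
        sys.exit(1)
    print("No denylisted files found in release artifacts.")
```

A check like this only catches known-bad patterns; comparing the full artifact manifest against an allowlist of intended files is stricter.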
Mercor AI supply chain attack
The Mercor AI attack was a supply chain incident caused by a compromised version of LiteLLM, an open source library widely used to connect applications to AI services.
- Cause: Malicious code embedded in LiteLLM within open source repositories
- Exposure: Data flowing through AI systems, including customer data
- Risk: Compromise of software supply chains and shared dependencies, with credential-stealing malware injected to harvest API keys and other data
This was a classic supply chain attack adapted for modern AI environments. Most large-scale software built today relies on legitimate open source libraries and packages, as Mercor relied on LiteLLM. Attackers (in this case, Team PCP with the Lapsus$ hacking group) can achieve far more pervasive and persistent impact by exploiting such shared dependencies across the ecosystem.
AI systems rely on interconnected tools, APIs, and AI agents. Thus, when a component like LiteLLM is compromised, organizations inherit that risk automatically.
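One practical control, sketched below in Python, is to pin the dependency versions the team has reviewed and fail fast when what is actually installed drifts from that list. The package names and version numbers here are placeholders for illustration, not a statement about which LiteLLM releases were affected.

```python
from importlib.metadata import PackageNotFoundError, version

# Versions the team has reviewed and pinned (placeholder values).
PINNED = {
    "litellm": "1.0.0",
    "requests": "2.31.0",
}


def verify_pinned_dependencies(pinned: dict[str, str]) -> list[str]:
    """Return human-readable problems: missing packages or version drift."""
    problems = []
    for name, expected in pinned.items():
        try:
            installed = version(name)
        except PackageNotFoundError:
            problems.append(f"{name}: not installed")
            continue
        if installed != expected:
            problems.append(f"{name}: expected {expected}, found {installed}")
    return problems


if __name__ == "__main__":
    issues = verify_pinned_dependencies(PINNED)
    if issues:
        raise SystemExit("Dependency drift detected:\n" + "\n".join(issues))
    print("All pinned dependencies match the reviewed versions.")
```

Version pinning alone will not catch a tampered artifact published under the same version number, so it pairs naturally with hash verification (for example, pip's --require-hashes mode) and review of dependency changes.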
Why these incidents matter for enterprises
These incidents reflect a broader shift in where security gaps emerge in AI ecosystems.
- Risk is introduced through software supply chains, not just vulnerabilities in deployed AI systems
- Exposure often stems from human error by authorized users, not advanced exploits or unsanctioned access
- AI systems increase the speed and scale at which data can be exposed
The takeaway for enterprises is clear: AI security is not just about protecting models. It is about understanding how data, dependencies, and people interact across the entire environment.
The pattern behind both incidents: AI security is a human and process problem
These incidents were not driven by advanced attacks on AI models. They were caused by familiar issues that now have greater impact in AI environments.
- Third-party dependency risk across software supply chains
- Human error in how systems are configured and released
- Limited visibility into how AI systems, data, and workflows interact
This reflects a broader trend: 82% of breaches involve human factors such as error, misdelivery, or misuse. That pattern is now extending into AI systems, demonstrating that AI does not eliminate human risk—it amplifies it.
Key takeaways
- AI systems increase the speed and scale of data exposure
- Humans remain the primary attack surface, even in AI security incidents
- Small mistakes and existing security gaps can have far-reaching consequences when magnified by AI adoption
Key AI security risks enterprises must address now
- AI supply chain vulnerabilities: Dependencies like LiteLLM introduce hidden risk across software supply chains. You may not control them, but they still affect your security posture.
- Data leakage and overexposure: AI tools process sensitive data. Without controls, that data can be exposed through prompts, integrations, or misconfigurations, including interactions with AI agents.
- Insider and human-driven risk: Employees interact directly with AI systems. Misuse, mistakes, or lack of awareness can lead to exposure.
- Persistent exposure risk: Once data or code is leaked, it cannot be taken back. The risk becomes permanent.
- Email and collaboration-based threats: Many interactions and workflows start in email or collaboration platforms. These channels remain primary entry points for attackers to access people and systems.
Why these risks are amplified in Microsoft 365 environments
Most enterprises operate in Microsoft 365 or cloud-forward environments. These ecosystems:
- Connect email, files, and collaboration tools
- Integrate with AI assistants and third-party apps
- Enable fast sharing of data across users
This creates efficiency, but also risk. Sensitive data can move quickly between users and AI tools. Without platform and ecosystem visibility, organizations cannot track how that data is used or exposed.
How to reduce AI security risk in the enterprise
- Reduce human-driven risk. Security awareness is not enough. Organizations need visibility into user behavior and risky actions to enforce acceptable use of data and AI systems.
- Protect sensitive data across AI workflows. Reconsider whether your data loss prevention controls adequately monitor and restrict how data is shared with AI systems (a simplified sketch follows this list).
- Secure email and collaboration channels. Email remains a primary attack vector. Securing it reduces the risk of compromised users and AI-related exposure downstream.
- Gain visibility into AI interactions. Understand who is using AI tools, what data is being shared across all AI tools, and where risks exist.
- Focus on prevention, not response. Once data is exposed, the exposure cannot be undone. Preventing exposure is more effective than trying to contain it later.
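To make the data protection and visibility points above concrete, here is a deliberately simplified sketch that screens outbound prompt text for obvious secrets and logs who sent what before anything reaches an AI service. The patterns, the send_to_model callback, and the log format are assumptions for this example; real data loss prevention and monitoring controls are far broader.

```python
import hashlib
import logging
import re
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-usage")

# Rough patterns for data that should not leave the organization in a prompt.
# Illustrative only; real DLP policies cover far more categories.
SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key_like": re.compile(r"\b(sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def screen_and_send(user: str, prompt: str, send_to_model: Callable[[str], str]) -> str:
    """Block prompts containing obvious sensitive data; log every interaction."""
    findings = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]
    # Log a digest rather than the raw prompt so the audit trail itself
    # does not become another copy of sensitive data.
    digest = hashlib.sha256(prompt.encode()).hexdigest()[:16]
    log.info("user=%s prompt_sha256=%s findings=%s", user, digest, findings or "none")
    if findings:
        raise ValueError(f"Prompt blocked: possible sensitive data ({', '.join(findings)})")
    return send_to_model(prompt)


if __name__ == "__main__":
    def echo_model(text: str) -> str:
        # Stand-in for a real AI client; replace with your provider's SDK call.
        return f"model response to: {text[:40]}"

    print(screen_and_send("analyst@example.com", "Summarize this quarter's plan", echo_model))
```

Pattern matching of this kind is only a first line of defense; it reduces accidental exposure but does not replace purpose-built data security tooling.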
Why prevention matters more in AI security
Both incidents illustrate why detection alone is not enough. By the time each issue was identified, the damage was already done:
- Mercor: data was already exposed through a supply chain compromise
- Anthropic: source code was already public and could not be retrieved
Detection helps you understand what happened. Prevention helps ensure it does not happen at all. In AI environments, prevention is critical because exposure is often permanent.
How Proofpoint helps reduce AI-era security risk
AI security requires more than protecting systems. It requires protecting how people interact with AI systems, data, and workflows.
Proofpoint takes a human-centric approach by helping organizations:
- Identify and reduce user-driven risk by understanding intent
- Protect sensitive data across email, cloud apps, endpoint, and on-premises data stores
- Gain visibility into user and AI behavior that leads to exposure
This approach focuses on stopping high-risk interactions before they lead to data loss or compromise, not just responding after the fact.
Explore Proofpoint AI Security to secure your AI ecosystem for more confident adoption.
Explore Proofpoint Data Security and Governance to secure your data for AI-readiness.