For the first time, Proofpoint is publishing the AI and Human Risk Landscape report, a global study of how AI is reshaping the collaboration security threat landscape.
Based on survey responses from over 1,400 security professionals across 12 countries, this inaugural report quantifies something the industry has been feeling but hasn’t been able to measure until now: AI is transforming how organisations collaborate, and security hasn’t kept pace with the risks that come with it.
Here's a preview of what the data reveals.
AI is in production; security is still catching up
The speed of AI adoption is striking: 87% of organisations already have AI assistants deployed beyond the pilot phase, and 76% are actively rolling out autonomous agents. But only 48% say security was embedded in their AI strategy from the start. The rest describe their security posture as catching up, inconsistent or reactive.
Organisations aren't underinvesting. More than 90% have AI security funding in place. The issue is that many existing controls were built for a pre-AI threat model. The report explores what that gap looks like in practice and why budget alone isn't closing it.
Controls are deployed; confidence is not
Perhaps the most telling finding is that 63% of organisations have AI security controls in place, yet 52% aren’t fully confident those controls would detect a compromised AI assistant or agent. And among organisations that report having controls, half have still experienced a suspicious or confirmed AI-related incident.
The report examines why this confidence gap exists, from training and visibility shortfalls to the operational barriers that prevent controls from working across collaboration channels.
Threats don't stay in one channel
Among organisations that have experienced an AI-related incident, threats are showing up everywhere, not just in email: 67% report threat activity in email, but 57% also see it in SaaS or cloud apps, 53% in AI assistants or agents and 49% across collaboration tools, social platforms and file sharing.
The report includes a detailed look at how these channel-level patterns compare between the full survey population and the incident-experienced subgroup, with findings that challenge assumptions about where AI-related risk is concentrated.
Investigations break down when tools cannot keep up
Only about one-third of organisations say they’re fully prepared to investigate an AI- or agent-related incident. The reason is structural: 94% say managing multiple security tools is at least moderately challenging, and 41% cannot correlate threats across channels at all.
When AI-related threats move across collaboration channels at machine speed, fragmented tool stacks cannot reconstruct what happened. The report connects this investigation-readiness gap to the consolidation trend already underway, with 53% of organisations planning to move to a unified platform in the next 12 months.
What else is in the report?
The full 2026 AI and Human Risk Landscape report goes well beyond the highlights shared here. It includes:
- Regional comparisons across 12 countries, with notable variation in threat exposure and AI adoption maturity
- Real-world incident case studies showing how AI-related attacks move across collaboration channels
- Proofpoint threat intelligence on OAuth consent abuse, AI-built phishing infrastructure and prompt injection in the wild
- A framework for understanding why AI security and collaboration security solve different problems and why organisations need both
Read the full report
AI adoption isn’t slowing down, and the collaboration security challenges it creates are compounding. The 2026 AI and Human Risk Landscape report provides security leaders with the data to benchmark where their organisation stands, identify the gaps that matter most and make the internal case for necessary changes. Download the full report now.