What Is Shadow AI?

Shadow AI is the use of AI tools and applications within an organisation without the knowledge or authorisation of the IT department. Employees adopt these tools on their own to increase efficiency and tackle urgent tasks. The result is a shadow AI ecosystem operating beyond security teams' visibility, one that can be far more dangerous than traditional shadow IT.

While shadow IT typically involves personal cloud storage or unapproved messaging applications, shadow AI bypasses direct control by feeding sensitive organisational data into external AI models, which can produce highly unpredictable and potentially damaging results.

A 2025 report from Menlo Security, which tracked hundreds of thousands of user inputs over a month, highlighted the risks of unregulated AI use by employees. It found that 68% of employees accessed free AI tools like ChatGPT through personal accounts, with 57% of them inputting sensitive data. It also showed how AI systems adapt to user inputs and can leak sensitive organisational data to the public.

Shadow AI use was amplified when generative AI went mainstream in late 2022. The technology is highly accessible: most tools are browser-based and cloud-hosted, and employees already use AI-integrated SaaS products daily. Meanwhile, the pressure on employees to produce results rapidly keeps increasing.

People find their own solutions when official AI adoption moves slowly through approval channels, and the gap between what employees need and what IT provides continues to widen.

Why Does Shadow AI Matter in 2025?

Shadow AI creates blind spots that even the most sophisticated security systems cannot resolve. CISOs face a growing, unmonitored, and unprotected attack surface. According to IBM, one in five organisational breaches is attributed to shadow AI, and breaches at organisations with high levels of shadow AI cost an average of $670,000 more than those at organisations with low levels or none.

Engineers lose the ability to control system vulnerabilities when required AI workflow documentation goes unfinished, and untested, unvalidated systems gain traction in production. Compliance gaps become more problematic for policy teams as employees disregard existing governance frameworks. The explosion of investment in generative and agentic AI drives this challenge.

According to Menlo’s report, AI websites saw a 50% increase in web traffic from February 2024 to January 2025, totalling 10.53 billion monthly visits. A “bring your own AI” culture is in full bloom, with more than 60% of users relying on personal, unmanaged AI tools rather than enterprise-approved ones. Patrick Pushor, Senior Solutions Architect at Acuvity, recently posted, “Nearly half of organisations expect a shadow AI incident in the next 12 months,” a stark finding from the company’s 2025 State of AI Security Report. Yet high-level oversight has never been more disconnected from employee adoption of AI.

Key Characteristics and Causes of Shadow AI

Shadow AI is evident in public large language models (LLMs) such as ChatGPT and Claude, but it can also hide in plain sight: AI features embedded within sanctioned SaaS tools can activate without IT ever becoming aware. Marketing teams use AI copy assistants to compose entire campaigns. Sales representatives use AI tools to automate customer reply emails. Finance analysts paste data into chatbots to create summaries. The phenomenon is pervasive, crossing every department and role, not just technical teams.

The causes boil down to pressure and access. Employees operate under pressure to achieve more, rapidly, with fewer resources. When internal processes for AI approval crawl at a glacial pace or provide no response at all, people find their own means. And most generative AI tools are easy to access: all you need is an email address and a web browser.

Additionally, many organisations lack clear policies on AI tools: what is acceptable, what is not, and what is sanctioned. Based on S&P Global’s research, “Just over one-third (36%) have a dedicated AI policy or an AI policy integrated into other governance policies.” The gap between what employees need and what the governance system provides is an ever-widening chasm.

Risks Related to Shadow AI

Shadow AI creates a unique and complicated threat across every pillar of enterprise security. The risks range from immediate problems, such as data breaches and exposure, to longer-lasting compliance and operational issues.

Data Leakage and Exposure

Employees copy and paste sensitive, proprietary data into unapproved AI tools, exposing that data and opening the door to a security breach. Some 77% of employees have been observed sharing sensitive, proprietary information with tools like ChatGPT. When breach incidents involving shadow AI occur, IBM’s findings revealed that 65% involve compromised personally identifiable information and 40% involve exposed intellectual property.

Compliance and Regulatory Violations

Unregulated AI use creates gaps in data governance, putting an organisation at risk of violating data privacy laws, such as GDPR, as well as industry standards. Most organisations do not control the flow of sensitive data or the tools employees use. This blind spot makes it nearly impossible to demonstrate compliance in an audit or respond to data subject requests.

Model Outputs and Decision Integrity

Unregulated AI tools can produce unconstrained, biased, untrue, or outright fabricated outputs, which employees then funnel into their decision-making. Such errors can compromise core operations at the enterprise level, and unregulated flows allow flawed and erroneous data to circulate through operations without surfacing for a long time.

Cost and Operational Risk

As mentioned above, shadow AI incidents add an average of $670,000 to the cost of a data breach. Organisations unknowingly host an average of 1,200 unofficial applications, which create duplicate spending, fragmented workflows, and a widened attack surface. These governance and control gaps, or “shadow areas”, grow larger and more complex to manage over time.

Security and Access Control Risks

Shadow AI introduces unmonitored identity and access risks that leave organisations vulnerable to security threats. The average organisation has 15,000 ghost users, and these unnecessary, dormant credentials pose a threat when accessed by unauthorised tools. Employees sharing passwords with AI assistants create backdoors, and the average time to remediate is 94 days.

Frameworks and Best Practices for Managing Shadow AI

Managing shadow AI requires more than detection: it demands a well-planned, aligned approach that balances security and productivity. The best strategies give visibility into AI use, establish clear governance frameworks, provide the right tools, and back them with actionable policies. The key to success is treating AI governance as a continuous process rather than a one-off project.

Discovery and inventory of AI tools in use. You can’t govern what you can’t see. Use network monitoring, secure web gateways, and CASB tools to identify AI domains and submitted prompts across your network. Ask teams which tools they use and what data those tools access. Create a living inventory that maps each tool to business units, data elements, and specific use cases, as in the sketch below.
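
As an illustration of what that inventory pass can look like, the sketch below matches proxy-log entries against a small list of AI domains. The log format, column names, and domain list are assumptions for this example, not the output of any particular gateway product.

```python
import csv
from collections import defaultdict

# Hypothetical list of AI service domains to match against; a real
# deployment would pull a maintained category feed from its gateway or CASB.
AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com", "perplexity.ai"}

def build_ai_inventory(proxy_log_path: str) -> dict:
    """Group proxy-log hits to known AI domains by user.

    Assumes a CSV export with 'user', 'host', and 'bytes_out' columns;
    adjust the field names to whatever your gateway actually emits.
    """
    inventory = defaultdict(lambda: defaultdict(int))
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["host"].lower()
            if host in AI_DOMAINS:
                # Track upload volume per user per AI service.
                inventory[row["user"]][host] += int(row["bytes_out"])
    return inventory

if __name__ == "__main__":
    for user, services in build_ai_inventory("proxy_export.csv").items():
        for host, bytes_out in services.items():
            print(f"{user} -> {host}: {bytes_out} bytes uploaded")
```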

Risk assessment and prioritisation. Not all shadow AI poses the same risk. Classify the tools you’ve mapped as unacceptable, high risk, moderate risk, or low risk, weighing data sensitivity, regulatory exposure, and business impact. A risk heat map shows which cases fall into the danger zone and deserve immediate attention, letting you direct your efforts to the most critical threats first.
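
One way to make that triage repeatable is a simple weighted score over the three factors above. The weights and tier cut-offs here are illustrative assumptions rather than an established standard.

```python
from dataclasses import dataclass

@dataclass
class AIToolRisk:
    name: str
    data_sensitivity: int     # 0 = public data only, 3 = regulated/PII
    regulatory_exposure: int  # 0 = none, 3 = GDPR/HIPAA scope
    business_impact: int      # 0 = negligible, 3 = core operations

    def score(self) -> int:
        # Data sensitivity weighted highest, per the triage criteria above.
        return 2 * self.data_sensitivity + self.regulatory_exposure + self.business_impact

    def tier(self) -> str:
        s = self.score()
        if s >= 9:
            return "unacceptable"
        if s >= 6:
            return "high"
        if s >= 3:
            return "moderate"
        return "low"

tools = [
    AIToolRisk("finance-chatbot-summaries", 3, 3, 2),  # hypothetical use cases
    AIToolRisk("marketing-copy-assistant", 1, 0, 1),
]
for t in sorted(tools, key=AIToolRisk.score, reverse=True):
    print(f"{t.name}: {t.tier()} (score {t.score()})")
```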

Policy development and enforcement. Define approved AI tools, procedures for requesting new ones, and guidelines for handling sensitive data. Implement technical guardrails where the actual work happens: control access with proxy allow and deny lists, use data loss prevention tools to block uploads to unapproved tools, and route users to approved models as a secure alternative. Policies should promote innovation, not just restrict it.
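
A minimal sketch of that allow/deny decision logic, assuming a hypothetical in-line policy hook; real gateways and DLP products expose their own policy languages, so this only illustrates the routing idea.

```python
# Illustrative host lists; a real deployment would manage these in the
# gateway or CASB console rather than in code.
ALLOWED_AI_HOSTS = {"copilot.company-tenant.example"}  # sanctioned enterprise AI
DENIED_AI_HOSTS = {"chat.openai.com", "claude.ai"}     # consumer endpoints to block

def route_request(host: str, contains_sensitive_data: bool) -> str:
    """Return a policy action for an outbound request to `host`."""
    if host in DENIED_AI_HOSTS:
        # Block sensitive uploads outright; otherwise steer the user to the
        # approved alternative instead of a dead end, so enforcement doesn't
        # simply drive usage further underground.
        return "block" if contains_sensitive_data else "redirect-to-approved"
    if host in ALLOWED_AI_HOSTS:
        return "allow"
    return "default-policy"  # non-AI traffic falls through to existing rules

print(route_request("chat.openai.com", contains_sensitive_data=True))  # block
```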

Training and cultural change. Shadow AI often stems from benign ignorance rather than malice. Protect your organisation with role-specific training built on real, work-related scenarios. Encourage teams to share which AI tools they use, and allow them to do so without fear of reprisal. When leaders consistently model the desired behaviour, policy compliance follows far more readily.

Cybersecurity integration. AI governance should interface with your existing data protection, access control, and insider risk programmes, with assigned cross-functional ownership spanning IT, security, legal, finance, and HR. Monitor AI usage in real time and flag large data transfers to AI tools as anomalous. Routine audits surface usage patterns that help refine governance and reveal gaps in your sanctioned offerings.
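
For the monitoring piece, a useful first-pass control is a plain volume threshold on uploads to AI endpoints, refined later with per-user baselines. The threshold and record shape below are assumptions for the sketch.

```python
UPLOAD_ALERT_BYTES = 5 * 1024 * 1024  # assumed 5 MB per-session alert threshold

def flag_large_ai_transfers(sessions):
    """Yield sessions whose upload volume to an AI host exceeds the threshold.

    Each session is assumed to be a dict with 'user', 'host', 'is_ai_host',
    and 'bytes_out' keys, e.g. joined from proxy logs and the AI inventory.
    """
    for s in sessions:
        if s["is_ai_host"] and s["bytes_out"] > UPLOAD_ALERT_BYTES:
            yield s

alerts = flag_large_ai_transfers([
    {"user": "a.lee", "host": "claude.ai", "is_ai_host": True, "bytes_out": 48_000_000},
])
for a in alerts:
    print(f"anomalous upload: {a['user']} -> {a['host']} ({a['bytes_out']} bytes)")
```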

Who Should Be Accountable for Shadow AI Governance?

No single team can own shadow AI governance. Its risks cross the boundaries of security, compliance, infrastructure, and business operations. Successful organisations deliberately distribute these responsibilities and foster cross-department collaboration.

The risk oversight function is assigned to the security and CISO teams. They spot potential threats, assess the level of exposure, and identify which uses of AI create unacceptable vulnerabilities. They must monitor unauthorised AI use and work with other stakeholders to balance business needs against incident risk. Governance works best when security leaders act as high-level conduits, not roadblocks.

The architecture and technical implementation are the responsibility of the IT and engineering teams. They vet AI tools for security flaws and build secure AI offerings that employees find useful and want to use. They are responsible for AI registries and marketplaces, and for enforcing AI usage and access policies at the network and application layers.

Compliance and legal teams ensure AI governance stays aligned with GDPR, industry standards, and contractual obligations. They evaluate data handling practices, oversee vendor agreements, facilitate organisational audits, and translate regulatory requirements into practical, actionable policies. When employees understand why policies exist, and have high-quality authorised tools to work with, compliance becomes willing rather than coerced. Leaders should exemplify this behaviour themselves by using the approved tools.

This leadership-driven compliance culture requires structured governance to sustain momentum, which makes cross-functional feedback channels across the relevant disciplines crucial. Teams should meet regularly to review policies, evaluate newly adopted tools, and mitigate risks that have emerged since the last review. Collaborative governance keeps teams focused on what matters most, preventing both security gaps and unnecessary red tape.

Challenges in Implementing Shadow AI Governance

Even organisations firmly invested in AI governance face significant hurdles. Policy cycles take too long given the speed at which the landscape moves, and most IT controls are too generic to address AI-specific risks.

  • Rapid evolution of AI tools. New tools appear weekly, and existing platforms add AI features without prior notice. Before the security team finishes evaluating one tool, employees have adopted three more unvetted ones and have no incentive to stop using them.
  • Policy defiance. Employees treat governance as a bottleneck and abandon it when approval processes take weeks while free AI tools are instantly accessible. When officially provided tools lack the capabilities and convenience of consumer tools, shadow adoption becomes inevitable.
  • Difficulty monitoring which tools are being used. AI functions differently from traditional SaaS: it can be embedded in approved platforms or run locally without any network signature. Most organisations lack the technical means to track AI activity, leaving security teams flying blind.
  • The need for more AI-specific policies. Most organisations lack policies that address AI-specific risks such as model outputs, prompt injection, or training on submitted data. General IT policies fail to capture the unique compliance and ethical concerns that AI introduces. Policies that are too restrictive drive shadow adoption; policies that are too lenient leave the organisation exposed.

Shadow AI Detection Tools

Detecting shadow AI requires multiple layers: tools that analyse network traffic, applications, user activity, and data movement. No single method can detect everything. The most successful approaches combine techniques to build a complete picture of how and where employees adopt and use AI tools.

AI-Use Discovery Platforms

These tools specialise in maintaining inventories of the AI applications employees use, uncovering how widely such tools have spread across enterprise environments. Using machine learning, they detect established AI services such as ChatGPT, Gemini, and Claude, as well as newly launched ones that traditional discovery tools miss. Discovery platforms examine financial transactions and reports, browser usage, and network traffic to gauge the scope of AI use.

Cloud Access Security Brokers (CASB)

CASBs extend traditional SaaS application monitoring with AI governance features. They provide real-time oversight of employees’ AI service usage and can apply contextual, AI-aware policies. The latest CASB solutions use deep packet inspection to deliver AI-driven risk assessment of the data handled and of AI traffic entering and leaving the enterprise.

Data Loss Prevention (DLP) Systems

AI-aware DLP tools automatically recognise when employees share sensitive information with unauthorised AI tools. They scan content at the prompt level, covering typed inputs, file uploads, and clipboard pastes, and monitor the major chatbots, including ChatGPT, Copilot, Claude, Gemini, and Perplexity. Advanced DLP solutions use context-aware analysis to further reduce false positives and explain the rationale behind flagged data movement.
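
As a toy version of prompt-level scanning, the patterns below catch a few obvious identifiers. Production DLP engines rely on far richer classifiers, validation such as Luhn checks, and the context-aware analysis described above; these regexes are illustrative only.

```python
import re

# Illustrative detectors only; real DLP engines combine many more patterns
# with validation and context to keep false positives down.
DETECTORS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the categories of sensitive data found in a prompt."""
    return [name for name, rx in DETECTORS.items() if rx.search(text)]

print(scan_prompt("Summarise this: card 4111 1111 1111 1111, contact jo@example.com"))
# -> ['email', 'credit_card']
```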

Identity and Access Management (IAM) Monitoring

IAM systems monitor credential sharing and usage patterns to reveal when employees authenticate to AI services and share documents with them. Proactive IAM tracks submissions at the file level, manages access controls, and reports on dormant accounts. SIEM integration correlates identity data with AI usage, enabling real-time alerts when significant document exports or data leaks cross dynamically tracked usage thresholds.
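
The dormant-account report is one of the simpler IAM checks to script. The 90-day cut-off and record fields below are assumptions; feed the real data from your directory or identity provider export.

```python
from datetime import datetime, timedelta, timezone

DORMANCY_CUTOFF = timedelta(days=90)  # assumed policy threshold

def dormant_accounts(accounts, now=None):
    """Return users whose last login predates the dormancy cut-off.

    Each account is assumed to be a dict with 'user' and 'last_login'
    (a timezone-aware datetime) keys.
    """
    now = now or datetime.now(timezone.utc)
    return [a["user"] for a in accounts if now - a["last_login"] > DORMANCY_CUTOFF]

print(dormant_accounts([
    {"user": "ghost.user", "last_login": datetime(2024, 1, 5, tzinfo=timezone.utc)},
]))
```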

Behavioural Analytics Platforms

These systems use machine learning to establish baselines for normal AI usage and identify anomalies that indicate potential policy violations. They assign behavioural risk scores to users based on their AI interaction patterns and can predict which employees are likely to misuse AI tools with sensitive data. Advanced platforms perform psychological profiling to understand not just what users do with AI but why they do it. Behavioural analytics become more accurate over time as they continuously learn from new patterns.
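
A rough sketch of the baseline idea: model each user's daily AI-upload volume and flag days that sit far outside their own history. The z-score cut-off of 3 is an illustrative assumption, and real platforms learn far richer features than a single volume metric.

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], today: float, z_cutoff: float = 3.0) -> bool:
    """Flag today's AI-upload volume when it deviates strongly from the
    user's own baseline (a simple z-score over recent history)."""
    if len(history) < 2:
        return False  # not enough data to form a baseline yet
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > z_cutoff

# A user who normally uploads ~1 MB a day suddenly pushes 60 MB:
print(is_anomalous([1.1, 0.9, 1.3, 1.0, 0.8], 60.0))  # True
```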

Network Traffic Analysis Tools

Even when applications encrypt their communications, network monitoring solutions can still identify AI usage by recognising patterns in the traffic. The tools pinpoint AI application fingerprints, monitor bandwidth usage, and detect abnormal volumes of data transfer that suggest tools are being misused to exfiltrate data. Zero-trust network access platforms apply verification policies to every AI interaction, regardless of user location or device. This approach prevents data exfiltration through AI channels while maintaining user experience.

FAQs

How can I tell if my company has a shadow AI problem?

Look for patterns such as unusual data access, unexplained increases in cloud traffic, and AI outputs produced outside sanctioned tools. Because shadow AI operates behind encrypted browser traffic, traditional security tools may not see it, and specialised detection tools may be needed to understand the landscape. The likelihood of shadow adoption increases if your company has no formal AI policies or if the IT approval process is slow and bureaucratic.

What are questions a CISO should ask about shadow AI?

Start with visibility: which AI tools do employees access, and what data do they input into them? Then move to governance: do we have unambiguous policies on acceptable AI use, oversight for adopting new tools, and the technical means to enforce those policies? Finally, ask about risk: can we account for AI activity in a compliance audit, and have we assessed the potential impact of a shadow AI breach?

What is the first step to controlling shadow AI?

The first step to controlling shadow AI is visibility, gained through discovery; only then can you enforce policies effectively. Monitor AI-related network traffic and run a shadow AI inventory exercise to establish which AI tools employees use. Review authentication records for “Sign in with Google” and SSO integrations with services the organisation does not recognise, as in the sketch below. Inventorying shadow AI takes time, but it is the crucial first step.
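
Those authentication records are usually available as an export from your identity provider. A minimal pass over such an export might look like the following; the file format, column name, and sanctioned-app list are assumptions for the sketch.

```python
import csv

SANCTIONED_APPS = {"Microsoft 365", "Salesforce", "Slack"}  # illustrative allow-list

def unrecognised_oauth_grants(export_path: str) -> set[str]:
    """Return third-party apps granted SSO/OAuth access that are not on the
    sanctioned list. Assumes an IdP CSV export with an 'app_name' column."""
    with open(export_path, newline="") as f:
        return {row["app_name"] for row in csv.DictReader(f)
                if row["app_name"] not in SANCTIONED_APPS}

# Anything printed here is a candidate for the shadow AI inventory.
print(unrecognised_oauth_grants("idp_app_grants.csv"))
```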

How does shadow AI affect compliance and data protection?

The unregulated use of shadow AI makes traditional data governance largely ineffective. Employees who feed sensitive data into unapproved tools create unknown exposure in data flow and usage, potentially violating retention and deletion requirements under GDPR and various industry standards. GDPR penalties can reach 4% of global annual revenue, and healthcare organisations face comparable exposure for data breaches under HIPAA.

How Proofpoint Can Help

Proofpoint gives organisations the visibility and control needed to tackle shadow AI head-on. Proofpoint solutions help you discover both sanctioned and unsanctioned AI across your environment, monitoring data flows to block potential exfiltration. The platform correlates AI activity with insider threat signals, catching risks early before they become serious incidents. Learn how Proofpoint enables the discovery, governance, and protection of unauthorised AI use while promoting safe innovation in your organisation. Reach out to Proofpoint today.

Ready to Give Proofpoint a Try?

Start with a free Proofpoint trial.