Interpublic Group

Generative AI security risks: what enterprises need to know right now

Share with your network!

Generative AI (GenAI) has moved rapidly from early exploration to mainstream adoption across a wide range of industries. From marketing teams using it for content creation to developers leveraging it for code suggestions, GenAI is reshaping the world of enterprise productivity and innovation. Yet with this transformation comes a growing set of security risks. 

Organizations are embracing GenAI tools, such as ChatGPT, Claude, Google Gemini and Microsoft 365 Copilot, with urgency, often driven by efficiency gains, competitive pressure or the potential for automation. However, in many cases, security governance, controls, and policy frameworks lag far behind. As a result, companies might be exposing themselves to risk. 

This blog explores the most significant GenAI-related threats, how they can appear in real-world situations and the practical steps organizations can take to mitigate those risks. 

Why GenAI introduces new categories of risk 

GenAI is fundamentally different from traditional software. Rather than executing rule-based logic, GenAI models generate content based on patterns learned from vast datasets. That means output can vary widely depending on prompt phrasing, context, training data and subtle biases—often in unpredictable ways. 

Because GenAI behavior depends heavily on data and context: 

  • Its outputs are not guaranteed or reproducible. Two similar prompts might yield very different results. 
  • Its responses are shaped by both user-supplied context and the model’s training data—with all their biases, gaps, or flaws. 
  • It mirrors and amplifies human behavior—good or bad—depending on how it’s used. 

This unpredictability is amplified by the speed of enterprise adoption. A 2025 McKinsey survey found that 88% of organizations now use AI in at least one business function. Yet, according to IBM research, only 24% have robust AI risk governance frameworks. That gap means most enterprises are deploying GenAI without sufficient oversight of how models behave, what data they use, or how outputs are validated. 

In environments that lack strong AI governance or usage controls, GenAI’s dependence on data and context makes it a source of risk. Defenders must adjust accordingly. 

GenAI security risks that enterprises must address 

Here are the major risk categories enterprises must understand: 

Data exposure through prompts and outputs 

Employees or contractors can accidentally share sensitive information, such as source code, customer data, financial records or internal strategy, when using public or unmanaged AI tools. Once submitted, this data might be stored, reused for training, or otherwise persist beyond its intended use. 

These risks are real: companies have seen internal data appear in public large language models (LLMs) long after submission. Often, workers have good intentions; looking for help with writing or debugging, unaware of the risks of long-term exposure. However, for organizations already facing insider risk, GenAI creates a new way for data to leak—driven by convenience, lack of awareness, or weak controls. 

Model poisoning and corrupted training data 

Attackers can manipulate training or fine-tuning datasets to influence AI outputs. By injecting malicious or misleading data, they can cause models to generate biased, incorrect, or unsafe recommendations. 

Fine-tuned or internally trained models are especially vulnerable. Poisoned models might leak sensitive internal information, such as file paths, templates, or infrastructure details, while going undetected for months. This makes them a subtle but high-impact threat. 

Automated social engineering at scale 

GenAI is transforming social engineering by automating tasks that were once time-consuming. Attackers can now quickly craft phishing, spear phishing, business email compromise (BEC) and impersonation campaigns with far greater personalization. AI-assisted campaigns can: 

  • Tailor messages to specific individuals or roles, such as finance teams, HR staff or executives 
  • Produce convincing, grammatically correct emails that bypass basic filters 
  • Operate in multiple languages and adapt to regional norms, expanding reach 

What once required significant effort can now be executed with just a few prompts. This enables adversaries to rapidly test, refine and scale attacks. Proofpoint threat intelligence confirms that attackers are increasingly using AI in this way, making social engineering faster, stealthier and more effective than ever. 

Deepfakes and synthetic identity attacks 

Beyond text, GenAI tools now make it relatively easy to produce convincing synthetic audio, video or image content. Deepfakes that mimic voices or faces pose a serious threat where impersonation is used to authorize sensitive actions. 

Imagine scenarios such as: 

  • A fraudster using a synthetic audio clip of a CEO to instruct a finance employee to authorize a wire transfer 
  • A fake video of an executive demanding urgent changes to payroll or vendor payments 

These synthetic identity attacks erode trust in digital communications and challenge traditional verification systems. For organizations that rely on voice calls, video meetings, or remote identity checks, deepfakes create a dangerous new dimension of risk. 

Hallucinations and incorrect outputs 

GenAI models are not infallible. They can produce content that is plausible and convincing yet false or misleading; so-called hallucinations. 

In many cases, these “answers” might look trustworthy. These can include well-written essays, policy drafts, or technical explanations. But if taken at face value without verification, they can lead to serious mistakes: 

  • Misconfigured infrastructure or code deployment based on incorrect advice 
  • Compliance or legal errors due to inaccurate representation of regulations or standards 
  • Misinformation shared publicly or internally, leading to reputational damage 

Because outputs from GenAI can appear polished and authoritative, there’s a danger that employees will treat them as fact, especially under pressure or time constraints. 

Prompt injection and output manipulation 

As GenAI becomes embedded in enterprise tools (ChatOps, assistants, document and report generation), it faces a new attack vector: prompt injection. 

Prompt injection hides malicious instructions in input text, documents or web content, tricking AI systems into unintended actions. For example: 

  • A malicious PDF could prompt an AI assistant to leak sensitive data. 
  • A compromised internal document might make an AI bypass policy checks or alter compliance text. 

Even more dangerous is indirect prompt injection, where attackers embed hidden commands in everyday content, such as emails, attachments, metadata, or web pages that AI systems process. Threat actors are already using this technique to hijack AI assistants. A single malicious email or document can trigger hidden instructions, leading to data leaks or unauthorized actions without any visible exploit code. 

Because AI treats all ingested text as trusted input, these attacks can bypass traditional security filters and create a powerful new attack surface for AI-enabled enterprises. 

Over-reliance on AI for critical business decisions 

GenAI offers tempting advantages, such as speed, scalability, and automation. But when enterprises rely heavily on GenAI for critical decisions such as access control, financial approvals, incident triage, or compliance reviews, dangerous dependencies can emerge. 

If AI-generated output is accepted without human review: 

  • AI errors might grant improper access.  
  • Fraudulent or malicious output might be approved automatically. This can include fake vendor invoices or false change requests. 
  • Decisions based on made-up or biased AI information might harm trust in systems or cause compliance problems. 

In essence, using AI as a shortcut rather than a tool with guardrails exposes enterprises to risk, not only from external attackers but also from flawed automation itself. 

Real-world examples of GenAI being abused or going wrong 

Here are some realistic scenarios that illustrate why GenAI risks must be taken seriously: 

Example 1: AI-generated phishing at scale 

Attackers gather publicly available information on a target: leadership names, finance roles, vendors, and press releases. Using GenAI, they create dozens of highly personalized phishing emails in multiple languages. Each message appears legitimate, referencing real projects, invoices or vendors. Automated generation allows attackers to rapidly test subject lines, wording and timing. By the time defenders detect abnormal activity, multiple accounts might already be compromised. 

Example 2: deepfake voice used for fraud 

An accounts payable employee receives a call from someone claiming to be the Chief Financial Officer (CFO). The voice is synthetic but convincing, and the request seems urgent. The employee is instructed to transfer funds to a “trusted vendor.” Believing the call is genuine, the transfer is completed. Later investigation reveals the voice was AI-generated. This technique bypasses email filters and exploits trust in human recognition. 

Example 3: sensitive data leaked via prompt sharing 

A software engineer pastes internal code containing API keys, server addresses, and user data into a public or semi-public AI assistant to speed up debugging. The AI returns improved code, but the original sensitive data is retained in logs or caches. Months later, security audits reveal exposed keys or infrastructure details. This demonstrates the risk of “shadow AI usage” when enterprise controls are lacking. 

Example 4: poisoned fine-tuned model deployed internally 

An internal AI assistant is fine-tuned using a mix of proprietary and third-party data. Some third-party data contains malicious or biased prompts. When deployed, the assistant produces harmful or inaccurate outputs: leaking internal paths, giving misleading instructions, or exposing sensitive infrastructure. Users treat these outputs as trustworthy and flawed practices can propagate across departments before detection. 

Why these risks matter for security and IT teams 

The risks from GenAI are real and far-reaching. Faulty or manipulated models can disrupt workflows, cause downtime, and raise operational costs. Sensitive data entered into AI tools can also create compliance and privacy risks, especially in regulated sectors, such as finance, healthcare, and defense. 

Financial harm is another concern. Deepfakes, AI-assisted phishing, and synthetic identities can lead to fraud, unauthorized transactions, and costly recovery efforts. Public exposure of AI-related incidents can further damage brand reputation and erode trust among customers, partners, and regulators. 

These challenges are often tied to human behavior. Employees who misuse GenAI tools—knowingly or not—can create insider risks. Unmonitored software as a service (SaaS) and AI applications might also open new paths for data loss or policy violations. 

Ultimately, GenAI widens the attack surface, accelerates potential abuse, and lowers the barrier for threat actors. Security and IT teams must respond with a human-centric, behavior-aware approach that addresses both technical and human risks. 

Best practices to mitigate GenAI security risks 

While GenAI challenges are significant, so too are the defenses, many of which enterprises can deploy today. Below are practical, actionable steps for CISOs and IT leaders to begin controlling these risks. 

Implement data controls and guardrails around prompts 

  • Develop and enforce a clear policy defining what types of data may (and may not) be inputted into GenAI tools. Off-limits categories should include source code with secrets, customer data, internal strategy documents and regulated personal data. 
  • Use Data Loss Prevention (DLP) or similar tools to monitor, block, or alert on attempts to submit sensitive data into unmanaged “shadow” AI tools. 
  • Favor enterprise-grade AI platforms that support prompt anonymization, metadata stripping, and transparency about data retention or logging practices. 

By limiting what goes into GenAI models, organizations reduce the risk of accidental or malicious data exposure. 

Monitor AI usage across email, cloud and collaboration platforms 

GenAI usage represents a new behavioral surface that must be visible to security teams. To gain control: 

  • Include data from AI tools, including prompt submissions, output records, and plugin activity, in systems that monitor user behavior. 
  • Watch for anomalous patterns: frequent use of public GenAI services, large uploads of internal documents or repeated code or forum-style submissions. 
  • Expand insider risk and cloud use monitoring to include AI-driven activity. 

Visibility is the first step towards control. 

Validate and sanitize inputs and outputs 

Treat both the inputs to AI models and the outputs from them as untrusted until they have been manually verified or properly sanitized. Specifically: 

  • Sanitize incoming files or text to remove potential embedded instructions, macros, or hidden prompts. 
  • Require human review for AI-generated outputs, especially if they influence sensitive decisions, compliance documents, or configurations. 
  • Where possible, cross-validate output through alternate sources or independent tools. 

These steps help prevent prompt injection, embedded manipulation or unintended execution of malicious content. 

Apply least privilege and access controls to AI integrations 

GenAI tools often integrate with other internal systems, such as data stores, APIs and cloud platforms. To limit the blast radius: 

  • Enforce role-based access control (RBAC) for AI-enabled integrations. Grant only the minimum privileges needed. 
  • Regularly audit permissions and review which systems AI tools can access. 
  • Avoid giving AI connectors broad access to sensitive data or high-impact systems unless absolutely necessary. 

Limiting access reduces the potential damage, even if an AI component is compromised or misused. 

Train employees on safe AI usage 

The human element is still the most important part of defense. To create an effective first line of protection: 

  • Give employees clear guidance on what they can and can’t do with GenAI tools. 
  • Include AI use in existing security and compliance training. This should cover safe handling, data classification, and verification procedures. 
  • Teach teams about the risks of AI errors, prompt manipulation, deepfake impersonation, and overreliance on AI output. 
  • Promote a culture of “verify first, trust later.” 

Well informed employees are less likely to make costly mistakes or accidentally expose sensitive information. 

Build governance around AI deployments 

Sustainable security relies on strong policies, clear processes and ongoing oversight. Enterprises should: 

  • Keep an up-to-date inventory of all GenAI tools and integrations. 
  • Set clear approval steps before deploying any new AI tools, services, or plugins. 
  • Maintain ongoing oversight through regular security reviews, red-teaming exercises, and model behavior assessments. 
  • Assign clear responsibility for data classification, prompt usage policies, output review, and incident response. 

This approach turns AI from a risky unknown into a manageable resource, allowing innovation while keeping control. 

A 30-day plan to strengthen GenAI security 

Shown below is a practical roadmap to get started with strengthening your GenAI security. 

Week 1: Inventory AI tools and usage 

  • Identify all GenAI tools (both sanctioned and unsanctioned) currently in use across the organization. 
  • Map who is using those tools, what kinds of data they handle, and how they integrate with other systems. 

Week 2: Enforce data controls and least privilege 

  • Define and publish AI usage policies. 
  • Configure DLP or equivalent controls to block or alert on risky data submissions. 
  • Assess data security posture and strengthen data governance practices before rolling out enterprise AI tools such as M365 Copilot. 
  • Review and tighten permissions for AI-enabled integrations. 

Week 3: Deploy monitoring and behavioral analytics 

  • Integrate AI usage logging into existing security telemetry and insider threat platforms. 
  • Establish baseline behavior—what “normal” AI usage looks like—and define alerts for anomalies. 

Week 4: Run red-team tests and implement governance 

  • Test for potential threats, such as prompt injection, data leaks through AI, or attacks that use fake identities. 
  • Review governance frameworks, including approval processes, model registries, and accountability assignment. 
  • Provide AI-specific training or awareness sessions to employees. 

Completing this roadmap in 30 days won’t eliminate all risk. But it will dramatically reduce the attack surface and put defenses in place before a real incident occurs. 

How Proofpoint helps safeguard GenAI environments 

GenAI can be risky, but it can also be safe with the right approach. Proofpoint uses a human-focused, behavior-aware strategy to help organizations protect their AI tools and data. 

We help by: 

  • Spotting shadow AI use to stop data leaks 
  • Strengthen data security posture and automate AI data governance 
  • Checking prompts and content for malicious or unsafe activity 
  • Watching AI integrations across email, cloud and collaboration platforms to prevent misuse 
  • Detecting AI-related threats, such as phishing, deepfakes, and fake identities 

With Proofpoint, AI becomes a secure tool, not a risk. Ready to secure your AI strategy with confidence? Download the Securing and Governing AI Data guide to learn how Proofpoint helps organizations protect data across AI-powered workflows.