Deepfakes are AI-generated synthetic media (video, audio, images) designed to convincingly depict a person saying or doing something they never did. Modern deepfakes rely on techniques such as voice cloning and synthetic video calls to mimic the way individuals communicate. As generative AI has made these tools widely available, their use to impersonate identities and exploit trust at the enterprise level has grown dramatically.
Historically, creating deepfakes required deep technical knowledge and significant computational power. That barrier to entry has collapsed: attackers can now build a near-exact replica of an executive’s voice from a short audio clip and stage a convincing video call with little to no programming experience. As a result, the number of deepfake files found online has grown from approximately 500,000 in 2023 to an estimated 8 million in 2025, according to Cybersecurity Dive. “It’s a perfect storm that leads us to really sense that 2026 will be the year of impersonation attacks,” said Aaron Painter, Nametag CEO.
For enterprises, the threat is not abstract. Deepfakes target the very verification processes organizations rely on: a voice on a phone call, a face in a video meeting, or an email that reads exactly like one sent by the CFO. In 2025, U.S. deepfake fraud losses totaled over $1.1 billion, more than triple the $360 million lost the previous year. The attack surface keeps growing because deepfakes exploit human trust, which is far harder to “patch” than a technical vulnerability.
How Deepfake Technology Works
A deepfake system is, at its core, a prediction engine: it learns from a large sample of audio, video, and images of a target individual, identifies the patterns that define how that person looks, sounds, and moves, and generates novel content that reflects those patterns with enough fidelity to deceive an observer. Generating a deepfake involves three stages:
- Training: The AI system consumes a dataset of the target, which may include public videos, audio from earnings calls, and social media content. The more data available, the more believable the final product. Executives with a large public footprint are at greater risk simply because so much material about them is available.
- Generation: After training, the system produces synthetic content from the patterns it has learned. Deepfake technology can clone a voice from as little as three seconds of audio, and a face swap can be superimposed seamlessly onto a live video call.
- Real-time synthesis: This is where the risk becomes most acute for enterprises. Modern deepfake tools do not require post-production editing; they generate content in real time, so a synthetic voice or face can be injected into a live phone call or video conference without any pre-recorded material. (A structural sketch of this pipeline follows below.)
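To make the three stages concrete, here is a deliberately toy Python sketch of the pipeline’s shape. The function bodies are placeholders (no real model is trained); only the structure reflects the description above: train on collected media, generate from learned patterns, then transform frames on the fly.

```python
from typing import Iterator

# Placeholder stages: real systems use GANs or diffusion models.
# These stubs only illustrate the flow, not actual synthesis.
def train(dataset: list[str]) -> dict:
    """Stage 1 (training): learn the target's look and sound."""
    return {"samples": len(dataset)}  # stand-in for learned weights

def generate(model: dict, prompt: str) -> str:
    """Stage 2 (generation): produce synthetic content from the model."""
    return f"synthetic output from {model['samples']} samples: {prompt!r}"

def realtime_stream(model: dict, live_frames: Iterator[str]) -> Iterator[str]:
    """Stage 3 (real-time synthesis): transform each live frame as it
    arrives, so nothing needs to be pre-recorded or edited afterward."""
    for frame in live_frames:
        yield generate(model, frame)

model = train(["earnings_call.wav", "keynote.mp4", "interview.mp3"])
for clip in realtime_stream(model, iter(["frame-001", "frame-002"])):
    print(clip)
```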
What Makes Deepfakes Difficult to Spot
For SOC teams and IT staff reviewing flagged content, and for finance or HR employees making real-time verification calls, these are the properties that make deepfakes difficult to detect without purpose-built tooling:
- Lip sync and facial micro-expressions are increasingly accurate.
- Voice clones don’t just copy pitch; they also copy cadence, tone, and speech patterns.
- Real-time delivery eliminates the window that post-production forensic analysis needs to work.
- Familiarity bias makes people fill in the blanks when they hear or see a voice or face they know.
- Standard authentication controls, such as voice verification and video ID checks, cannot inherently distinguish synthetic identities from authentic ones.
For IAM and IT teams, that last point carries the sharpest edge. Authentication systems built around “something you are” (e.g., a voice, a face) were not designed with generative AI in mind. Deepfakes do not bypass those controls through a technical exploit. They satisfy them.
Common Deepfake Attacks in Cybercrime and Fraud
Deepfakes are not limited to one type of attack campaign. They appear across multiple attack surfaces, each with different targets, techniques, and organizational consequences.
Executive Impersonation
Attackers can quickly clone an executive’s voice by harvesting audio from earnings calls, conference presentations, and media interviews. The clone can then be used to stage a fake video meeting, place a live phone call, or approve a fraudulent wire transfer in the executive’s name. In one well-known case, a finance worker sent $25 million after a video call populated by deepfaked versions of several senior colleagues; everyone on the call was fake. The attack works because it combines authority and urgency, two of the most reliable social engineering triggers.
BEC Amplification
Traditional business email compromise relies on text alone. Deepfake-enhanced BEC adds a voice call or video appearance to confirm the request, making it far more likely that finance and operations teams will proceed. For fraud leaders, this combination is especially dangerous because it defeats internal checks that were designed to catch BEC attempts: a convincing voice on a follow-up call effectively neutralizes a second-factor check.
Account Takeover Enablement
Deepfakes are increasingly used to bypass the identity verification controls that guard account access. Real-time face-swap tools can pass liveness checks during video onboarding, and voice clones can defeat phone-based authentication at help desks and call centers. For IAM teams, this is a direct challenge to authentication models built on biometric signals. In 2025, 1 in 20 identity verification failures was caused by deepfake-driven fraud.
Identity Verification and KYC Bypass
AI-generated fake IDs and composite selfies are being used to bypass KYC controls at scale. Underground services can produce realistic counterfeit IDs for as little as $15 each. In 2024, one bank found more than 1,100 deepfake attempts to circumvent its biometric loan application process in a single year. For companies that rely on document checks and facial recognition to verify new hires, the attack surface now includes the verification layer itself.
Brand and Reputation Attacks
Fake videos or audio of executives making false product announcements, market-sensitive statements, or damaging admissions can spread faster than any correction. For publicly traded companies and regulated industries, the financial and legal fallout from a convincing fake executive statement can exceed the cost of a direct fraud event. SOC teams monitoring this threat need to correlate signals across channels, since the first indicator usually surfaces on social media rather than inside the network perimeter.
Best Practices to Reduce Deepfake Risk
Minimizing deepfake risk requires governance, technical controls, and human readiness working in unison; no single layer is enough on its own.
Governance and Verification Protocols
Establish clear, written procedures for any request involving money transfers, credential changes, or access to sensitive data. For high-value financial approvals, require out-of-band verification: confirming a request through a different, pre-established channel than the one it arrived on. Escalation paths should be defined before an incident occurs, not improvised during one. (A minimal sketch of the out-of-band rule appears below.)
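The following is a minimal sketch of how such an out-of-band rule might be encoded in a payment workflow. The threshold, channel names, and contact directory here are illustrative assumptions, not a prescribed implementation.

```python
# Pre-established contact directory: verification must happen over a
# channel OTHER than the one the request arrived on. All names and the
# threshold below are illustrative assumptions.
VERIFIED_CONTACTS = {
    "cfo": {"desk_phone": "+1-555-0100", "email": "cfo@example.com"},
}
HIGH_VALUE_THRESHOLD = 50_000  # USD; set by policy in practice

def requires_out_of_band(amount: float) -> bool:
    """Policy gate: high-value approvals always need a second channel."""
    return amount >= HIGH_VALUE_THRESHOLD

def pick_verification_channel(requester: str, request_channel: str) -> str:
    """Choose any pre-established channel except the inbound one."""
    alternatives = {name: addr
                    for name, addr in VERIFIED_CONTACTS[requester].items()
                    if name != request_channel}
    if not alternatives:
        raise RuntimeError("no independent channel on file; escalate to a human")
    return next(iter(alternatives))

amount, inbound = 120_000, "email"
if requires_out_of_band(amount):
    print("confirm via:", pick_verification_channel("cfo", inbound))
```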
Technical Controls
The technical base includes phishing-resistant MFA, identity monitoring across cloud and SaaS environments, email security controls, and DLP policies. For IAM teams, requiring native device cameras and flagging sessions that originate from virtual or injected video sources provides a way to catch video-based impersonation attempts; a simple sketch follows.
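One way such a flag might be approximated is by matching the reported capture-device name against known virtual-camera products. The name patterns below are illustrative assumptions; in practice the device name would come from an endpoint agent or the conferencing platform’s API.

```python
# Names of common virtual-camera products (illustrative, not exhaustive).
VIRTUAL_CAMERA_PATTERNS = ("obs virtual", "manycam", "snap camera", "virtual cam")

def is_virtual_camera(device_name: str) -> bool:
    """True when the session's video source looks virtual or injected."""
    name = device_name.lower()
    return any(pattern in name for pattern in VIRTUAL_CAMERA_PATTERNS)

for device in ("FaceTime HD Camera", "OBS Virtual Camera 4.0"):
    verdict = "flag for review" if is_virtual_camera(device) else "ok"
    print(f"{device}: {verdict}")
```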
Human Layer
Employees need to know not only that deepfakes exist but how to respond to them in practice. Regular simulation drills, security awareness training updated to cover AI-generated impersonation, and a clearly communicated “stop-and-verify” culture give people both the knowledge and the permission to pause before acting on an unusual request. A simple but valuable habit executives can adopt today is agreeing on a personal verification phrase for live calls.
Are Deepfakes Illegal?
The short answer: it depends on how they are used. No single federal law bans deepfakes outright, but that does not put deepfake abuse beyond the reach of existing law.
The TAKE IT DOWN Act, signed into law in May 2025, became the first federal law to specifically target AI-generated synthetic media. It focuses primarily on non-consensual intimate images and requires covered platforms to remove them; it does not directly address enterprise fraud or impersonation. In those cases, existing statutes covering fraud, wire fraud, defamation, and identity theft still apply.
State law has moved more quickly. As of late 2025, 47 states have passed some form of deepfake law, addressing areas such as election interference, non-consensual content, personality rights, and commercial impersonation. The resulting patchwork creates compliance challenges for multi-state businesses, especially those operating in jurisdictions with differing notification, removal, and liability requirements.
The practical lesson for compliance and legal teams: current law allows prosecution of deepfake-related fraud, but enforcement is hard. Attributing attacks to anonymous actors, establishing jurisdiction, and proving intent all complicate cases. Those enforcement gaps underscore the importance of prevention. Companies that wait for regulators to close them are accepting a risk they could mitigate today through policy documentation, incident response planning, and vendor due diligence on AI tools.
Real-Life Cases of Enterprise-Scale Attacks Using Deepfakes
These cases may sound outrageous, but they are not fiction. They reflect a growing trend of enterprise-targeted deepfake attacks, and each example shows a different point where traditional verification processes failed.
Ferrari — CEO Voice Clone Attempt
Ferrari executives received several messages asking for help with a confidential acquisition that had to close quickly, followed by a phone call that matched the CEO’s voice and regional accent. The attack was timed to coincide with real business news to boost its credibility. The attempted CEO fraud fell apart only when an executive asked a personal verification question that only the real CEO could answer, a control that required no technology at all. The case shows that even well-resourced organizations are not immune, and that human-layer verification protocols remain among the strongest defenses.
Singapore Multinational — $499,000 Zoom Impersonation
In March 2025, a finance director at a multinational company in Singapore approved a wire transfer after a Zoom call with what appeared to be the company’s CFO and senior leadership. The attackers had proposed the video call themselves, knowing that finance teams were trained to verify unusual requests visually. That willingness to “verify” created a false sense of security, turning the very control meant to catch the fraud into its delivery mechanism. The lesson for security teams: attackers now plan for detection methods and work around them.
North Korean IT Worker Infiltration
In July 2024, KnowBe4, a cybersecurity firm specializing in security awareness training, discovered that a newly hired software engineer was a North Korean operative using a stolen U.S. identity, an AI-manipulated photo, and a deepfaked video presence across four separate interview rounds. The individual passed background checks and verified references before being onboarded. The fraud was uncovered only when endpoint detection flagged malware being loaded onto the company’s laptop within hours of delivery.
Deepfake-assisted identity fraud is also being used to place fake employees inside real companies, a less common but growing attack pattern. In 2025, 41% of companies said they had hired at least one fake candidate who used AI-generated documents and deepfake video interviews to pass the hiring process.
How to Detect Deepfakes in Enterprise Environments
The most important thing to understand about enterprise deepfake defense is that detection alone is not enough. AI detection tools can flag suspicious media, but by the time the content is reviewed, a wire transfer may have already cleared or an account may have already been created. Detection must be embedded in identity controls and verification workflows, not bolted on beside them.
With that framing, effective detection operates across four layers of signals.
Media Signals
AI-powered detection tools look for artifacts in video and audio that human reviewers miss: subtle lip-sync errors, unnatural blinking patterns, spectral anomalies in voice recordings, and compression fingerprints left by generative models. These tools work best when built into high-risk workflows, such as executive video calls, financial authorization requests, and new-hire onboarding, rather than applied after the fact. For SOC teams, real-time media analysis at communication gateways provides the earliest possible warning before the social engineering completes.
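As a toy illustration of one audio feature in this family, the sketch below computes per-frame spectral flatness with NumPy and flags frames with unusually noise-like spectra. Real detectors use trained models over many such features; the threshold and framing here are assumptions chosen only to make the example run.

```python
import numpy as np

def spectral_flatness(frame: np.ndarray) -> float:
    """Geometric mean over arithmetic mean of the power spectrum.
    Values near 1.0 mean a noise-like (flat) spectrum; natural speech
    frames usually score much lower."""
    power = np.abs(np.fft.rfft(frame)) ** 2 + 1e-12
    return float(np.exp(np.mean(np.log(power))) / np.mean(power))

def flag_flat_frames(audio: np.ndarray, sr: int = 16_000,
                     frame_ms: int = 32, threshold: float = 0.5) -> list[int]:
    """Return indices of frames whose flatness exceeds the threshold."""
    hop = sr * frame_ms // 1000
    frames = [audio[i:i + hop] for i in range(0, len(audio) - hop, hop)]
    return [i for i, f in enumerate(frames) if spectral_flatness(f) > threshold]

# Toy check: white noise is flagged, a pure 440 Hz tone is not.
rng = np.random.default_rng(0)
noise = rng.standard_normal(16_000)
tone = np.sin(2 * np.pi * 440 * np.arange(16_000) / 16_000)
print(len(flag_flat_frames(noise)), len(flag_flat_frames(tone)))
```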
Behavioral Signals
Deepfake-enhanced communications tend to follow recognizable behavioral patterns no matter how convincing the media itself looks. Unusual urgency, requests that skip normal approval chains, atypical wording, and pressure to act outside business hours are all warning signs. Fraud teams should treat them as pre-transaction indicators: any high-value authorization accompanied by anomalous behavior should be held for out-of-band verification before money moves. (A crude scoring sketch follows.)
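Here is a deliberately crude sketch of how those behavioral cues could be turned into an additive risk score. The keyword list, weights, and business-hours window are illustrative assumptions; a production system would use tuned models and organization-specific baselines.

```python
from datetime import datetime

# Illustrative cue list and weights; tune to your own environment.
URGENCY_TERMS = ("immediately", "urgent", "confidential", "before end of day")

def behavioral_risk(message: str, sent_at: datetime,
                    skips_approval_chain: bool) -> int:
    """Crude additive score over the behavioral cues described above."""
    score = 0
    text = message.lower()
    score += 2 * sum(term in text for term in URGENCY_TERMS)
    if sent_at.hour < 7 or sent_at.hour >= 19:  # outside business hours
        score += 2
    if skips_approval_chain:                    # bypasses normal approvals
        score += 3
    return score

msg = "Please wire the funds immediately and keep this confidential."
print(behavioral_risk(msg, datetime(2025, 3, 14, 22, 5), skips_approval_chain=True))
# High score: hold the transaction for out-of-band verification.
```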
Contextual Signals
One of the least used detection signals is channel mismatch. Investigate requests from unusual platforms, outside known communication patterns, or from contacts never engaged through that channel—regardless of how authentic the voice or face appears. Timing is important too. Requests that are timed to coincide with executive travel, end-of-quarter pressure, or organizational disruption follow a pattern that both security teams and executive assistants can learn to spot.
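A minimal sketch of the channel-mismatch check, assuming the organization maintains a directory of which channels each contact has actually used. The directory contents and channel names are hypothetical.

```python
# Hypothetical directory of channels each contact has used before
# (assumption: maintained from communication history).
KNOWN_CHANNELS = {
    "cfo@example.com": {"email", "teams"},
}

def channel_mismatch(sender: str, channel: str) -> bool:
    """True when a request arrives over a channel this contact has
    never used, however authentic the voice or face appears."""
    return channel not in KNOWN_CHANNELS.get(sender, set())

print(channel_mismatch("cfo@example.com", "whatsapp"))  # True: investigate
print(channel_mismatch("cfo@example.com", "email"))     # False: known channel
```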
Identity Signals
For IAM and security teams, the identity layer often carries the clearest signs of deepfake-enabled compromise. Anomalous account activity, a device fingerprint that does not match previous sessions, or an access request from an unusual location should immediately trigger verification that any accompanying video call or voice authorization is genuine.
Many companies have yet to implement “deepfake passwords”: phrases or gestures that executives agree on in advance and use on live calls to prove identity. They are a low-cost, high-value procedural control (a minimal sketch follows). For executives, the rule is simple: if a request seems urgent and the stakes are high, slow down and verify through a different channel before acting.
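One way such a phrase could be checked without storing it in plain text, assuming each executive pre-registers a phrase and only a salted hash is kept. The salt handling and iteration count here are illustrative.

```python
import hashlib
import hmac
import os

# Assumption: each executive registers a phrase in advance; only a
# salted hash is stored, so a leaked table reveals nothing useful.
def register(phrase: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", phrase.lower().encode(), salt, 100_000)

def verify(spoken: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", spoken.lower().encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)  # constant-time comparison

salt = os.urandom(16)                        # per-executive random salt
stored = register("blue heron at dawn", salt)
print(verify("blue heron at dawn", salt, stored))  # True
print(verify("wrong phrase", salt, stored))        # False
```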
Emerging Trends in Deepfake Threats
For CISOs mapping their roadmaps and fraud teams tracking attack frequency, these are the developments that will define the next phase of the threat.
- Real-time synthetic video calls: Deepfake video is no longer confined to post-production. Live face-swap tools operate on active video sessions, making real-time visual verification harder.
- Multilingual voice cloning: Voice synthesis platforms can now output in multiple languages in real time with natural intonation and emotional control. With just one cloned voice model, attackers can convincingly impersonate people in many languages and locations.
- Deepfake-as-a-service: Commercialized platforms let criminals of any skill level clone voices, generate videos, and build fake personas. Some organizations report receiving more than 1,000 AI-generated scam calls every day.
- Autonomous agent-led impersonation campaigns: AI agents can now run multi-step impersonation campaigns on their own, without human assistance. They can do things like schedule contacts, change their responses, and move between targets.
- Cross-channel attacks: Attackers combine fake voice calls, AI-written emails, and chat platform messages into coordinated campaigns. These are hard to detect because no single channel carries the entire attack. For SOC teams, cross-channel signal correlation is now a detection requirement, not an enhancement (see the sketch after this list).
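A minimal sketch of the correlation idea: group alerts by target and flag anyone contacted over several distinct channels within a short window. The alert schema, window, and threshold are assumptions for illustration.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative alert records from separate channels (assumed schema).
alerts = [
    {"target": "finance-director", "channel": "email", "time": datetime(2025, 6, 2, 9, 0)},
    {"target": "finance-director", "channel": "voice", "time": datetime(2025, 6, 2, 9, 20)},
    {"target": "finance-director", "channel": "chat",  "time": datetime(2025, 6, 2, 9, 35)},
]

def correlate(alerts, window=timedelta(hours=1), min_channels=2):
    """Flag targets contacted over several distinct channels inside the
    window: the signature of a coordinated cross-channel campaign."""
    by_target = defaultdict(list)
    for alert in sorted(alerts, key=lambda a: a["time"]):
        by_target[alert["target"]].append(alert)
    flagged = []
    for target, items in by_target.items():
        for i, first in enumerate(items):
            in_window = [a for a in items[i:] if a["time"] - first["time"] <= window]
            if len({a["channel"] for a in in_window}) >= min_channels:
                flagged.append(target)
                break
    return flagged

print(correlate(alerts))  # ['finance-director']
```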
Get Ahead of Tomorrow’s Attacks with Proofpoint
Artificial intelligence has added a new dimension to today’s threat landscape. Attackers use AI to scale their campaigns and sharpen the effectiveness and believability of their attacks; security teams use AI to detect the patterns and anomalies those same attacks produce. Fighting fire with fire, Proofpoint’s AI-integrated security platform helps organizations stay ahead of these evolving risks, turning threat intelligence into faster, smarter protection. See why Proofpoint leads in enterprise cybersecurity solutions for AI-driven threats.
Ensure your organization’s security and governance in the age of AI. Get in touch with Proofpoint.
FAQs
What is a deepfake?
A deepfake is fabricated video, audio, or image content generated by AI that makes it appear as if a person said or did something they never did. The term combines “deep learning,” the type of machine learning used to create the content, with “fake.” Modern deepfakes can clone voices in real time and composite live video, making them a potent tool for large-scale impersonation and fraud.
How are deepfakes used in cybercrime?
Attackers primarily use deepfakes to impersonate trusted individuals and trick victims into costly actions, such as approving wire transfers, sharing credentials, or granting access. Common enterprise attack patterns include CEO fraud over voice or video calls; BEC amplification, where a cloned voice confirms a fraudulent email request; and identity verification bypass during onboarding or help desk authentication. What they share is the exploitation of human trust rather than technical weaknesses.
Can deepfakes bypass biometric security?
Yes. AI-generated face images and real-time face-swap tools can defeat the liveness detection checks used in video-based identity verification, and voice clones can fool phone-based biometric authentication systems. Many authentication controls built on “something you are” were not designed with generative AI in mind, and many have not been updated to reflect the quality of synthetic media available today.
How can organizations detect deepfakes?
Effective detection combines four layers of signals: media signals from AI analysis tools; behavioral signals, such as out-of-pattern or unusually urgent requests; contextual signals, such as channel mismatch or irregular timing; and identity signals, such as anomalous account access around the time of a suspicious communication. No single layer works on its own. The organizations most resilient to deepfake attacks combine automated detection, trained human judgment, and clear verification protocols.
What’s the difference between a deepfake and synthetic media?
Synthetic media is the broader term for any content AI creates or substantially alters, including text, images, audio, and video. Deepfakes are a subset of synthetic media designed to place a real, identifiable person in a fabricated situation. All deepfakes are synthetic media, but not all synthetic media is a deepfake. The distinction matters for legal and policy purposes, as terminology increasingly shapes regulation.
Are deepfakes illegal in the United States?
No single federal law makes deepfakes illegal, but statutes covering fraud, wire fraud, defamation, and impersonation apply when deepfakes are used for illegal purposes. The TAKE IT DOWN Act, signed into law in May 2025, was the first federal law to address synthetic media, though it focuses on non-consensual intimate images rather than business fraud. State law has moved faster: 47 states now have some form of deepfake law, creating compliance requirements that vary widely across states.