Synthetic identity fraud is one of the most rapidly evolving identity-based cyber crimes. Instead of taking a real person’s entire identity, fraudsters are creating entirely new identities by combining real data with false information (e.g., legitimate social security numbers, fictitious names, AI-generated photos, fabricated credit history).
What’s changing the game is the role AI plays in this fraudulent activity. Fraudsters can use generative AI technologies to create large numbers of synthetic digital identities, automate enrollment across multiple financial platforms, and use deepfake images and videos to circumvent biometric identity verification processes. In fact, a 2026 report found that image-to-video generation has exploded by over 1,000% year over year, eroding assumptions about biometric and liveness verification.
Over $3.3 billion in synthetic identity exposure hit U.S. lenders in the first half of 2025 alone. Additionally, “the average loss to each confirmed synthetic fraud case is $15,000,” reports financial crime consultant Tom Vidovic. “Over 80% of new account fraud can be attributed to this sophisticated scheme,” he adds. For fraud teams, security leaders, and identity programs, the challenge is front and center.
Definition of Synthetic Identity Fraud
Synthetic identity fraud is a form of identity-based cyber crime that fabricates a fictitious persona by combining real and manufactured personal information. Unlike traditional identity theft, it does not target a single victim’s existing identity. It builds an entirely new one.
Synthetic identities typically combine:
- A real Social Security number (often belonging to a child, older adult, or someone with no credit history)
- A false name and date of birth
- A fictitious address
- AI-generated profile images or deepfake documents
- Fabricated or manipulated credit and employment history
What makes synthetic identity fraud unique is that it is constructed from a hybrid of true and fabricated elements. The true elements allow the fake identity to pass through enough automated checks to initially appear authentic, while the fabricated elements make it difficult to trace the identity back to any specific individual. AI has greatly enhanced fraudsters’ ability to create large numbers of synthetic identities and deploy them quickly and automatically throughout digital onboarding processes.
How Synthetic ID Fraud Works
Synthetic identity fraud is a complex process that involves making a phony identity appear credible, legitimate, and capable of passing verification processes and receiving communications. It’s a long-term scheme that can yield significant payouts for threat actors while causing substantial losses for financial institutions and individuals whose personal information was exploited. Such schemes are typically carried out using several tactics.
1. Gather Information
Fraudsters typically start by collecting authentic personal data, often focusing on Social Security Numbers. Common sources include:
- Data breaches
- Dark web marketplaces
- Social engineering tactics
- Public records
- Social media platforms
They often target SSNs belonging to children, older adults, or homeless individuals, as these are less likely to be actively monitored.
2. Create the Synthetic Identity
It’s common for fraudsters to pair a stolen SSN with fake information, such as a fake name, phone number, date of birth, or address. Often referred to as a “Frankenstein ID,” this combination of real and forged data is what defines a synthetic identity. AI tools now allow fraud rings to generate realistic profile photos, fabricate supporting documents, and create synthetic personas at scale with minimal manual effort.
3. Build Credit and Credibility
With the intention of eventual theft, fraudsters then use the synthetic ID to apply for lines of credit. Initial applications are often rejected due to a lack of credit history; however, each application creates a credit file for the synthetic identity.
Patient fraudsters continue applying for credit until they are successful, often with high-risk lenders. When invested long-term in their synthetic identity, they use this credit responsibly, making timely payments to build a positive credit history and overall credibility. Automated bot activity accelerates this stage by allowing fraudsters to submit high volumes of applications across multiple platforms simultaneously.
4. Cultivate the Identity
Over months or even years, fraudsters nurture and grow the synthetic identity, gradually gaining access to higher credit limits and more valuable financial products. They may use techniques like credit piggybacking, where the synthetic identity is added as an authorized user on an account with good credit to boost its creditworthiness.
To make the identity seem more authentic, fraudsters may also create fully developed synthetic online presences: AI-generated avatars, fabricated social media histories, and automated engagement activity designed to pass both human review and algorithmic verification checks. Some fraud rings deploy these synthetic personas across digital onboarding workflows simultaneously, using automation to test and refine which profiles successfully clear identity verification.
5. The “Bust-Out”
Once the synthetic identity has established legitimate credit and can effectively borrow funds from financial institutions, the fraudster makes one final push to max out all available credit before vanishing and neglecting any repayment. This is often referred to as “busting out,” leaving creditors with significant losses.
6. Repeat the Process
Skilled fraudsters often create multiple synthetic identities simultaneously, allowing them to scale their operations and increase their illicit gains. AI-assisted automation has made this scaling significantly faster and more accessible, lowering the technical barrier to running large-scale synthetic identity operations.
Synthetic Identity Fraud vs. Identity Theft
These two terms get used interchangeably, but they describe meaningfully different threats. Here’s how they break down.
| | Synthetic Identity Fraud | Traditional Identity Theft |
| --- | --- | --- |
| What it is | A fabricated persona built from real and fake data | A real person’s identity stolen and misused |
| Is there a direct victim? | Often, there’s no single identifiable victim | Yes, a specific individual whose identity was taken |
| Primary target | Financial institutions, lenders, digital platforms | Individual people and their accounts |
| How it is detected | Hard to detect; no victim to file a report | Victim may notice and report unusual activity |
| Role of AI | Increasingly used to generate identities at scale | Less common, though AI aids phishing and credential theft |
| Common outcomes | Credit fraud, mule accounts, phishing infrastructure | Drained accounts, damaged credit, and identity impersonation |
| Detection difficulty | High | Moderate |
The core distinction is construction versus theft. Identity theft takes something real and exploits it. Synthetic identity fraud builds something new and weaponizes it. Both cause serious harm, but they require different detection methods, different response workflows, and different long-term prevention strategies.
Methods of Synthetic Identity Fraud
Cyber criminals use a range of techniques to craft and carry out synthetic ID theft. Here are the primary methods used.
- Identity compilation: Otherwise known as “Frankenstein Fraud,” this common method combines a stolen SSN with fake personal information so that the new identity doesn’t correspond to any real person. AI tools now allow fraudsters to generate supporting details, including realistic profile photos and fabricated documents, in seconds.
- AI-generated identity artifacts: With advancements in generative AI models (which can produce realistic fake documents), fraudsters can generate believable fake driver’s licenses, passport images, and online persona photos that automated document verification systems accept as valid.
- Deepfake identity verification bypass: Attackers use AI-generated video and voice cloning to defeat facial recognition, liveness detection, and remote identity verification systems. In injection attacks, fraudsters hijack a device’s camera stream and replace the live feed with synthetic deepfake footage, allowing a fabricated identity to pass biometric checks that were designed to stop them.
- Automated fraud creation: Specialized AI bots automate the entire identity creation process, from generating fake images to submitting applications and passing KYC checks at scale. Fraud rings can now produce and deploy thousands of synthetic identities simultaneously, iterating on rejected profiles and adjusting attributes to improve success rates.
- Identity manipulation: This tactic seeks to alter the details of a real person’s identity to create a new one. Slight changes to names, addresses, or birthdates can be enough to mask previous credit history or start a “fresh” identity.
- Piggybacking: Similar to tailgating to access a secure physical area, piggybacking involves adding a synthetic identity as an authorized user on a legitimate account, effectively boosting its credit score.
- Credit Profile Number (CPN) scams: These schemes use a fake nine-digit number in place of a real SSN, which is often marketed as a legal way to create a new credit identity but is illegal.
- Social engineering: By using phishing, pretexting, or other socially manipulative tactics, threat actors obtain real personal information to incorporate into synthetic identities.
- Exploiting credit repair loopholes: With an established synthetic ID, threat actors may file false identity theft claims to remove negative items from credit reports, thereby artificially improving the creditworthiness of the synthetic identity.
- Data breach exploitation: Another technique involves utilizing personal information obtained from large-scale data breaches. Fraudsters can easily combine fragments of authentic information from multiple individuals and create convincing synthetic identities at scale.
These methods are often used in combination, allowing fraudsters to produce hard-to-detect synthetic identities. The variety and complexity of these techniques compound the escalating challenge of combating synthetic identity fraud in the financial sector.
Synthetic Identity Fraud in the Age of AI
Generative AI has dramatically changed the economics of synthetic identity fraud. Until recently, generating a realistic synthetic identity required significant resources and expertise. Today’s AI tools generate realistic profile photos, fake identification documents, and full digital histories in minutes. For fraud teams, the result is criminals who can execute synthetic identity fraud at a scale and speed that traditional manual fraud prevention cannot match.
Fraudulent activity using deepfakes also now poses a threat to identity verification. Attackers use AI-generated videos and audio to circumvent facial recognition, liveness detection, and remote onboarding processes (specifically designed to prevent synthetic identity fraud).
Perhaps the most impactful fraudulent development to date is the creation of AI fraud agents: fully autonomous systems that use generative AI, automation, and reinforcement learning to generate synthetic identities, interact with verification systems in real time, and learn and improve upon previous attempts. Security professionals recognize this as a transition from opportunistic fraud to a scalable, continuous cyber-crime operation using AI.
The Dangers of Synthetic Identity Fraud
Synthetic identity fraud has consequences that extend far beyond financial loss. Because synthetic personas are becoming increasingly sophisticated and easy to create in large quantities, they’re posing growing challenges for fraud prevention, cybersecurity, and digital trust.
Financial Losses for Businesses
Synthetic identity fraud hits banks, lenders, and digital platforms hardest, but telecommunications providers, retail credit programs, and fintechs face comparable exposure. As AI makes synthetic identities faster and cheaper to generate at scale, financial losses will accelerate across every sector.
Erosion of Digital Trust
The moment a synthetic identity passes identity verification, the trust-based systems organizations rely on begin to fail. For fraud teams, this is a financial risk; it is also a structural problem for the integrity of digital onboarding and identity systems.
Facilitation of Broader Cyber Crime
Synthetic identities are being created as part of larger criminal schemes. Fraudsters are using synthetic accounts to send phishing messages, launder money, set up mule accounts, and create complex fraud networks that are difficult to trace back to legitimate individuals. For security professionals, a synthetic identity is frequently not the final objective. It’s a tool for something bigger.
Long-term Harm to Real Individuals
Children and young adults are common victims of SSN harvesting, since their credit files are inactive and unlikely to be regularly monitored. Additionally, victims may not realize they have suffered the consequences of an attack until years after the fact, when their credit history has already been damaged.
Scaling Through Automation
AI and automation enable organized crime syndicates to generate and distribute synthetic identities across numerous platforms. What was previously accomplished through manual labor and time-consuming processes can now be completed at scale, significantly reducing the time between the creation of a synthetic identity and the onset of financial exploitation.
Challenges for Fraud Detection
Traditional fraud detection methodologies struggle to detect synthetic identities because there’s no single legitimate victim to trigger a complaint. In addition, the lack of a direct victim makes it much more challenging to both detect and quantify synthetic identity theft. Detection and quantification through behavioral and identity signal-based analysis are becoming critical to closing this detection gap.
Increased Operational Costs
Organizations that have invested in the most sophisticated identity verification and fraud detection systems incur high operational costs to maintain them. Consumers typically bear these costs as increased fees or tighter access to credit.
Synthetic Identities and Account Takeover
Synthetic identities are seldom the final stage of cyber crime. They often serve as infrastructure for more extensive campaigns. Fraudsters create synthetic accounts to gain credibility before conducting phishing schemes; they create mule accounts to receive and move funds stolen from victims; they establish apparently legitimate user networks that obscure the trail of financial laundering activity.
The connection between synthetic identity crime and account takeover fraud is direct. Because synthetic accounts are purpose-built to pass verification, fraudsters exploit them to test stolen credentials, absorb false transactions, and receive funds from compromised accounts—while evading the same fraud checks designed to catch them.
When a synthetic identity surfaces in one business unit, it rarely operates alone—coordinated attacks across multiple areas are the norm. Siloing synthetic identity crime from account takeover leaves both threats underestimated and underdefended.
How to Detect Synthetic ID Fraud
Detecting synthetic identity fraud has become significantly harder as AI-generated personas grow more convincing. Traditional verification checks that rely on document review or credit history alone no longer cut it. Modern detection requires layering behavioral signals, identity analytics, and machine learning across the full account lifecycle.
Identity Graph Analysis
Identity graph tools create a relationship map among data elements (e.g., devices, email addresses, phone numbers, IP addresses, account activity). When fraudsters create synthetic identities, they typically use common “infrastructure” across many fraudulent activities, and the graph analysis will reveal these relationships, which would be difficult to discover through individual account reviews.
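The core idea can be sketched in a few lines of Python: treat every shared attribute (phone, device, IP) as a link between accounts and merge linked accounts into clusters with a union-find. This is a minimal sketch, not a production identity graph, and the account IDs and attribute values below are hypothetical.

```python
from collections import defaultdict

def cluster_accounts(accounts):
    """Group accounts that share any identity attribute (phone, device, IP).

    Accounts sharing "infrastructure" land in the same cluster; unusually
    large clusters are a classic synthetic-identity-ring signal.
    """
    # Map each attribute value to the accounts that use it.
    attr_index = defaultdict(set)
    for acct_id, attrs in accounts.items():
        for value in attrs:
            attr_index[value].add(acct_id)

    # Union-find over accounts linked by a shared attribute.
    parent = {a: a for a in accounts}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for linked in attr_index.values():
        linked = list(linked)
        for other in linked[1:]:
            union(linked[0], other)

    clusters = defaultdict(set)
    for a in accounts:
        clusters[find(a)].add(a)
    return list(clusters.values())

# Hypothetical data: three "different" applicants reusing one phone/device.
accounts = {
    "acct1": {"phone:555-0100", "dev:A"},
    "acct2": {"phone:555-0100", "dev:B"},
    "acct3": {"phone:555-0199", "dev:B"},
    "acct4": {"phone:555-0042", "dev:C"},
}
rings = cluster_accounts(accounts)
suspicious = max(rings, key=len)  # acct1-acct3 linked via shared phone/device
```

Reviewing the largest cluster first mirrors how graph analysis surfaces relationships that individual account reviews would miss.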
AI-driven Fraud Detection
Machine learning algorithms detect suspicious identity creation by comparing the characteristics of new accounts against fraud signature data, behavioral baseline data, and cross-platform signals. AI-driven detection differs from traditional rule-based systems because it can adapt to emerging fraud patterns without updating individual rules.
Behavioral Anomaly Detection
Fraudulent accounts tend to act differently from legitimate accounts. Any number of abnormal behaviors can be indicators of fraudulent activity (e.g., rapidly building credit, placing large purchases after an extended period of dormancy, atypical login and session behaviors, etc.).
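As a minimal illustration of this idea, the sketch below scores a transaction by how many standard deviations it sits from the account’s own history. The threshold and data are hypothetical assumptions; production systems model many more signals (session timing, device, velocity) against richer baselines.

```python
from statistics import mean, stdev

def anomaly_score(history, latest):
    """Z-score of the latest transaction amount against the account's history.

    A simple stand-in for a behavioral baseline: score how far the new
    activity deviates from the account's own established norm.
    """
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0
    return abs(latest - mu) / sigma

# Hypothetical account: months of small purchases, then a sudden max-out.
history = [22.0, 18.5, 25.0, 30.0, 19.0, 27.5]
score = anomaly_score(history, 4_800.0)
flagged = score > 3.0  # flag anything beyond 3 standard deviations
```

A dormant account that suddenly places a large purchase produces a huge z-score, while ordinary activity stays well under the threshold.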
Biometric and Liveness Verification
As the sophistication of deepfake bypass attacks increases, biometric and liveness detection have become critical components of digital identity verification workflows during onboarding.
Credit File and Data Consistency Checks
Thin credit histories, cross-database data inconsistencies, and age-mismatched SSNs remain strong synthetic identity signals—especially when layered with other detection methods.
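One such consistency check can be sketched simply: flag any SSN that appears across applications paired with conflicting names or birth dates, the classic “Frankenstein ID” footprint. The field names and records below are illustrative assumptions, not a real schema.

```python
from collections import defaultdict

def ssn_consistency_flags(applications):
    """Flag SSNs that appear with conflicting names or birth dates.

    One real SSN paired with several fabricated identities across
    applications is a strong synthetic-identity signal.
    """
    seen = defaultdict(set)
    for app in applications:
        seen[app["ssn"]].add((app["name"], app["dob"]))
    return {ssn for ssn, identities in seen.items() if len(identities) > 1}

# Hypothetical application records (SSNs masked for illustration).
apps = [
    {"ssn": "XXX-XX-1234", "name": "Ana Reyes", "dob": "1990-03-02"},
    {"ssn": "XXX-XX-1234", "name": "A. Moreno", "dob": "1987-11-19"},  # conflict
    {"ssn": "XXX-XX-9876", "name": "Lee Chang", "dob": "1975-06-30"},
]
flags = ssn_consistency_flags(apps)
```

On its own this check is weak; layered with thin-file and age-mismatch signals, as the text describes, it becomes far more useful.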
Cross-Platform Identity Correlation
Attackers typically use synthetic identities across multiple platforms simultaneously. The problem is that these types of coordinated fraud rings look like isolated events within a single organization. But by sharing fraud signals across institutions, platforms, and industry networks, you can identify and stop them.
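One common way to share such signals without exchanging raw PII is to compare salted hashes of identity attributes. The sketch below is a simplification under that assumption; real consortium deployments use hardened key management or private-set-intersection protocols rather than a plain shared salt.

```python
import hashlib

def share_token(attribute, salt):
    """Salted SHA-256 token for an identity attribute.

    Institutions compare tokens to spot the same phone, device, or SSN
    appearing across platforms without revealing the underlying value.
    """
    return hashlib.sha256((salt + attribute).encode()).hexdigest()

SALT = "consortium-shared-secret"  # illustrative; not how keys are managed

# Two hypothetical institutions tokenize the attributes they observed.
bank_a = {share_token(v, SALT) for v in ["phone:555-0100", "dev:A"]}
bank_b = {share_token(v, SALT) for v in ["phone:555-0100", "dev:Z"]}
overlap = bank_a & bank_b  # both saw the same phone number
```

A non-empty intersection tells both institutions the same identity infrastructure is in play, which no single-organization view would reveal.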
How to Protect Against Synthetic Identity Fraud
Defending against synthetic identity fraud requires more than periodic credit checks and document review. As fraud methods grow more automated and AI-assisted, protection strategies need to match that sophistication across the full identity lifecycle.
- Advanced identity verification: Robust digital onboarding should combine document verification, biometric authentication, and liveness detection to authenticate the person behind an identity. Behavioral biometrics, which analyze how a user types, moves, and navigates, add a continuous layer of verification that’s difficult for automated fraud tools to replicate convincingly.
- AI-driven fraud detection: Machine learning models detect suspicious identity creation patterns by analyzing behavioral baselines, cross-platform signals, and known fraud signatures in real time. Unlike static rule sets, AI-driven systems adapt to evolving fraud patterns and improve detection accuracy over time without requiring constant manual updates.
- Identity graph monitoring: Continuous monitoring of how identities connect across devices, accounts, and activity patterns helps surface coordinated fraud operations early. A single synthetic identity may look legitimate in isolation. Identity graph tools reveal the infrastructure shared across dozens of fraud attempts.
- Cross-institution fraud intelligence sharing: Participating in shared fraud intelligence networks and industry consortia gives organizations visibility into patterns that no single institution would detect independently.
- Social engineering awareness: Employees and customers remain a meaningful entry point for the personal data that feeds synthetic identity construction. Training people to recognize phishing, pretexting, and credential harvesting attempts reduces the supply of real data available to fraudsters.
- Credit and identity monitoring: Regular review of credit reports and identity signals remains a practical baseline, particularly for detecting SSN misuse affecting children and young adults whose files are rarely monitored. Unexpected credit activity or unfamiliar accounts warrant immediate investigation.
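To show how these layers might combine in practice, here is a minimal sketch of a weighted onboarding risk score. The signal names, weights, and threshold are illustrative assumptions, not a recommended configuration; real systems tune these against labeled fraud outcomes.

```python
def onboarding_risk(signals, weights=None):
    """Combine boolean fraud signals into a single onboarding risk score.

    Each triggered signal contributes its weight; the caller decides
    what score warrants manual review.
    """
    weights = weights or {
        "thin_credit_file": 0.20,   # little or no credit history
        "ssn_age_mismatch": 0.30,   # SSN inconsistent with stated DOB
        "shared_device": 0.25,      # device seen on other applications
        "liveness_failed": 0.25,    # biometric liveness check failed
    }
    return sum(w for name, w in weights.items() if signals.get(name))

# Hypothetical applicant tripping two of the four signals.
risk = onboarding_risk({"thin_credit_file": True, "shared_device": True})
decision = "manual_review" if risk >= 0.4 else "approve"
```

No single signal here is damning; it is the accumulation across layers that routes an application to review, which is the point of layering defenses.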
Emerging Trends in Identity Fraud
Identity fraud is not static. Emerging tactics will likely shape the threat landscape over the next several years.
- AI-generated identity artifacts: Generative AI tools can quickly create high-quality, legitimate-looking fraudulent IDs, passports, profile pictures, and other documentation. In 2025, AI-generated fakes accounted for about 2% of all document fraud worldwide; this share was essentially zero one year earlier.
- Deepfake identity verification bypass: Attackers use AI-generated video and voice to defeat facial recognition and liveness detection during digital onboarding. Twenty percent of biometric fraud attempts now involve a deepfake component.
- Biometric spoofing: Face-swap technology and animated selfie manipulation allow fraudsters to inject synthetic media directly into verification camera streams, bypassing checks designed to require a live human presence.
- Automated fraud rings: Large-scale organized fraud rings employ AI agents to develop, test, and evolve synthetic identities, iteratively refining profiles that fail verification until they pass.
- Synthetic digital personas: Synthetic fraud rings are creating fully developed online identities (with social media history, AI-generated profile pictures, and automated interactions) that are increasingly indistinguishable from those of actual people.
Synthetic Identity Fraud and Cybersecurity
Synthetic identity fraud has evolved from being purely a financial crime to a major cyber crime issue. Its scope now includes the onboarding process for customers/clients, identity verification processes used by organizations, and the digital trust models upon which many organizations rely. In the first half of 2025 alone, 8.3% of all digital account creation requests were identified as potentially fraudulent. This figure reflects how deeply synthetic fraud has penetrated the onboarding layer.
Security teams face a significant challenge with synthetic identity fraud. These fake identities are designed to pass static identity verification and point-in-time credential checks that most organizations use. In turn, continuous identity monitoring, behavioral analysis, and adaptive authentication are required to identify and stop evolving fraud that occurs post-onboarding.
However, calibration carries its own risks. Overly aggressive controls generate false positives that create friction for legitimate users, while overly lax controls leave blind spots for coordinated synthetic attacks.
Identity security infrastructure is the first line of defense against synthetic identity fraud, and security leaders must ensure it evolves as fast as attackers do.
FAQs
What is synthetic identity fraud?
Synthetic identity fraud is a type of identity-based cyber crime in which a threat actor creates a completely fictitious persona using both authentic and manufactured personal data. As opposed to traditional identity theft, where one person’s authentic identity is stolen, there’s often no single direct victim. A synthetic identity is created using multiple forms of data so that the thief can establish new lines of credit, develop a legitimate-appearing financial history, and later exploit the accumulated credit.
How does synthetic identity fraud work?
Fraudsters create a synthetic identity by matching a real SSN to false personal identification. Once the synthetic identity is established, they’ll use that identity to apply for credit and, over time, build a legitimate-looking financial history. Eventually, the accumulated credit in the synthetic identity is exploited through a planned “bust-out,” in which all lines of credit are used simultaneously. AI has streamlined each step of the process—from creating the identity to automatically applying for and verifying a line of credit.
What is the difference between synthetic identity fraud and identity theft?
Identity theft occurs when a criminal steals a person’s existing identity and uses it without the individual knowing. Synthetic identity fraud creates an entirely new identity, using real and manufactured data. The most significant difference is that because synthetic frauds are created to appear legitimate and have no direct individual victim, they are also far easier to conceal from detection and investigation.
Why is synthetic identity fraud difficult to detect?
Because synthetic identities are built to look legitimate and leave no individual victim, they evade detection far more easily than traditional fraud. There is no victim reporting a crime and no obvious anomaly in early account activity. Attackers play the long game to establish the appearance of a valid credit history. This makes it even more difficult to identify the fraud until the final stage.
How do criminals create synthetic identities?
Threat actors acquire a real SSN from a data breach or purchase one on the dark web, often targeting children or people whose credit files are inactive. The SSN is paired with a false name, address, and date of birth. AI tools then generate supporting documentation, such as driver’s licenses and photo IDs, and build out an online presence for the synthetic identity. Many of these tasks can now be performed by automated bots at scale: bots submit multiple credit applications and iterate on failed attempts until the synthetic identity is accepted.
Defuse Threats with the Help of Proofpoint
When attackers compromise credentials and privileged accounts, they can move laterally, escalate access, and operate as if they had legitimate authentication. Pinpointing these threats isn’t easy; you need to see how accounts are being used: not just who has access, but how their actions compare to expected patterns. Cases involving credential theft, privilege escalation, account takeover, and lateral movement demand stronger security measures and oversight. With Proofpoint, security teams have greater clarity into suspicious activity and attack paths, and can stop threats before they spread. Proofpoint helps organizations improve their ability to stop identity-based attacks, investigate theft cases, and respond appropriately when trust relationships are abused.
Prevent and de-escalate identity-based threats with the help of Proofpoint. Get in touch for more.