TL;DR — Key Findings
1. Synthetic identity fraud increased 311% between Q1 2024 and Q1 2025, the fastest growth rate ever recorded in our network.
2. Of the 4,000,000 synthetic IDs we analysed, 23% used a real government ID number combined with fabricated personal details.
3. Deepfake-based fraud now accounts for 6.5% of all fraud attacks globally, a 2,137% increase from 2022.
4. 1 in 7 deepfake attempts successfully bypassed a single-layer liveness check. Multi-layer detection reduced bypass success to under 0.3%.
5. A complete synthetic identity with an AI-generated photo is available on the dark web for as little as $15.
6. Injection attacks (bypassing the physical camera entirely) surged 9× year-over-year and are the fastest-growing attack vector.
7. Human reviewers correctly identified deepfakes only 24.5% of the time, worse than random chance on high-quality fakes.
8. The cryptocurrency sector saw a 654% increase in deepfake-related incidents from 2023 to 2024.
Section 01
Data Methodology
This report reflects data from deepidv's identity verification network between January 2024 and February 2026, spanning fintech, e-commerce, banking, gaming, real estate, and HR verticals.
deepidv Verification Network
All live identity verification events processed across deepidv customer integrations.
Flagged Fraud Queue
Every auto-flagged and customer-escalated submission in our pipeline — 4,000,000+ synthetic identity events total.
Deepfake Attack Log
Over 10,000 confirmed deepfake attempts captured by liveness, injection, and biometric matching layers.
Key Definitions
| Term | Definition |
|---|---|
| Synthetic Identity | An identity constructed by combining real personal data (e.g. a real SSN) with fabricated biographical information — name, date of birth, address, photo. |
| Deepfake Attack | Any use of AI-generated or manipulated audio, image, or video to impersonate a person during an identity verification check. |
| Injection Attack | Synthetic biometric data fed directly into a verification pipeline at software level, bypassing the physical camera entirely. |
| Presentation Attack | A physical artefact (photo, mask, screen replay) held to a real camera to deceive a liveness check. |
| Bypass Rate | The percentage of fraudulent submissions that passed a verification check undetected. |
| DaaS | Deepfake-as-a-Service — commercial dark-web tooling that sells packaged deepfake generation for fraud. |
Section 02
The Scale of Synthetic Identity Fraud
Synthetic identity fraud has become the fastest-growing financial crime in the world. Between Q1 2024 and Q1 2025, our network recorded a 311% increase in synthetic identity document fraud: a more than fourfold rise in a single year.
311%
Increase in synthetic document fraud Q1 2024–Q1 2025
$35B
Annual losses from synthetic ID fraud globally
21%
Of first-party frauds now involve a synthetic identity
Year-over-Year Growth Across Attack Types
“We've stopped thinking about synthetic identity fraud as a niche financial crime. It is now as common as credit card fraud was fifteen years ago — and it is growing faster.”
Shawn-Marc Melo
Founder & CEO, deepidv
Section 03
What We Found in 4,000,000 Synthetic IDs
Our dataset of 4,000,000 confirmed or suspected synthetic identity submissions revealed several patterns that surprised even our own fraud research team.
How These Identities Were Constructed
4M+
IDs
Key finding: In 2024, the cost to generate a fully synthetic identity — including an AI-generated government ID, convincing face photo, and fabricated credit history — dropped to under $15 on dark web marketplaces. In 2022, the same package cost $400–$800 and required specialist skills.
The Documents Fraudsters Fake Most
| Document Type | % of Submissions | Primary Forgery Method | Detection Difficulty |
|---|---|---|---|
| National / Government ID | 46% | AI-generated template overlay | High |
| Passport | 21% | Stolen base + modified personal data | Medium-High |
| Driver's Licence | 18% | Physical print + digital scan | Medium |
| Utility Bill / Proof of Address | 9% | AI-generated (GPT + design tool) | Low-Medium |
| Bank Statement | 4% | AI-generated with realistic formatting | High |
| Pay Stub / Tax Record | 2% | Template-based AI generation | High |
Signals That Triggered a Fraud Flag
% of flagged cases where signal was present
“The shift from physical forgeries to AI-generated documents is the single most important change we have observed in fraud in the past five years. You can no longer trust that something looks real.”
Didi Ogbowu
Senior Researcher & Backend Engineer, deepidv
Section 04
Deepfake Attack Analysis
In 2022, deepfakes accounted for a fraction of a percent of fraud attempts. By Q1 2025, that number had risen to 6.5% of all fraud attacks globally — a 2,137% increase. There were 19% more deepfake incidents in Q1 2025 alone than in all of 2024.
6.5%
Of all fraud attacks now involve deepfakes
2,137%
Growth in deepfake fraud since 2022
40%
Of biometric fraud uses deepfakes
$3B+
US deepfake losses Jan–Sep 2025
The Deepfake Fraud Timeline
2022: Highly technical, rare, expensive. Mostly crypto-specific.
2023: GAN tools become widely available. Crypto sector accounts for 88% of deepfake fraud cases.
2024: DaaS emerges. $360M in US losses. Face swap attacks surge 704%.
2025: More incidents in one quarter than all of 2024. Injection attacks surge 9×.
2026: Gartner predicts one-third of enterprises will consider IDV unreliable due to AI deepfakes.
Section 05
What We Found in 10,000+ Deepfakes
Our deepfake dataset of over 10,000 confirmed attempts is one of the largest independently compiled datasets in the identity verification industry. What we found challenged several of our own assumptions.
Attack Type Distribution & Bypass Rates
| Attack Type | % of Dataset | Single-Layer Bypass | Multi-Layer Bypass | Sophistication |
|---|---|---|---|---|
| Face Swap (video) | 34% | 22% | 0.4% | Medium |
| GAN-Generated Synthetic Face | 28% | 31% | 0.2% | High |
| Injection Attack (virtual camera) | 16% | 58% | 0.3% | Very High |
| Video Replay (recorded real user) | 14% | 11% | 0.1% | Low |
| Audio + Video Combined | 3% | 41% | 0.5% | Very High |
| 3D Mask / Print Attack | 5% | 8% | <0.1% | Low |
Most alarming finding: Injection attacks bypass single-layer liveness checks 58% of the time. A standard "blink and turn" liveness test offers virtually no protection against a well-executed injection attack. Multi-layer detection reduces this to 0.3%.
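The arithmetic behind that reduction can be sketched simply: if detection layers fail independently, the combined bypass rate is the product of the per-layer rates. The per-layer figures below are hypothetical, chosen only to illustrate the compounding effect; real layers are partially correlated, which weakens it.

```python
def combined_bypass_rate(layer_bypass_rates):
    """Probability a fraudulent submission slips past every layer,
    assuming independent layers: the product of per-layer rates."""
    rate = 1.0
    for r in layer_bypass_rates:
        rate *= r
    return rate

# Hypothetical per-layer bypass rates for an injection attack:
# liveness check, injection detection, device-integrity check.
layers = [0.58, 0.10, 0.05]
print(f"{combined_bypass_rate(layers):.4f}")  # 0.0029
```

Under these illustrative numbers, a 58% single-layer bypass collapses to roughly 0.3% across three layers, which is why the defence posture matters more than any individual check.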
Deepfake Quality Is Skyrocketing
In 2023, our reviewers could identify most deepfakes by eye. By 2025, that was no longer true for the top tier. We stratified all 10,000+ deepfakes by quality:
Over half our dataset — 59% of deepfakes — fall into Tier 3 or 4. In a 2025 iProov study, only 0.1% of participants correctly identified all fake and real media presented to them. The average human detection rate for high-quality deepfakes: just 24.5%.
“A Tier 4 deepfake passes every human review check with a high confidence score. We only catch them because our models look for things a human eye cannot see: lighting physics inconsistencies, frequency-domain anomalies, and sub-pixel compression artefacts.”
Omar Tahir
CTO, deepidv
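To make the "frequency-domain anomalies" idea concrete, here is a minimal, illustrative sketch of one signal family described in the deepfake-detection literature: the share of an image's spectral power outside a low-frequency core, which GAN upsampling artefacts tend to distort. This is not deepidv's production model; the core radius and any decision threshold are assumptions for illustration.

```python
import numpy as np

def high_frequency_energy_ratio(gray_image: np.ndarray) -> float:
    """Fraction of spectral power outside a low-frequency core.
    GAN upsampling often leaves periodic high-frequency artefacts,
    so an unusual ratio relative to genuine captures can flag a
    synthetic image."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_image))) ** 2
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    r = min(h, w) // 8                      # low-frequency core radius (assumed)
    yy, xx = np.ogrid[:h, :w]
    core = (yy - cy) ** 2 + (xx - cx) ** 2 <= r * r
    return float(spectrum[~core].sum() / spectrum.sum())

# Hypothetical decision rule: compare against a baseline learned
# from genuine captures, e.g.
# flagged = high_frequency_energy_ratio(img) > baseline + 3 * sigma
```

In practice a single spectral statistic is far too crude on its own; production models learn many such features jointly, alongside the lighting and compression signals mentioned in the quote.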
The Tools Fraudsters Are Using
| Tool / Method | % of Cases | Bypass Target | Who Can Buy It |
|---|---|---|---|
| Commercial DaaS platforms (dark web) | 44% | Liveness + ID photo match | Any buyer ($10–$50) |
| Open-source GAN models (DeepFaceLab, etc.) | 31% | Liveness check | Technical users (free) |
| Real-time face swap tools (virtual camera) | 18% | Video call + live KYC | Non-technical ($5–$20/mo) |
| Custom bespoke models (advanced actors) | 7% | Multi-layer enterprise KYC | Organised crime groups |
Section 06
Attack Vector Taxonomy
Attacks fall into two primary families — each requiring a different defence posture.
Presentation Attack
Risk: Medium
A physical or digital artefact held to the real camera. Standard PAD (Presentation Attack Detection) is reasonably effective against these.
- Photo on phone screen
- Printed photo
- 3D silicone mask
- Video loop replay
Injection Attack
Risk: Very High
Synthetic biometric data injected directly into the software pipeline. The camera is bypassed entirely; PAD cannot detect it.
- Virtual camera driver
- iOS RTMP server exploit
- Emulator injection
- API-level data injection
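As one deliberately simple example of an injection-detection signal, a verifier can compare the reported capture-device name against known virtual-camera drivers. The driver names below are illustrative, and this check alone is trivially evaded by renaming a driver, which is why real defences pair it with driver-signature, device-integrity, and feed-timing checks.

```python
# Illustrative denylist of virtual-camera driver names (assumed,
# not an exhaustive or authoritative list).
KNOWN_VIRTUAL_CAMERAS = {
    "obs virtual camera",
    "manycam virtual webcam",
    "snap camera",
    "xsplit vcam",
}

def looks_like_virtual_camera(device_name: str) -> bool:
    """Flag capture devices whose reported name matches a known
    virtual-camera driver."""
    name = device_name.strip().lower()
    return any(vc in name for vc in KNOWN_VIRTUAL_CAMERAS)

print(looks_like_virtual_camera("OBS Virtual Camera"))   # True
print(looks_like_virtual_camera("FaceTime HD Camera"))   # False
```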
| Attack Vector | Camera? | PAD Detects? | deepidv Defence |
|---|---|---|---|
| Photo / screen replay | Yes | Yes | Liveness + depth analysis |
| 3D mask | Yes | Usually | Active liveness + texture analysis |
| Face swap video | Yes | Sometimes | Deepfake detection model |
| GAN synthetic face | Yes | Rarely | Frequency-domain forensics |
| Virtual camera injection | No | No | Injection detection layer |
| iOS RTMP injection | No | No | Device integrity + injection detection |
| Emulator attack | No | No | Device fingerprinting + session analysis |
| API-level synthetic payload | No | No | Payload validation + anomaly detection |
Section 07
The Deepfake-as-a-Service Economy
Perhaps the most unsettling finding in our research is not the sophistication of the attacks — it is how accessible they have become. Deepfake fraud has industrialised.
$5
Deepfake selfie generation
One-time AI face photo, custom subject, sufficient for most static ID photo checks.
$15
Ready-to-use synthetic identity
Complete fabricated identity: name, DOB, address, fake financials, and AI photo.
$10–50
Full deepfake identity package
AI face, video, matching government ID template, and basic personal data.
The Complete Dark Web Fraud Stack
Personal data
$1–$5
Real name, government ID number, DOB, address — purchased from data breach markets.
AI-generated photo
$5–$15
A GAN-generated face photo matching the demographic profile of the stolen identity.
Synthetic document
$10–$30
Pixel-perfect government ID template with fake data embedded.
Deepfake video
$20–$50
Short video of the fake face performing liveness actions (blink, turn, smile).
Virtual camera / injection tool
$10–$20/mo
Tool to pipe the deepfake video directly into the verification flow.
Residential proxy
$3–$10/mo
A legitimate-looking IP from the target country to avoid geographic flags.
Total cost to assemble a complete synthetic identity fraud kit: approximately $49–$130, counting one month of each subscription tool. Expected return from a single successful fraudulent account opening at a fintech lender: $2,000–$50,000+. The ROI of identity fraud has never been higher.
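As a quick sanity check, the line items in the stack above sum as follows (one month of each subscription item):

```python
# Price ranges copied from the dark-web fraud stack listed above.
stack = {
    "personal data":            (1, 5),
    "AI-generated photo":       (5, 15),
    "synthetic document":       (10, 30),
    "deepfake video":           (20, 50),
    "injection tool (1 mo)":    (10, 20),
    "residential proxy (1 mo)": (3, 10),
}
low = sum(lo for lo, hi in stack.values())
high = sum(hi for lo, hi in stack.values())
print(low, high)  # 49 130
```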
Section 08
Sector-Specific Fraud Rates
Identity fraud does not affect every industry equally. Combined with findings from Sumsub, Veriff, Entrust, and TransUnion, our data reveals a clear hierarchy of exposure.
Net fraud rate by sector — % of verification attempts flagged
Spotlight: Cryptocurrency
Deepfake-related incidents in crypto rose 654% from 2023 to 2024. The sector remains the preferred target for high-sophistication attacks — fast account opening, large transaction limits, pseudonymous by design, and historically under-regulated KYC requirements.
Spotlight: Auto Lending
Auto lenders suffer disproportionately from synthetic identity fraud specifically — not deepfakes, but the slow construction of a synthetic identity over months to build a plausible credit profile. In the first half of 2024 alone, auto lenders lost $2 billion to synthetic identity fraud — double bank credit card losses.
Section 09
Human vs. AI Detection Accuracy
The gap between what a human reviewer can detect and what AI models can detect is not marginal. It is existential for any fraud programme that relies on manual review.
24.5%
Human detection rate for high-quality deepfakes
0.1%
Of people in iProov study correctly identified all fakes
>95%
AI model accuracy in lab conditions (accuracy drops by 45–50% in real-world conditions)
| Detection Method | Tier 1 | Tier 2 | Tier 3 | Tier 4 | Injection |
|---|---|---|---|---|---|
| Human review | 91% | 62% | 28% | 9% | 4% |
| Single-layer liveness | 88% | 71% | 52% | 31% | 42% |
| PAD + deepfake model | 99% | 97% | 89% | 74% | 58% |
| deepidv multi-layer | 99.8% | 99.6% | 99.2% | 98.7% | 99.7% |
“The moment your fraud programme assumes a human can catch what machines miss, you have already lost. In 2026, the question is not whether to automate — it is how many layers your automation has.”
Deep Identity Labs Team
deepidv Research Division
Section 10
Fraud Trends to Watch in 2026
Agentic Fraud — AI Orchestrates the Full Attack
We are beginning to see the first evidence of fully automated fraud pipelines in which an AI agent handles every step — from purchasing stolen data to generating the deepfake, assembling the document, and executing the verification attempt — without any human involvement. Google's Cybersecurity Forecast 2026 identified AI-orchestrated social engineering as the fastest-growing attack category.
Do this
Deploy behavioural analytics that flag non-human interaction patterns at the session level. An AI agent moves differently through a verification flow than a human.
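One hedged sketch of such a session-level signal: the variability of the gaps between user events. Human sessions tend to show irregular timing, while scripted agents are often near-uniform. The thresholds below are illustrative, not tuned; production systems combine many such features.

```python
import statistics

def timing_uniformity_score(event_timestamps_ms):
    """Coefficient of variation of inter-event gaps. Irregular gaps
    (high score) look human; near-uniform gaps (low score) suggest
    a scripted agent driving the verification flow."""
    gaps = [b - a for a, b in zip(event_timestamps_ms, event_timestamps_ms[1:])]
    mean = statistics.mean(gaps)
    return statistics.stdev(gaps) / mean if mean else 0.0

human  = [0, 840, 2110, 2600, 4930, 5310]   # irregular, human-like
script = [0, 500, 1000, 1500, 2000, 2500]   # suspiciously uniform
print(timing_uniformity_score(human) > 0.3)    # True
print(timing_uniformity_score(script) < 0.05)  # True
```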
Multi-Modal Deepfakes — Audio + Video Combined
Single-modality deepfakes are increasingly detectable. The next wave is multi-modal: a synthetic face combined with a cloned voice, designed to defeat both visual liveness and any voice-based authentication. In our dataset, combined audio-video deepfakes had a single-layer bypass rate of 41% — the highest of any attack type.
Do this
Ensure your liveness check and voice verification run independent detection pipelines. Do not rely on a combined system that treats voice and video as a single input.
Synthetic Credit Histories — The Long Game
The most sophisticated synthetic identity operations do not attempt fraud immediately. They establish the identity, build a thin but legitimate credit history over 6–24 months, then execute a large-value fraud event before disappearing. TransUnion found a 153% increase in these 'sleeper' synthetic identities in 2024.
Do this
Implement re-verification triggers on high-value transaction events, not just at account opening. An identity that passed KYC 18 months ago may no longer be trustworthy.
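A minimal sketch of such a trigger, with illustrative thresholds; the amount, staleness window, and function names are assumptions for this example, not deepidv's API:

```python
from datetime import datetime, timedelta

HIGH_VALUE_THRESHOLD = 10_000               # illustrative amount
MAX_VERIFICATION_AGE = timedelta(days=365)  # illustrative staleness window

def needs_reverification(amount, last_verified_at, now=None):
    """Re-run an identity check on high-value events when the last
    KYC pass is stale, rather than trusting onboarding indefinitely."""
    now = now or datetime.utcnow()
    stale = (now - last_verified_at) > MAX_VERIFICATION_AGE
    return amount >= HIGH_VALUE_THRESHOLD and stale

now = datetime(2026, 1, 15)
print(needs_reverification(25_000, datetime(2024, 6, 1), now))   # True: stale
print(needs_reverification(25_000, datetime(2025, 11, 1), now))  # False: recent
```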
Injection Attack Commoditisation
Tools that allow non-technical fraudsters to execute virtual camera injections are now available for $5–$20 per month. The new CEN/TS 18099 European technical specification and forthcoming ISO 25456 standard reflect regulatory recognition that injection detection is now a baseline requirement, not an advanced capability.
Do this
Ask your identity verification provider for their injection attack bypass rate in adversarial testing. If they don't have an answer, they don't have injection detection.
Regulatory Pressure Accelerates
The EU AI Act (August 2025), the US TAKE IT DOWN Act, and emerging ISO 25456 standards are tightening requirements on biometric verification systems. By 2026, identity verification is not just a product decision — it is a compliance requirement.
Do this
Document your verification stack with your legal and compliance teams now. Regulators are beginning to audit what checks were in place when a synthetic identity slipped through.
Section 11
How deepidv Defends Against These Threats
This report was written to demonstrate why the architecture of your identity verification stack matters more in 2026 than it ever has. Single-layer checks are not just insufficient — they are actively dangerous.
Deepfake Detection
Frequency-domain forensics and GAN artefact detection catch Tier 3 and Tier 4 fakes that fool standard liveness.
Injection Attack Detection
Validates the integrity of the camera feed itself, detecting virtual cameras, RTMP exploits, and API-level payload injections.
Document Forensics
Pixel-level metadata analysis, font consistency checks, and compression artefact detection across all major document types.
Cross-Source Identity Validation
Confirms identity claims against 500+ financial data sources and public records to catch synthetic thin-file patterns.
Agentic Re-Verification
Background agents continuously monitor identity signals after onboarding, catching sleeper synthetics before they execute.
Behavioural & Session Analysis
Analyses how a user moves through the verification flow — timing, device signals, session fingerprinting — to detect AI-orchestrated attacks.
See it in action
Test deepidv's multi-layer detection from your browser
No account required. Test synthetic ID detection, liveness checks, and deepfake analysis in under two minutes.
Section 12
Frequently Asked Questions
Data attribution: deepidv internal verification network data, January 2024 – February 2026. Industry statistics sourced from Sumsub Identity Fraud Report 2025–2026, Veriff Identity Fraud Report 2026, Entrust 2026 Identity Fraud Report, TransUnion, Federal Reserve Bank of Boston, FTC, FBI IC3, iProov, Gartner, and Keepnet Labs.
