Research Report · March 2026

Fraudulent ID & Deepfake Benchmark Report 2026

We examined 4,000,000 synthetic identities and 10,000+ deepfake attacks. The results fundamentally changed how we understand modern fraud.

4M+

Synthetic IDs analysed

10K+

Deepfakes examined

2,137%

Deepfake growth since 2022

$35B

Annual synthetic fraud losses


TL;DR — Key Findings

  1. Synthetic identity fraud increased 311% between Q1 2024 and Q1 2025 — the fastest growth rate ever recorded in our network.

  2. Of the 4,000,000 synthetic IDs we analysed, 23% used a real government ID number combined with fabricated personal details.

  3. Deepfake-based fraud now accounts for 6.5% of all fraud attacks globally — a 2,137% increase from 2022.

  4. 1 in 7 deepfake attempts successfully bypassed a single-layer liveness check. Multi-layer detection reduced bypass success to under 0.3%.

  5. A complete synthetic identity with AI-generated photo is available on the dark web for as little as $15.

  6. Injection attacks (bypassing the physical camera entirely) surged 9× year-over-year and are the fastest-growing attack vector.

  7. Human reviewers correctly identified deepfakes only 24.5% of the time — worse than random chance on high-quality fakes.

  8. The cryptocurrency sector saw a 654% increase in deepfake-related incidents from 2023 to 2024.

Section 01

Data Methodology

This report reflects data from deepidv's identity verification network between January 2024 and February 2026, spanning fintech, e-commerce, banking, gaming, real estate, and HR verticals.

deepidv Verification Network

All live identity verification events processed across deepidv customer integrations.

Flagged Fraud Queue

Every auto-flagged and customer-escalated submission in our pipeline — 4,000,000+ synthetic identity events total.

Deepfake Attack Log

Over 10,000 confirmed deepfake attempts captured by liveness, injection, and biometric matching layers.

Key Definitions

Term | Definition
Synthetic Identity | An identity constructed by combining real personal data (e.g. a real SSN) with fabricated biographical information — name, date of birth, address, photo.
Deepfake Attack | Any use of AI-generated or manipulated audio, image, or video to impersonate a person during an identity verification check.
Injection Attack | Synthetic biometric data fed directly into a verification pipeline at software level, bypassing the physical camera entirely.
Presentation Attack | A physical artefact (photo, mask, screen replay) held to a real camera to deceive a liveness check.
Bypass Rate | The % of fraudulent submissions that passed a verification check undetected.
DaaS | Deepfake-as-a-Service — commercial dark-web tooling that sells packaged deepfake generation for fraud.

Section 02

The Scale of Synthetic Identity Fraud

Synthetic identity fraud has become the fastest-growing financial crime in the world. Between Q1 2024 and Q1 2025, our network recorded a 311% increase in synthetic identity document fraud. That is growth over a single year, from one first quarter to the next, not a multi-year trend.

311%

Increase in synthetic document fraud Q1 2024–Q1 2025

$35B

Annual losses from synthetic ID fraud globally

21%

Of first-party frauds now involve a synthetic identity

Year-over-Year Growth Across Attack Types

Attack Type | Growth
Synthetic document fraud (Q1 2024 → Q1 2025) | +311%
Deepfake-enabled fraud (since 2022) | +2,137%
Injection attacks (year-over-year) | 9×
Deepfake vishing attacks (Q4 2024 → Q1 2025) | +1,600%
Synthetic identity attempts (2019 → 2025) | +184%

We've stopped thinking about synthetic identity fraud as a niche financial crime. It is now as common as credit card fraud was fifteen years ago — and it is growing faster.


Shawn-Marc Melo

Founder & CEO, deepidv

Section 03

What We Found in 4,000,000 Synthetic IDs

Our dataset of 4,000,000 confirmed or suspected synthetic identity submissions revealed several patterns that surprised even our own fraud research team.

How These Identities Were Constructed

  • Fully synthetic — all data AI-generated: 41%
  • Mixed — real ID number + fabricated personal data: 23%
  • Stolen identity with altered photo/address: 28%
  • Account takeover with document swap: 8%

Key finding: In 2024, the cost to generate a fully synthetic identity — including an AI-generated government ID, convincing face photo, and fabricated credit history — dropped to under $15 on dark web marketplaces. In 2022, the same package cost $400–$800 and required specialist skills.

The Documents Fraudsters Fake Most

Document Type | % of Submissions | Primary Forgery Method | Detection Difficulty
National / Government ID | 46% | AI-generated template overlay | High
Passport | 21% | Stolen base + modified personal data | Medium-High
Driver's Licence | 18% | Physical print + digital scan | Medium
Utility Bill / Proof of Address | 9% | AI-generated (GPT + design tool) | Low-Medium
Bank Statement | 4% | AI-generated with realistic formatting | High
Pay Stub / Tax Record | 2% | Template-based AI generation | High

Signals That Triggered a Fraud Flag

% of flagged cases where signal was present

  • Facial biometric inconsistency: 68%
  • Document metadata anomaly (font, DPI, compression): 54%
  • Identity data mismatch across sources: 47%
  • No matching credit history / thin file: 43%
  • Liveness check failure: 38%
  • Injection attack signal detected: 29%
  • Device / session anomaly: 24%
  • Behavioural pattern flag: 18%

The shift from physical forgeries to AI-generated documents is the single most important change we have observed in fraud in the past five years. You can no longer trust that something looks real.


Didi Ogbowu

Senior Researcher & Backend Engineer, deepidv

Section 04

Deepfake Attack Analysis

In 2022, deepfakes accounted for a fraction of a percent of fraud attempts. By Q1 2025, that number had risen to 6.5% of all fraud attacks globally — a 2,137% increase. There were 19% more deepfake incidents in Q1 2025 alone than in all of 2024.

6.5%

Of all fraud attacks now involve deepfakes

2,137%

Growth in deepfake fraud since 2022

40%

Of biometric fraud uses deepfakes

$3B+

US deepfake losses Jan–Sep 2025

The Deepfake Fraud Timeline

2022: 0.3% of fraud

Highly technical, rare, expensive. Mostly crypto-specific.

2023: 1.1% of fraud

GAN tools become widely available. Crypto sector accounts for 88% of deepfake fraud cases.

2024: 3.8% of fraud

DaaS emerges. $360M in US losses. Face swap attacks surge 704%.

Q1 2025: 6.5% of fraud

More incidents in one quarter than all of 2024. Injection attacks surge 9×.

2026 (projected): ~12–15% of fraud

Gartner: one-third of enterprises will consider IDV unreliable due to AI deepfakes.

Section 05

What We Found in 10,000+ Deepfakes

Our deepfake dataset of over 10,000 confirmed attempts is one of the largest independently compiled datasets in the identity verification industry. What we found challenged several of our own assumptions.

Attack Type Distribution & Bypass Rates

Attack Type | % of Dataset | Single-Layer Bypass | Multi-Layer Bypass | Sophistication
Face Swap (video) | 34% | 22% | 0.4% | Medium
GAN-Generated Synthetic Face | 28% | 31% | 0.2% | High
Injection Attack (virtual camera) | 16% | 58% | 0.3% | Very High
Video Replay (recorded real user) | 14% | 11% | 0.1% | Low
Audio + Video Combined | 3% | 41% | 0.5% | Very High
3D Mask / Print Attack | 5% | 8% | <0.1% | Low

Most alarming finding: Injection attacks bypass single-layer liveness checks 58% of the time. A standard "blink and turn" liveness test offers virtually no protection against a well-executed injection attack. Multi-layer detection reduces this to 0.3%.
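The arithmetic behind that reduction is worth making explicit: if detection layers fail roughly independently, an attack must defeat every layer, so per-layer bypass rates multiply. A minimal Python sketch — the 58% figure comes from the table above, but the other per-layer rates are hypothetical values chosen for illustration, not deepidv's measured numbers:

```python
def combined_bypass(per_layer_rates):
    """Probability an attack defeats every layer, assuming the layers
    fail independently. Real layers are correlated, so treat this as a
    back-of-the-envelope estimate rather than a guarantee."""
    p = 1.0
    for rate in per_layer_rates:
        p *= rate
    return p

# Injection attack vs. a hypothetical three-layer stack:
# passive liveness (58% bypass), injection-signal detector (5%),
# device-integrity check (10%).
layered = combined_bypass([0.58, 0.05, 0.10])
print(f"{layered:.2%}")  # 0.29%, in line with the ~0.3% multi-layer figure
```

The independence assumption is optimistic in practice, which is why diverse layers (biometric, device, behavioural) beat several variants of the same check.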

Deepfake Quality Is Skyrocketing

In 2023, our reviewers could identify most deepfakes by eye. By 2025, that was no longer true for the top tier. We stratified all 10,000+ deepfakes by quality:

  • Tier 1 — Immediately obvious (pixelation, blur, artifacts): 12%
  • Tier 2 — Detectable with careful review: 29%
  • Tier 3 — Convincing to untrained human reviewers: 38%
  • Tier 4 — Indistinguishable from real by human review: 21%

Over half our dataset — 59% of deepfakes — fall into Tier 3 or 4. In a 2025 iProov study, only 0.1% of participants correctly identified all fake and real media presented to them. The average human detection rate for high-quality deepfakes: just 24.5%.

A Tier 4 deepfake passes every human review check with a high confidence score. We only catch them because our models look for things a human eye cannot see: lighting physics inconsistencies, frequency-domain anomalies, and sub-pixel compression artefacts.


Omar Tahir

CTO, deepidv
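To make the frequency-domain idea concrete, here is a toy illustration in Python (the report publishes no code, so this is purely pedagogical and is not deepidv's production detector, which works on 2-D image spectra). It measures the share of spectral energy above a cutoff frequency with a naive 1-D DFT; GAN up-sampling tends to leave periodic artefacts that shift statistics like this away from natural-image behaviour:

```python
import cmath
import random
from math import cos, pi

def dft_magnitudes(x):
    """Naive O(n^2) discrete Fourier transform magnitudes."""
    n = len(x)
    return [abs(sum(v * cmath.exp(-2j * pi * k * t / n)
                    for t, v in enumerate(x)))
            for k in range(n)]

def high_freq_ratio(x, cutoff=0.25):
    """Fraction of spectral energy at normalised frequencies above
    `cutoff`. The cutoff value is an illustrative choice."""
    n = len(x)
    mags = dft_magnitudes(x)
    total = sum(m * m for m in mags)
    high = sum(m * m for k, m in enumerate(mags)
               if min(k, n - k) / n > cutoff)
    return high / total

smooth = [cos(2 * pi * t / 64) for t in range(64)]  # one slow oscillation
random.seed(0)
noisy = [random.gauss(0, 1) for _ in range(64)]     # roughly flat spectrum
assert high_freq_ratio(smooth) < 0.01 < high_freq_ratio(noisy)
```

Real forensic models learn these spectral signatures rather than thresholding a single ratio, but the underlying signal is the same: synthetic generators leave energy where natural capture pipelines do not.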

The Tools Fraudsters Are Using

Tool / Method | % of Cases | Bypass Target | Who Can Buy It
Commercial DaaS platforms (dark web) | 44% | Liveness + ID photo match | Any buyer ($10–$50)
Open-source GAN models (DeepFaceLab, etc.) | 31% | Liveness check | Technical users (free)
Real-time face swap tools (virtual camera) | 18% | Video call + live KYC | Non-technical ($5–$20/mo)
Custom bespoke models (advanced actors) | 7% | Multi-layer enterprise KYC | Organised crime groups

Section 06

Attack Vector Taxonomy

Attacks fall into two primary families — each requiring a different defence posture.

Presentation Attack

Risk: Medium

A physical or digital artefact held to the real camera. Standard PAD (Presentation Attack Detection) is reasonably effective against these.

  • Photo on phone screen
  • Printed photo
  • 3D silicone mask
  • Video loop replay

Injection Attack

Risk: Very High

Synthetic biometric data injected directly into the software pipeline. The camera is bypassed entirely — PAD cannot detect it.

  • Virtual camera driver
  • iOS RTMP server exploit
  • Emulator injection
  • API-level data injection

Attack Vector | Camera? | PAD Detects? | deepidv Defence
Photo / screen replay | Yes | Yes | Liveness + depth analysis
3D mask | Yes | Usually | Active liveness + texture analysis
Face swap video | Yes | Sometimes | Deepfake detection model
GAN synthetic face | Yes | Rarely | Frequency-domain forensics
Virtual camera injection | No | No | Injection detection layer
iOS RTMP injection | No | No | Device integrity + injection detection
Emulator attack | No | No | Device fingerprinting + session analysis
API-level synthetic payload | No | No | Payload validation + anomaly detection

Section 07

The Deepfake-as-a-Service Economy

Perhaps the most unsettling finding in our research is not the sophistication of the attacks — it is how accessible they have become. Deepfake fraud has industrialised.

$5

Deepfake selfie generation

One-time AI face photo, custom subject, sufficient for most static ID photo checks.

$15

Ready-to-use synthetic identity

Complete fabricated identity: name, DOB, address, fake financials, and AI photo.

$10–50

Full deepfake identity package

AI face, video, matching government ID template, and basic personal data.

The Complete Dark Web Fraud Stack

01

Personal data

$1–$5

Real name, government ID number, DOB, address — purchased from data breach markets.

02

AI-generated photo

$5–$15

A GAN-generated face photo matching the demographic profile of the stolen identity.

03

Synthetic document

$10–$30

Pixel-perfect government ID template with fake data embedded.

04

Deepfake video

$20–$50

Short video of the fake face performing liveness actions (blink, turn, smile).

05

Virtual camera / injection tool

$10–$20/mo

Tool to pipe the deepfake video directly into the verification flow.

06

Residential proxy

$3–$10/mo

A legitimate-looking IP from the target country to avoid geographic flags.

Total cost to assemble a complete synthetic identity fraud kit: approximately $49–$120. Expected return from a single successful fraudulent account opening at a fintech lender: $2,000–$50,000+. The ROI of identity fraud has never been higher.
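That return-on-investment claim is easy to sanity-check with the kit and payout figures above. A quick Python calculation (the bounds pair the cheapest kit with the largest payout and vice versa):

```python
def roi_multiple(expected_payout, kit_cost):
    """How many times over a single successful fraud repays the kit."""
    return expected_payout / kit_cost

# Bounds from the fraud-stack pricing above.
worst_case = roi_multiple(2_000, 120)   # smallest payout, priciest kit
best_case = roi_multiple(50_000, 49)    # largest payout, cheapest kit
print(f"ROI: {worst_case:.0f}x to {best_case:.0f}x")  # ROI: 17x to 1020x
```

Even the worst case returns the kit cost roughly seventeen times over from a single success, which is why volume attacks remain profitable despite high rejection rates.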

Section 08

Sector-Specific Fraud Rates

Identity fraud does not affect every industry equally. Combined with findings from Sumsub, Veriff, Entrust, and TransUnion, our data reveals a clear hierarchy of exposure.

Net fraud rate by sector — % of verification attempts flagged

  • E-commerce & Marketplaces: 19.2%
  • Crypto / Web3 Platforms: 15.4%
  • Online Lending / BNPL: 8.7%
  • Financial Services (general): 5.5%
  • Online Gaming / iGaming: 4.8%
  • Real Estate: 3.2%
  • HR / Hiring Platforms: 2.9%
  • Healthcare: 2.1%

Spotlight: Cryptocurrency

Deepfake-related incidents in crypto rose 654% from 2023 to 2024. The sector remains the preferred target for high-sophistication attacks — fast account opening, large transaction limits, pseudonymous by design, and historically under-regulated KYC requirements.

Spotlight: Auto Lending

Auto lenders suffer disproportionately from synthetic identity fraud specifically — not deepfakes, but the slow construction of a synthetic identity over months to build a plausible credit profile. In the first half of 2024 alone, auto lenders lost $2 billion to synthetic identity fraud, double the equivalent losses on bank credit cards.

Section 09

Human vs. AI Detection Accuracy

The gap between what a human reviewer can detect and what AI models can detect is not marginal. It is existential for any fraud programme that relies on manual review.

24.5%

Human detection rate for high-quality deepfakes

0.1%

Of people in iProov study correctly identified all fakes

>95%

AI model accuracy in lab conditions (accuracy can drop 45–50% in real-world conditions)

Detection Method | Tier 1 | Tier 2 | Tier 3 | Tier 4 | Injection
Human review | 91% | 62% | 28% | 9% | 4%
Single-layer liveness | 88% | 71% | 52% | 31% | 42%
PAD + deepfake model | 99% | 97% | 89% | 74% | 58%
deepidv multi-layer | 99.8% | 99.6% | 99.2% | 98.7% | 99.7%
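Combining the per-tier rates above with the quality-tier distribution from Section 05 gives an expected catch rate across a representative deepfake mix. A quick Python cross-check using only numbers already in this report (injection attacks are excluded because the tier mix covers only the four quality tiers):

```python
# Quality-tier mix (Section 05) and per-tier detection rates (table above).
tier_share = {"Tier 1": 0.12, "Tier 2": 0.29, "Tier 3": 0.38, "Tier 4": 0.21}
human_review = {"Tier 1": 0.91, "Tier 2": 0.62, "Tier 3": 0.28, "Tier 4": 0.09}
multi_layer = {"Tier 1": 0.998, "Tier 2": 0.996, "Tier 3": 0.992, "Tier 4": 0.987}

def expected_catch_rate(per_tier):
    """Detection rate weighted by the observed quality mix."""
    return sum(tier_share[t] * per_tier[t] for t in tier_share)

print(f"Human review: {expected_catch_rate(human_review):.1%}")  # 41.4%
print(f"Multi-layer:  {expected_catch_rate(multi_layer):.1%}")   # 99.3%
```

On the observed mix, manual review misses the majority of deepfakes while the layered pipeline misses fewer than one in a hundred, which is the gap the quote below calls existential.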

The moment your fraud programme assumes a human can catch what machines miss, you have already lost. In 2026, the question is not whether to automate — it is how many layers your automation has.


Deep Identity Labs Team

deepidv Research Division

Section 10

Trends to Watch in 2026

01

Agentic Fraud — AI Orchestrates the Full Attack

We are beginning to see the first evidence of fully automated fraud pipelines in which an AI agent handles every step — from purchasing stolen data to generating the deepfake, assembling the document, and executing the verification attempt — without any human involvement. Google's Cybersecurity Forecast 2026 identified AI-orchestrated social engineering as the fastest-growing attack category.

Do this

Deploy behavioural analytics that flag non-human interaction patterns at the session level. An AI agent moves differently through a verification flow than a human.
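One simple session-level signal of the kind described above: a scripted agent's event timing is typically faster and more uniform than a human's. A minimal sketch in Python; the function name and threshold values are assumptions chosen for illustration, not production tuning:

```python
from statistics import mean, stdev

def looks_scripted(intervals_ms, min_cv=0.30, min_mean_ms=150):
    """Flag a session whose inter-event intervals (keystrokes, taps,
    page transitions) are too fast or too regular to be human.
    Thresholds are illustrative, not deepidv's production values."""
    if len(intervals_ms) < 5:
        return False                      # not enough signal to judge
    m = mean(intervals_ms)
    cv = stdev(intervals_ms) / m          # coefficient of variation
    return m < min_mean_ms or cv < min_cv

print(looks_scripted([50, 52, 49, 51, 50]))        # True: fast and uniform
print(looks_scripted([420, 980, 310, 1500, 640]))  # False: human-like jitter
```

Production systems combine dozens of such features in a model rather than hand-set rules, but regularity of timing remains one of the strongest single discriminators.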

02

Multi-Modal Deepfakes — Audio + Video Combined

Single-modality deepfakes are increasingly detectable. The next wave is multi-modal: a synthetic face combined with a cloned voice, designed to defeat both visual liveness and any voice-based authentication. In our dataset, combined audio-video deepfakes had a single-layer bypass rate of 41% — the highest of any attack type.

Do this

Ensure your liveness check and voice verification run independent detection pipelines. Do not rely on a combined system that treats voice and video as a single input.

03

Synthetic Credit Histories — The Long Game

The most sophisticated synthetic identity operations do not attempt fraud immediately. They establish the identity, build a thin but legitimate credit history over 6–24 months, then execute a large-value fraud event before disappearing. TransUnion found a 153% increase in these 'sleeper' synthetic identities in 2024.

Do this

Implement re-verification triggers on high-value transaction events, not just at account opening. An identity that passed KYC 18 months ago may no longer be trustworthy.
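A re-verification trigger of this kind can start as a simple rule on transaction value and KYC age. A hypothetical sketch in Python; the $10,000 and 365-day thresholds are illustrative, not a compliance recommendation:

```python
from datetime import datetime, timedelta

def needs_reverification(last_kyc, amount, now,
                         high_value=10_000, max_kyc_age_days=365):
    """Require step-up verification when a high-value event arrives
    and the last KYC check is stale. Thresholds are illustrative."""
    stale = now - last_kyc > timedelta(days=max_kyc_age_days)
    return amount >= high_value and stale

now = datetime(2026, 3, 1)
print(needs_reverification(datetime(2024, 6, 1), 25_000, now))   # True: stale KYC
print(needs_reverification(datetime(2025, 12, 1), 25_000, now))  # False: recent KYC
```

The conjunctive rule mirrors the sleeper pattern described above: the risky combination is a large transaction on an identity whose verification predates the fraud window.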

04

Injection Attack Commoditisation

Tools that allow non-technical fraudsters to execute virtual camera injections are now available for $5–$20 per month. The new CEN/TS 18099 European technical specification and forthcoming ISO 25456 standard reflect regulatory recognition that injection detection is now a baseline requirement, not an advanced capability.

Do this

Ask your identity verification provider for their injection attack bypass rate in adversarial testing. If they don't have an answer, they don't have injection detection.

05

Regulatory Pressure Accelerates

The EU AI Act (August 2025), the US TAKE IT DOWN Act, and emerging ISO 25456 standards are tightening requirements on biometric verification systems. By 2026, identity verification is not just a product decision — it is a compliance requirement.

Do this

Document your verification stack with your legal and compliance teams now. Regulators are beginning to audit what checks were in place when a synthetic identity slipped through.

Section 11

How deepidv Defends Against These Threats

This report was written to demonstrate why the architecture of your identity verification stack matters more in 2026 than it ever has. Single-layer checks are not just insufficient — they are actively dangerous.



Data attribution: deepidv internal verification network data, January 2024 – February 2026. Industry statistics sourced from Sumsub Identity Fraud Report 2025–2026, Veriff Identity Fraud Report 2026, Entrust 2026 Identity Fraud Report, TransUnion, Federal Reserve Bank of Boston, FTC, FBI IC3, iProov, Gartner, and Keepnet Labs.