The Deep Brief · Apr 6, 2026 · 5 min read

Deepfakes Defeated a Dutch Bank's Face Recognition — 46 Times

A man used deepfake technology to open 46 fraudulent bank accounts at ABN AMRO, exposing the limits of biometric verification systems that rely on liveness detection alone.

Shawn-Marc Melo
Founder & CEO at deepidv

In March 2026, prosecutors told an Amsterdam court that a man had opened 46 ABN AMRO bank accounts in other people's names by using deepfake technology to bypass the bank's facial recognition checks. Each account was opened remotely, using the bank's standard digital onboarding flow. Each passed the biometric verification. Forty-six times.

This is not a theoretical attack. It is not a proof-of-concept demonstrated in a lab. It happened at one of Europe's largest banks, in production, against live identity verification systems.

How the Attack Worked

The attack exploited a fundamental weakness in camera-based biometric verification: it trusts the input. Standard liveness detection checks whether the person in front of the camera is physically present — evaluating blink patterns, head movement, and 3D depth. But when the camera feed itself is compromised, these checks are meaningless.

Injection attacks work by intercepting the video stream between the camera and the verification software, replacing the legitimate feed with pre-recorded or synthetically generated video. The verification system receives what appears to be a live capture of a real person. It evaluates the liveness signals, finds nothing anomalous, and approves the verification.
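
The core problem described above — the verifier can only inspect pixel content, never the provenance of the frames — can be illustrated with a minimal sketch. Everything here is hypothetical: the function names, the toy "liveness" heuristic, and the simulated sources are illustrations, not any bank's actual pipeline.

```python
import numpy as np

def liveness_verify(frames):
    """Toy stand-in for a verifier: it can only inspect pixel content,
    not where the frames physically came from."""
    # Hypothetical heuristic: frame-to-frame variation suggests a live scene.
    diffs = [np.abs(a.astype(int) - b.astype(int)).mean()
             for a, b in zip(frames, frames[1:])]
    return all(d > 0 for d in diffs)

def camera_source(n=5):
    """Frames captured from real hardware (simulated here with noise)."""
    return [np.random.randint(0, 256, (8, 8), dtype=np.uint8) for _ in range(n)]

def injected_source(n=5):
    """Frames replayed from a pre-recorded or synthetic video
    (simulated the same way -- which is exactly the point)."""
    return [np.random.randint(0, 256, (8, 8), dtype=np.uint8) for _ in range(n)]

# The verifier's decision is identical for both sources: once the feed is
# replaced upstream, the system has no way to tell capture from injection.
print(liveness_verify(camera_source()))    # True
print(liveness_verify(injected_source()))  # True
```

Both calls approve, because by the time frames reach the verifier, their origin has been erased.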

The ABN AMRO case demonstrates that this attack vector is now being used at scale — not by state-sponsored actors with custom tools, but by individuals with access to consumer-grade deepfake software.

Why Liveness Detection Failed

Liveness detection was designed for a specific threat model: someone holding a printed photo or a screen in front of a camera. Against that attack, it works. Check for blink. Check for depth. Check for movement. Pass.

But the threat model has changed. Modern injection attacks do not present a photo or a screen to the camera. They replace the camera feed entirely. The deepfake video includes natural blinking, head movement, and apparent depth — because the source material is either a real video of a real person or a synthesis sophisticated enough to include these signals.

In this threat model, liveness detection is checking whether a deepfake looks alive. The answer is always yes — because that is exactly what deepfakes are designed to do.

What Detection Should Look Like

Effective deepfake detection asks a different question: has this media been generated, manipulated, or injected?

This requires evaluating forensic signals that generative models fail to replicate accurately: compression artifact patterns, frequency domain anomalies, noise consistency across the frame, temporal coherence between frames, and subtle physiological signals that are present in genuine captures but absent or distorted in synthetic media.
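
One of these signals, noise consistency across the frame, can be sketched in a few lines. This is an illustrative heuristic only — real forensic detectors use far more sophisticated residual models — and the patch size, statistics, and test frames below are assumptions for demonstration.

```python
import numpy as np

def noise_consistency_score(frame, patch=16):
    """Estimate per-patch noise level and measure how uniform it is.
    Genuine captures tend to carry consistent sensor noise; composited or
    generated regions often break that consistency."""
    h, w = frame.shape
    levels = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            p = frame[y:y + patch, x:x + patch].astype(float)
            residual = p - p.mean()         # crude high-pass: remove patch mean
            levels.append(residual.std())   # noise-level estimate for the patch
    levels = np.array(levels)
    # Coefficient of variation of the noise levels: low = consistent frame.
    return levels.std() / (levels.mean() + 1e-9)

rng = np.random.default_rng(0)
# Genuine-like frame: uniform sensor noise everywhere.
genuine = rng.normal(128, 5, (64, 64))
# Tampered-like frame: one quadrant carries a different noise signature.
tampered = genuine.copy()
tampered[:32, :32] = rng.normal(128, 0.5, (32, 32))

print(noise_consistency_score(genuine) < noise_consistency_score(tampered))  # True
```

The tampered frame scores markedly higher because its pasted region breaks the noise uniformity a single sensor would produce.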

It also requires detecting injection itself — evaluating whether the video stream originated from a physical camera or was inserted into the pipeline through software. This is a fundamentally different technical challenge than liveness detection, and it requires purpose-built detection infrastructure.
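
At its most naive, questioning the capture source might look like the sketch below: flagging video devices whose names match known virtual-camera software. Production injection detection goes far deeper (driver signatures, hardware attestation, stream-timing analysis); the marker list and device names here are hypothetical examples, not a real platform API.

```python
# Names associated with common virtual-camera software (illustrative list).
VIRTUAL_CAMERA_MARKERS = ("obs", "virtual", "manycam", "droidcam", "v4l2loopback")

def looks_virtual(device_name: str) -> bool:
    """Flag a video device whose name suggests a software-emulated camera."""
    name = device_name.lower()
    return any(marker in name for marker in VIRTUAL_CAMERA_MARKERS)

# Hypothetical device list as a platform might report it.
devices = ["FaceTime HD Camera", "OBS Virtual Camera", "Logitech C920"]
for d in devices:
    print(d, "->", "suspect" if looks_virtual(d) else "ok")
```

A name check alone is trivially evaded (virtual cameras can spoof names), which is why the article calls this a fundamentally different challenge requiring purpose-built infrastructure rather than a single heuristic.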

The Broader Implications

The ABN AMRO case is a single data point in a much larger trend. Deepfake usage in biometric fraud attempts surged 58% year-over-year, according to FinTech Global's 2026 identity fraud analysis. The UK government predicted that 8 million deepfakes would be shared in 2025, up from 500,000 in 2023. And Gartner projects that by 2026, 30% of enterprises will no longer consider identity verification solutions reliable in isolation.

For any organization that verifies identities through a camera — banks, crypto exchanges, marketplaces, HR platforms, dating apps — the question is no longer whether deepfake attacks will target your system. The question is whether your system can tell the difference.

Deepfake Bank Fraud FAQ

How did the deepfake attack on ABN AMRO work?
A man used deepfake technology to impersonate other people during ABN AMRO's remote identity verification process, successfully opening 46 bank accounts in their names by bypassing the bank's facial recognition checks.
Why didn't liveness detection stop the attack?
Liveness detection checks whether a person appears physically present. Injection attacks replace the camera feed with synthetic video that includes natural blinking, movement, and depth — passing all standard liveness heuristics.
What is an injection attack?
An injection attack intercepts the video stream between the camera hardware and the verification software, replacing the legitimate feed with pre-recorded or AI-generated video. The verification system receives what appears to be a live capture.
How common are deepfake attacks against identity verification?
Deepfake usage in biometric fraud attempts surged 58% year-over-year in 2025-2026, and deepfake incidents in the fintech sector increased 700% in 2023 compared to 2022.
How can organizations protect against deepfake fraud?
Multi-layered verification that combines document authentication, biometric matching, forensic deepfake detection, behavioral analytics, and injection detection provides stronger protection than any single check.
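
The layered approach described above can be sketched as a weighted combination of independent checks. The signal names, weights, and threshold below are illustrative assumptions, not a production policy.

```python
def verification_decision(scores, threshold=0.8):
    """Combine independent checks so no single passed check is decisive.
    Weights and threshold are illustrative, not a recommended policy."""
    weights = {
        "document": 0.20,   # document authenticity
        "biometric": 0.20,  # face match against the document
        "deepfake": 0.25,   # forensic deepfake detection
        "behavior": 0.15,   # behavioral analytics
        "injection": 0.20,  # capture-source / injection detection
    }
    combined = sum(weights[k] * scores[k] for k in weights)
    return combined >= threshold

# An injected deepfake may ace the biometric match yet fail the forensic
# and injection layers, dragging the combined score well below threshold.
attack = {"document": 0.9, "biometric": 0.95, "deepfake": 0.1,
          "behavior": 0.6, "injection": 0.05}
print(verification_decision(attack))  # False
```

The design point is independence: an attacker who defeats the camera-facing check still has to defeat layers that examine the media and the pipeline itself.
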
Tags: Beginner · Article · Deepfake Detection · Fraud Prevention · Identity Verification · EU


What is deepidv?

Not everyone loves compliance — but we do. deepidv is the AI-native verification engine and agentic compliance suite built from scratch. No third-party APIs, no legacy stack. We verify users across 211+ countries in under 150 milliseconds, catch deepfakes that liveness checks miss, and let honest users through while keeping bad actors out.
