What Is Liveness Detection — And Why It's No Longer Enough
Liveness detection was built to stop printed photos. Injection attacks bypass it entirely. Here's what liveness does, where it fails, and what comes next.

If you have ever verified your identity online — opened a bank account, signed up for a crypto exchange, applied for a rental — you have almost certainly encountered liveness detection. It is the step where the verification system asks you to look at the camera, blink, turn your head, or hold still while it confirms you are a real person sitting in front of a real camera.
For most of its existence, liveness detection has been the gold standard for preventing biometric fraud. It was effective against the dominant attack of its era: someone holding a printed photograph or a phone screen in front of the camera to impersonate another person.
That era is over. The attack landscape has fundamentally changed, and liveness detection — by itself — is no longer sufficient to protect identity verification systems. Understanding why requires understanding how liveness works, what it was designed to stop, and what new attack categories it was never built to handle.
How Liveness Detection Works
Liveness detection evaluates whether the person in front of the camera is physically present and alive — as opposed to a photograph, a video replay, or a mask. There are two primary approaches.
Active Liveness
Active liveness requires the user to perform a specific action during the capture: blink, smile, turn their head left or right, or follow a moving point on the screen. The system evaluates whether the user's response matches the prompted action with the timing and naturalness expected of a live human.
The advantage of active liveness is that it is difficult to fool with static media. A printed photograph cannot blink on command. A pre-recorded video cannot respond to a random prompt. The user must demonstrate real-time responsiveness.
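To make the mechanism concrete, here is a minimal sketch of the challenge-response logic an active-liveness check might use. The `ActionEvent` interface and the reaction-time window are illustrative assumptions, not a real vendor API; production systems also score the naturalness of the motion, not just its timing.

```python
import random
from dataclasses import dataclass

# Hypothetical sketch of an active-liveness challenge check. Assumes an
# upstream face tracker emits (timestamp, action) events; names are
# illustrative, not a real API.

CHALLENGES = ["blink", "turn_left", "turn_right", "smile"]

@dataclass
class ActionEvent:
    timestamp: float  # seconds since the challenge was issued
    action: str       # action label reported by the face tracker

def issue_challenge() -> str:
    """Pick a random prompt so a pre-recorded video cannot anticipate it."""
    return random.choice(CHALLENGES)

def verify_response(challenge: str, events: list[ActionEvent],
                    min_delay: float = 0.2, max_delay: float = 3.0) -> bool:
    """Pass only if the prompted action occurs within a human reaction window.

    Too fast (< min_delay) suggests a scripted replay; too slow (> max_delay)
    suggests the subject never responded to the prompt.
    """
    for ev in events:
        if ev.action == challenge and min_delay <= ev.timestamp <= max_delay:
            return True
    return False
```

The random prompt is the core of the defense: a static artifact cannot know which action will be requested, so real-time responsiveness becomes the proof of presence.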
The disadvantage is friction. Asking users to perform actions slows the verification flow, increases drop-off rates, and can create accessibility barriers for users with physical limitations. Some platforms report that active liveness prompts reduce verification completion rates by 5-15%.
Passive Liveness
Passive liveness analyzes the capture without requiring the user to perform any action. The system evaluates visual signals — depth information, skin texture, light reflection patterns, micro-movements — to determine whether the capture is of a live person or a reproduction.
Passive liveness is frictionless from the user's perspective. They look at the camera once, the system captures a frame or a brief video, and the liveness evaluation happens entirely in the background. Completion rates are higher because the user experience is simpler.
The trade-off is that passive systems have a smaller signal set to evaluate. Without prompted actions, the system must rely entirely on visual and temporal signals that may be more subtle and more susceptible to sophisticated spoofing.
Levels of Assurance
The industry benchmarks liveness detection against the ISO/IEC 30107-3 standard for Presentation Attack Detection (PAD). Two levels are commonly referenced.
Level 1 PAD detects basic presentation attacks: printed photos, digital photos displayed on screens, and simple video replays. Most commercial liveness systems pass Level 1 testing.
Level 2 PAD detects more sophisticated attacks: high-resolution video replays, 3D-printed masks, silicone masks, and advanced screen-based presentations. Fewer systems pass Level 2, and those that do represent the current upper bound of liveness capability.
However, neither Level 1 nor Level 2 PAD tests for injection attacks — because injection attacks are not presentation attacks. This distinction is the critical gap.
What Liveness Was Built to Stop
Liveness detection was designed for a specific threat model called a presentation attack. In a presentation attack, the fraudster places something in front of the camera — a photo, a video, a mask — to impersonate another person. The camera captures the presented artifact, and the system must determine whether it is looking at a real person or a reproduction.
Against this threat model, liveness detection is effective. A printed photo has no depth. A flat screen has characteristic light reflection patterns. A mask has texture inconsistencies at the boundary with real skin. A video replay lacks the micro-variations in position and expression that occur involuntarily in live humans.
The entire liveness detection paradigm operates on one assumption: the camera is capturing what is physically in front of it. The system evaluates the captured content and determines whether that content represents a live person.
When that assumption holds, liveness works. When it does not, liveness fails — completely.
Why Liveness Fails Against Injection Attacks
What Injection Attacks Are
An injection attack does not present anything to the camera. Instead, it intercepts the video stream between the camera hardware and the verification software, replacing the legitimate feed with pre-recorded or synthetically generated video.
The verification system never receives the real camera feed. It receives a fabricated feed that was crafted to pass every check the system applies. The deepfake video includes natural blinking (defeating active liveness prompts), 3D depth characteristics (defeating passive depth analysis), natural skin texture (defeating texture analysis), and consistent lighting (defeating reflection pattern analysis).
The video passes liveness detection because it was not presented to the camera. It was injected into the software pipeline. Liveness detection is checking whether a deepfake looks alive. The answer is always yes — because that is exactly what deepfakes are designed to do.
The ABN AMRO Case
The most documented real-world example is the ABN AMRO incident. In March 2026, prosecutors told an Amsterdam court that a man had opened 46 bank accounts in other people's names by using deepfake technology to bypass the bank's facial recognition checks. Forty-six successful attacks against a major European bank's production verification system.
The attack succeeded because the bank's system relied on liveness detection as its primary defense against biometric fraud. Liveness confirmed that the captured video appeared to be a live person. It did not evaluate whether the video was genuine.
The Scale of the Problem
Injection attacks are not rare edge cases. iProov reported a 783% increase in injection attacks in 2024. Jumio noted an 88% year-on-year rise in 2025. The World Economic Forum's January 2026 Cybercrime Atlas report tested 17 face-swapping tools and 8 camera injection tools and found that most were able to bypass standard biometric onboarding checks.
Gartner projects that by 2026, 30% of enterprises will no longer consider standalone identity verification reliable. The shift is already underway.
What Comes After Liveness
The next generation of biometric defense asks a fundamentally different question. Instead of "is this person live?" it asks "has this media been generated, manipulated, or injected?"
Forensic Signal Analysis
Forensic analysis evaluates the captured media for signals that distinguish genuine camera captures from synthetic or manipulated content. These signals operate below the threshold of human perception.
Compression artifacts: genuine camera captures exhibit specific compression patterns determined by the camera's hardware encoder. Synthetic content exhibits different compression characteristics because it was generated by software rather than captured by a sensor.
Frequency domain analysis: examining the image in Fourier space reveals spectral patterns characteristic of generative models. Real and synthetic faces can look identical in pixel space while differing measurably in frequency space.
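A toy version of the frequency-space idea can be sketched in a few lines. This computes the fraction of spectral energy outside a low-frequency core; the core radius and the single-ratio approach are simplifying assumptions, since real detectors feed the spectrum to a trained classifier rather than thresholding one statistic.

```python
import numpy as np

# Illustrative sketch: measure energy in the high-frequency band of an
# image's 2D Fourier spectrum. Some generative models leave characteristic
# spectral artifacts (e.g. grid-like peaks from upsampling layers).

def high_frequency_ratio(gray: np.ndarray) -> float:
    """Fraction of spectral energy outside a central low-frequency core."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    r = min(h, w) // 8  # low-frequency core radius (illustrative choice)
    yy, xx = np.ogrid[:h, :w]
    core = (yy - cy) ** 2 + (xx - cx) ** 2 <= r * r
    total = spectrum.sum()
    return float(spectrum[~core].sum() / total) if total > 0 else 0.0
```

A flat image concentrates all its energy at the DC component and scores near zero, while noisy or artifact-laden content pushes the ratio up.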
Noise consistency: every camera sensor produces a characteristic noise pattern. Genuine captures have consistent noise across the frame. Synthetic content has inconsistent noise patterns because the generative model does not replicate sensor-specific noise.
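The noise-consistency idea can also be illustrated with a simplified sketch. This one estimates a high-pass residual and checks whether its strength varies suspiciously across patches; production systems compare against sensor-specific PRNU fingerprints, so treat the box-blur denoiser and the coefficient-of-variation score as stand-in assumptions.

```python
import numpy as np

# Illustrative sketch: a pasted or generated face region often carries a
# different noise level than the rest of the frame. This flags gross
# per-region inconsistency only; it is not a PRNU match.

def noise_residual(gray: np.ndarray, k: int = 3) -> np.ndarray:
    """High-pass residual via a simple k-by-k box-blur denoiser."""
    pad = k // 2
    padded = np.pad(gray, pad, mode="edge")
    blurred = np.zeros(gray.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
    return gray - blurred / (k * k)

def noise_inconsistency(gray: np.ndarray, patch: int = 16) -> float:
    """Spread of per-patch residual variance; higher values are suspicious."""
    res = noise_residual(gray)
    variances = []
    for y in range(0, gray.shape[0] - patch + 1, patch):
        for x in range(0, gray.shape[1] - patch + 1, patch):
            variances.append(res[y:y + patch, x:x + patch].var())
    v = np.array(variances)
    return float(v.std() / (v.mean() + 1e-12))
```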
Temporal coherence: in video captures, genuine footage exhibits frame-to-frame consistency determined by physical camera movement and the subject's actual motion. Synthetic video exhibits subtle temporal artifacts — micro-discontinuities between frames that are invisible to the human eye but detectable by analysis.
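The temporal signal can be approximated with a simple outlier test on inter-frame differences. This sketch flags transitions whose change magnitude is a statistical outlier relative to the rest of the clip; the z-score threshold is an assumption, and real detectors rely on optical flow and learned temporal models rather than a single statistic.

```python
import numpy as np

# Illustrative sketch: genuine video changes smoothly frame to frame;
# crude splices or generator glitches can produce isolated spikes in the
# mean absolute inter-frame difference.

def temporal_spikes(frames: list[np.ndarray], z_thresh: float = 4.0) -> list[int]:
    """Indices of transitions whose inter-frame difference is an outlier."""
    diffs = np.array([
        np.mean(np.abs(frames[i + 1].astype(float) - frames[i].astype(float)))
        for i in range(len(frames) - 1)
    ])
    mu, sigma = diffs.mean(), diffs.std()
    if sigma == 0:
        return []  # perfectly uniform motion: nothing to flag
    return [i for i, d in enumerate(diffs) if (d - mu) / sigma > z_thresh]
```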
Injection Detection
A separate detection layer specifically evaluates whether the video stream originated from a physical camera or was inserted into the pipeline through software. This operates at the device and pipeline level rather than the content level.
Signals include: device integrity (has the camera API been intercepted?), environment validation (is the application running in an emulator or virtual machine?), sensor correlation (does accelerometer data match camera movement?), and stream authentication (does the video exhibit the encoding characteristics of a live camera feed versus a decoded file?).
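One of those signals, sensor correlation, can be sketched concretely. The idea: a feed injected in software has no physical link to the device, so accelerometer readings and apparent camera motion tend to be uncorrelated. The interface and the correlation threshold below are assumptions for illustration, not any vendor's implementation.

```python
import numpy as np

# Illustrative sketch of one injection-detection signal: compare a series
# of accelerometer magnitudes against a series of per-frame camera-motion
# magnitudes sampled at the same rate.

def sensor_camera_correlation(accel_mag: np.ndarray,
                              frame_motion: np.ndarray) -> float:
    """Pearson correlation between physical and visual motion magnitude."""
    a = accel_mag - accel_mag.mean()
    m = frame_motion - frame_motion.mean()
    denom = np.sqrt((a * a).sum() * (m * m).sum())
    return float((a * m).sum() / denom) if denom > 0 else 0.0

def looks_injected(accel_mag: np.ndarray, frame_motion: np.ndarray,
                   min_corr: float = 0.3) -> bool:
    """Flag the stream when physical and visual motion fail to agree."""
    return sensor_camera_correlation(accel_mag, frame_motion) < min_corr
```

In practice this is one weak signal among many: a stationary phone produces little motion on either channel, so it must be combined with the device-integrity and environment checks above.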
Behavioral Context
The third layer evaluates the person's behavior during and around the verification event. Is their interaction cadence consistent with a legitimate user? Does their device history suggest a real person or an automated system? Does the session's timing, location, and device profile match the claimed identity?
Behavioral context does not detect deepfakes directly. It provides an additional signal layer that increases confidence in the overall verification decision — or raises red flags that trigger additional scrutiny.
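A minimal sketch of such a scoring layer, assuming a handful of boolean session signals; the signal names and weights are invented for the example, since real systems derive them from historical fraud data rather than hand-tuning.

```python
from dataclasses import dataclass

# Illustrative sketch of behavioral context scoring. Every field and
# weight below is an assumption for the example.

@dataclass
class SessionContext:
    device_seen_before: bool     # device previously linked to this identity
    country_matches_id: bool     # session geolocation vs. document country
    typing_cadence_human: bool   # interaction timing consistent with a person
    session_hour_typical: bool   # time of day fits the user's history

def behavioral_risk(ctx: SessionContext) -> float:
    """Return a 0..1 risk score; higher means more scrutiny is warranted."""
    weights = {
        "device_seen_before": 0.35,
        "country_matches_id": 0.25,
        "typing_cadence_human": 0.25,
        "session_hour_typical": 0.15,
    }
    return sum(w for name, w in weights.items() if not getattr(ctx, name))
```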
The Layered Detection Model
Effective biometric defense in 2026 and beyond requires three layers operating simultaneously.
Liveness detection remains valuable as the first layer — it catches the simplest and most common presentation attacks, which still represent a significant volume of fraud attempts. Not every attacker is using injection tools. Many are still trying printed photos and screen replays.
Forensic deepfake detection operates as the second layer — catching synthetic and manipulated media that passes liveness checks by evaluating signals that generative models cannot replicate.
Injection detection operates as the third layer — catching attacks that bypass the camera entirely by validating the integrity of the capture pipeline.
No single layer is sufficient. But together, they create a defense that addresses the full spectrum of biometric attacks — from a teenager holding a photo to a sophisticated operation using injection tools with custom deepfake generation.
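The decision logic of the layered model reduces to a simple composition: every layer must pass, and a failure anywhere identifies which defense caught the attack. The pass/fail interface below is an assumption for illustration; real systems typically combine calibrated scores rather than hard booleans.

```python
from dataclasses import dataclass

# Illustrative sketch of the layered model described above: liveness,
# forensic analysis, and injection detection each return a verdict, and
# any failing layer blocks the verification.

@dataclass
class LayerResult:
    name: str
    passed: bool

def verify(liveness: LayerResult, forensics: LayerResult,
           injection: LayerResult) -> tuple[bool, list[str]]:
    """Approve only when all three layers pass; report which layers failed."""
    failed = [r.name for r in (liveness, forensics, injection) if not r.passed]
    return (len(failed) == 0, failed)
```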
Liveness Detection FAQ
- What is liveness detection?
- Liveness detection is a biometric security measure that verifies a person is physically present during identity verification, distinguishing live humans from photographs, video replays, masks, and other presentation attacks.
- What is the difference between active and passive liveness?
- Active liveness requires the user to perform prompted actions (blink, turn head). Passive liveness analyzes visual signals from a standard capture without requiring user interaction. Active provides more signals but creates friction; passive is frictionless but has fewer signals to evaluate.
- Why doesn't liveness detection stop deepfakes?
- Injection attacks bypass the camera entirely, feeding synthetic video directly into the verification pipeline. The deepfake video passes all liveness heuristics because it was designed to appear live — including natural blinking, head movement, and 3D depth.
- What is an injection attack?
- An injection attack intercepts the video stream between the camera hardware and the verification software, replacing the genuine feed with pre-recorded or AI-generated video. The verification system evaluates fabricated content as if it came from the camera.
- What should replace liveness detection?
- Liveness detection should not be replaced but layered with forensic deepfake detection (analyzing media for synthetic signals) and injection detection (validating that the video stream originated from a physical camera). All three layers operating together address the full attack spectrum.
- How common are injection attacks?
- iProov reported a 783% increase in injection attacks in 2024. Jumio noted an 88% year-on-year rise in 2025. The World Economic Forum found that most standard biometric checks could be bypassed by current injection tools.
What is deepidv?
Not everyone loves compliance — but we do. deepidv is the AI-native verification engine and agentic compliance suite built from scratch. No third-party APIs, no legacy stack. We verify users across 211+ countries in under 150 milliseconds, catch deepfakes that liveness checks miss, and let honest users through while keeping bad actors out.
