The Deep Brief · Apr 7, 2026 · 7 min read

Financial Groups Unveil 20-Point Plan to Fight AI Identity Fraud

The ABA, Better Identity Coalition, and FSSCC release a 20-point plan to combat deepfake fraud, calling for federal action on identity verification infrastructure.

Shawn-Marc Melo
Founder & CEO at deepidv

The American Bankers Association, the Better Identity Coalition, and the Financial Services Sector Coordinating Council have published a joint paper laying out the scale of AI-driven identity fraud and demanding federal and state action across twenty specific recommendations. The paper arrives as Deloitte projects AI-enabled fraud losses in the United States could reach $40 billion by 2027, up from $12.3 billion in 2023.

The message is blunt: identity fraud is no longer a sector-specific problem. It is a national security issue. And the current infrastructure is not built to handle it.

The Scale of the Problem

The numbers are stark. Deepfake incidents in the fintech sector increased 700% in 2023 compared to 2022. In 2021, 42% of Suspicious Activity Reports filed under the Bank Secrecy Act were tied to identity or authentication compromise. Research cited in the report found that LLMs can automate the entire phishing process, cutting attack costs by more than 95% while producing success rates equal to or greater than manually crafted campaigns.

The paper identifies ten attack categories currently targeting financial institutions: deepfakes used against identity verification systems, AI-generated phishing campaigns, synthetic identity creation, real-time deepfake fraud, and the use of AI agents for account takeovers, among others. Legacy authentication methods, including SMS-based one-time passcodes and push-based authenticator apps, are now considered phishable.

The Four Initiatives

The twenty recommendations are organized into four initiatives.

Identity Proofing and Verification

A Treasury Department-led task force would coordinate federal, state, and local agencies on closing the gap between physical credentials and their digital equivalents. Mobile driver's licenses using asymmetric public key cryptography are identified as a viable path — a deepfake cannot spoof possession of a private cryptographic key.
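The cryptographic argument can be made concrete with a challenge-response sketch. The toy below uses textbook RSA with small fixed primes purely for illustration (real mDLs use standardized, hardened cryptography); the point is that a fresh random challenge can only be signed by whoever holds the private key, which no amount of synthetic video can fake.

```python
import hashlib
import secrets

# Toy textbook-RSA challenge-response (illustration only, NOT production crypto).
# The verifier issues a random challenge; only the holder of the private
# exponent d can produce a signature that checks out under the public key (n, e).

p, q = 1000003, 1000033          # small fixed primes, for illustration only
n = p * q
phi = (p - 1) * (q - 1)
e = 65537                        # public exponent
d = pow(e, -1, phi)              # private exponent (modular inverse, Python 3.8+)

def sign(challenge: bytes) -> int:
    """Signing requires possession of the private key d."""
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(digest, d, n)

def verify(challenge: bytes, signature: int) -> bool:
    """Verification uses only the public key (n, e)."""
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(signature, e, n) == digest

challenge = secrets.token_bytes(16)   # fresh nonce, so a replayed response fails
sig = sign(challenge)
print(verify(challenge, sig))         # True: the holder proved key possession
print(verify(b"stale challenge", sig))  # False: signature bound to the old nonce
```

The fresh nonce is what makes this deepfake-proof in principle: a pre-recorded or generated response cannot anticipate a challenge that did not exist when it was made.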

Expanding eCBSV Access

The paper calls for expanding the Social Security Administration's Electronic Consent-Based Social Security Number Verification system beyond its current limited scope. Opening eCBSV to account opening, background checks, and broader identity validation use cases would give financial institutions a way to verify identities against an authoritative government source.

NIST Liveness Detection Guidance

Accelerating NIST's guidance on liveness detection is another priority. The current generation of liveness checks was designed for an era when the primary attack was holding a printed photo in front of a camera. Injection attacks that bypass the camera entirely require a fundamentally different detection approach.

Multi-Agency AI Identity Task Force

A multi-agency task force to monitor AI-driven identity threats would coordinate response across financial services, healthcare, retail, and government benefits distribution. The paper specifically references HR 7270, the Stop Identity Fraud and Identity Theft Act of 2026.

Why Liveness Is Not Enough

The report's most significant technical conclusion is that standalone liveness detection is no longer sufficient. Traditional liveness checks evaluate whether a person is physically present — blink detection, head movement, 3D depth. These are trivially defeated by injection attacks that feed pre-recorded or synthetically generated video directly into the verification pipeline at the software level.

The report calls for detection systems that evaluate whether media has been generated, manipulated, or injected — a fundamentally different question than whether someone is "live."
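One way to see why the two questions differ is a minimal decision sketch. The signal names below (camera attestation, a forensic manipulation score) are illustrative assumptions, not fields prescribed by the report; the structural point is that an injected stream can carry a perfect liveness score, so liveness alone must never be sufficient for acceptance.

```python
from dataclasses import dataclass

@dataclass
class CaptureSignals:
    liveness_score: float      # blink/motion/depth heuristics, 0..1 (higher = more "live")
    camera_attested: bool      # hypothetical OS-level attestation that frames came from hardware
    manipulation_score: float  # hypothetical forensic model output, 0..1 (higher = more suspect)

def verdict(s: CaptureSignals) -> str:
    # Injection check first: injected video passes liveness heuristics,
    # so a high liveness score alone can never produce "accept".
    if not s.camera_attested:
        return "reject: injected stream suspected"
    if s.manipulation_score > 0.5:
        return "reject: generated or manipulated media"
    if s.liveness_score < 0.8:
        return "step-up: liveness inconclusive"
    return "accept"

# A deepfake injected past the camera: perfect liveness, failed attestation.
print(verdict(CaptureSignals(liveness_score=0.99,
                             camera_attested=False,
                             manipulation_score=0.10)))
# prints "reject: injected stream suspected"
```

The ordering of the checks encodes the report's conclusion: "is this live?" is evaluated last, only after "was this generated, manipulated, or injected?" has been answered.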

What This Means for Businesses

Any organization that verifies identities — financial institutions, crypto platforms, marketplaces, HR departments, healthcare providers — needs to evaluate whether their current verification infrastructure can withstand the attack categories described in this report. The paper makes clear that relying on a single verification layer is no longer defensible.

The organizations that will weather this transition are those building multi-layered verification that combines document authentication, biometric matching, forensic deepfake detection, behavioral analytics, and continuous monitoring — rather than checking a single box at onboarding and trusting it indefinitely.
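The multi-layered approach can be sketched as a weighted risk composite. The layer names mirror the list above, but the weights, scores, and missing-layer policy are assumptions for illustration, not values from the joint paper.

```python
# Illustrative multi-layer risk scoring; weights are assumed, not prescribed.
LAYER_WEIGHTS = {
    "document_authentication": 0.25,
    "biometric_match":         0.25,
    "deepfake_forensics":      0.25,
    "behavioral_analytics":    0.15,
    "continuous_monitoring":   0.10,
}

def composite_risk(layer_scores: dict) -> float:
    """Weighted risk in [0, 1]; each layer reports its own risk in [0, 1].
    A missing layer is treated as maximum risk (1.0), so skipping a
    verification layer is never 'free'."""
    return sum(weight * layer_scores.get(name, 1.0)
               for name, weight in LAYER_WEIGHTS.items())

scores = {
    "document_authentication": 0.05,
    "biometric_match":         0.10,
    "deepfake_forensics":      0.70,  # forensic layer flags likely manipulation
    "behavioral_analytics":    0.20,
    "continuous_monitoring":   0.15,
}
print(round(composite_risk(scores), 4))  # 0.2575: one alarming layer raises the total
```

Treating an absent layer as maximum risk is the design choice that captures the report's warning: a single verification check at onboarding, trusted indefinitely, leaves every other layer scored as if it had failed.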

AI Identity Fraud FAQ

How much will AI-enabled fraud cost the US by 2027?
Deloitte's Center for Financial Services projects AI-enabled fraud losses in the United States could reach $40 billion by 2027, representing a 32% compound annual growth rate from $12.3 billion in 2023.
What are the main attack categories targeting financial institutions?
Ten categories are identified including deepfakes against IDV systems, AI-generated phishing, synthetic identity creation, real-time deepfake fraud, AI agent-driven account takeovers, and exploitation of legacy SMS-based authentication.
Why are liveness checks no longer sufficient?
Injection attacks bypass the camera entirely, feeding pre-recorded or AI-generated video directly into the verification pipeline. The injected content passes all liveness heuristics because it appears identical to a live capture.
What is eCBSV?
The Electronic Consent-Based Social Security Number Verification system is operated by the Social Security Administration. It allows authorized organizations to verify SSNs against an authoritative government source. The report calls for expanding its availability beyond credit-related financial services.
What federal legislation addresses AI identity fraud?
HR 7270, the Stop Identity Fraud and Identity Theft Act of 2026, would have Treasury run a grant program covering both financial sector security and fraud in government benefits distribution.
Tags: Advanced · Article · Fraud Prevention · Deepfake Detection · Identity Verification · US · FinTech

What is deepidv?

Not everyone loves compliance — but we do. deepidv is the AI-native verification engine and agentic compliance suite built from scratch. No third-party APIs, no legacy stack. We verify users across 211+ countries in under 150 milliseconds, catch deepfakes that liveness checks miss, and let honest users through while keeping bad actors out.
