iProov reveals deepfake attack risks

iProov, a provider of biometric identity verification solutions, announced that an attack scenario demonstrated by its in-house Red Team has been published by MITRE ATLAS, a global knowledge base focused on AI security and threat mitigation.

The case study confirms a high-risk vulnerability in remote Know Your Customer (KYC) identity verification processes, showing how readily available face-swapping and virtual camera tools can be used to bypass mobile liveness detection systems.

According to iProov, the attack demonstrated how AI-generated deepfake videos injected through virtual camera applications can evade security checks used by financial services, banking, and cryptocurrency platforms during user onboarding.

The publication places iProov alongside contributors such as Microsoft, NVIDIA, IBM, Intel, Cisco, Palo Alto Networks, Kaspersky, CrowdStrike, and Trend Micro, which collaborate through MITRE ATLAS to improve AI threat detection and defense frameworks.

“The strength of MITRE ATLAS lies in the breadth and quality of the community that supports it. Contributions from across industry, academia, and government — ranging from red-team findings to operational threat insights — are essential to advancing the accuracy and completeness of the MITRE ATLAS knowledge base,” said Doug Robbins, vice president at MITRE Labs.

iProov chief scientific officer Andrew Newell said attacks against identity verification systems have increased significantly due to rapid advances in generative AI and the availability of low-cost tools.

“We’ve seen an explosion in attack vectors relating to identity verification over the last 12 months, largely driven by advances in generative AI and the wide availability of low cost tools,” Newell said. “The pace of evolution is only ever likely to increase, making it essential that all organisations examine their own defences against these new tactics without delay.”

The company said the findings highlight the importance of adopting stricter testing standards, including the European CEN/TS 18099 specification, which establishes rigorous protocols for detecting biometric data injection attacks.

MITRE said the case study aims to help security teams, developers, and regulators better understand real-world threats to AI-enabled identity systems and strengthen defenses through continued collaboration.

Daily Tribune
tribune.net.ph