Identity fraud attacks using AI are fooling biometric security systems


- Deepfake selfies can now bypass traditional verification systems
- Fraudsters are exploiting AI for synthetic identity creation
- Organizations must adopt advanced behavior-based detection methods
The latest Global Identity Fraud Report by AU10TIX reveals a new wave in identity fraud, largely driven by the industrialization of AI-based attacks.
With millions of transactions analyzed from July through September 2024, the report reveals how digital platforms across sectors, particularly social media, payments, and crypto, are facing unprecedented challenges.
Fraud tactics have evolved from simple document forgeries to sophisticated synthetic identities, deepfake images, and automated bots that can bypass conventional verification systems.
Social media platforms experienced a dramatic escalation in automated bot attacks in the lead-up to the 2024 US presidential election. The report reveals that social media attacks accounted for 28% of all fraud attempts in Q3 2024, a notable jump from only 3% in Q1.
These attacks focus on disinformation and the manipulation of public opinion on a large scale. AU10TIX says these bot-driven disinformation campaigns employ advanced Generative AI (GenAI) elements to avoid detection, an innovation that has enabled attackers to scale their operations while evading traditional verification systems.
The GenAI-powered attacks began escalating in March 2024 and peaked in September and are believed to influence public perception by spreading false narratives and inflammatory content.
One of the most striking discoveries in the report involves the emergence of 100% deepfake synthetic selfies – hyper-realistic images created to mimic authentic facial features with the intention of bypassing verification systems.
Traditionally, selfies were considered a reliable method for biometric authentication, as the technology needed to convincingly fake a facial image was beyond the reach of most fraudsters.
AU10TIX highlights that these synthetic selfies pose a unique challenge to traditional KYC (Know Your Customer) procedures. The shift suggests that, moving forward, organizations relying solely on facial-matching technology may need to re-evaluate and bolster their detection methods.
Furthermore, fraudsters are increasingly using AI to generate variations of synthetic identities through “image template” attacks. These involve manipulating a single ID template to create multiple unique identities, complete with randomized photo elements, document numbers, and other personal identifiers. By using AI to scale synthetic identity creation in this way, attackers can quickly open fraudulent accounts across platforms.
In the payment sector, the fraud rate declined in Q3, from 52% in Q2 to 39%. AU10TIX credits this progress to increased regulatory oversight and law enforcement interventions. However, despite the reduction in direct attacks, payments remains the most frequently targeted sector, with many fraudsters, deterred by heightened security, redirecting their efforts toward the crypto market, which accounted for 31% of all attacks in Q3.
AU10TIX recommends that organizations move beyond traditional document-based verification methods. One critical recommendation is adopting behavior-based detection systems that go deeper than standard identity checks. By analyzing patterns in user behavior such as login routines, traffic sources, and other unique behavioral cues, companies can identify anomalies that indicate potentially fraudulent activity.
“Fraudsters are evolving faster than ever, leveraging AI to scale and execute their attacks, especially in the social media and payments sectors,” said Dan Yerushalmi, CEO of AU10TIX.
“While companies are using AI to bolster security, criminals are weaponizing the same technology to create synthetic selfies and fake documents, making detection almost impossible.”