Three essential steps for organizations to safeguard against deepfakes


Our identities face an unprecedented threat. While AI has the potential to be a force for good, in the hands of nefarious actors it can have the opposite effect, amplifying risks to who we are online. Among these threats are deepfakes: synthetic media used to impersonate real individuals. Over the past year, these fraudulent impersonations have surged, targeting individuals across various platforms. As deepfakes become more convincing, cybercriminals are finding new ways to exploit them, posing serious risks to personal and organizational security.
While deepfakes have been circulating online since 2017, their impact has recently escalated. Initially used to impersonate celebrities and public figures, deepfakes have now become more personal, targeting senior executives across nearly every industry—from retail to healthcare. A notable case involved a finance employee who was deceived into transferring an astonishing £20 million to fraudsters who used a video deepfake to impersonate the company’s chief financial officer.
Exacerbating the issue is a widespread lack of awareness among the general public. A recent Ofcom survey revealed that fewer than half of UK residents are familiar with deepfakes, increasing the likelihood that these attacks will succeed. Equally concerning, KPMG reports that 80% of business leaders believe deepfakes pose a significant risk to their operations, yet only 29% have implemented measures to counter them.
The first step in addressing the deepfake challenge to cybersecurity is raising awareness and adopting proactive strategies to combat the threat. But where should organizations begin? Let’s delve deeper into three steps organizations can take to avoid being caught out by deepfakes.
Senior Director, Product Management at Ping Identity.
A Dual Approach: The Importance of Passive and Active Identity Verification
To effectively counter deepfakes, organizations must adopt a multifaceted approach to identity management and verification. While biometric authentication methods such as fingerprint or facial recognition are robust, a single mode of authentication is no longer enough to protect against today’s sophisticated cybercriminals. Multiple layers of authentication are necessary to safeguard against these threats without compromising the user experience.
This is where passive authentication, particularly passive identity threat detection, becomes crucial. Operating alongside active authentication methods, such as user-initiated verifications, passive identity threat detection works behind the scenes, primarily focusing on identifying potential risks. When suspicious login attempts or behavior are detected, this technology can trigger alternative verification methods, such as a push notification to confirm location or device usage. Rather than overwhelming users with additional authentication steps, passive identity threat detection alerts both the user and the organization to potential fraudulent activity, preventing it before it escalates.
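To make the idea concrete, here is a minimal sketch of passive risk scoring with step-up authentication. All names and weights (`LoginAttempt`, `risk_score`, the threshold of 50) are illustrative assumptions, not any vendor's actual API: low-risk logins proceed silently, while risky ones trigger an extra verification step.

```python
# Minimal sketch of passive identity threat detection with step-up
# authentication. Signals and weights are illustrative, not a real product.
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    user_id: str
    device_known: bool      # has this device been seen for this user before?
    country: str            # geolocation of the request
    failed_attempts: int    # recent failed logins for this account

HOME_COUNTRY = "GB"
STEP_UP_THRESHOLD = 50      # score at which extra verification kicks in

def risk_score(attempt: LoginAttempt) -> int:
    """Combine passive signals into a single 0-100 risk score."""
    score = 0
    if not attempt.device_known:
        score += 40                                  # unrecognised device
    if attempt.country != HOME_COUNTRY:
        score += 30                                  # unusual location
    score += min(attempt.failed_attempts * 10, 30)   # brute-force signal
    return min(score, 100)

def handle_login(attempt: LoginAttempt) -> str:
    """Silently allow low-risk logins; step up high-risk ones,
    e.g. by sending a push notification for the user to approve."""
    if risk_score(attempt) >= STEP_UP_THRESHOLD:
        return "step_up_required"
    return "allowed"
```

The point of the design is that the user only ever notices the check when the accumulated signals cross the threshold; everyday logins stay frictionless.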
Navigating a ‘Trust Nothing’ Era: The Shift from Implicit to Explicit Trust in Identity Verification
The concept of implicit trust—where we naturally trust what we see and hear—is diminishing as deepfakes increasingly compromise identity verification. In today’s “trust nothing, verify everything” era, explicit trust measures, such as sending a text message, push notification, or other credential checks outside the usual communication channels, have become essential. While not necessary for every interaction, these additional verifications are crucial when dealing with sensitive actions like transferring money or clicking on potentially malicious links, ensuring authenticity in a world where appearances can deceive.
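A hedged sketch of what "trust nothing, verify everything" can look like in practice: sensitive actions are refused unless confirmed on a separate channel, no matter how convincing the original request appeared. The action names and the `oob_approved` flag are assumptions for illustration only.

```python
# Illustrative sketch: explicit, out-of-band confirmation for sensitive
# actions. The request channel (e.g. a video call) is never trusted alone.
SENSITIVE_ACTIONS = {"transfer_funds", "reset_password", "change_payee"}

def execute(user_id: str, action: str, oob_approved: bool = False) -> str:
    """Refuse sensitive actions unless they were approved on a separate
    channel, such as a push notification or a callback to a known number."""
    if action in SENSITIVE_ACTIONS and not oob_approved:
        return "denied: awaiting out-of-band confirmation"
    return f"executed: {action}"
```

Routine actions pass straight through; a money transfer requested over a deepfaked video call stalls until the real account holder approves it elsewhere.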
Deepfakes are often used to socially engineer victims, exploiting channels like voice, images, and video over unauthenticated platforms. For instance, an employee might receive a Zoom call from someone impersonating their CEO, asking them to reset a password or make an urgent payment. For years, employers have encouraged us to trust our colleagues; the rise of deepfakes challenges that very fabric of work culture.
Leveraging AI for Good: Using Emerging Technologies to Combat Deepfakes
Society is at a critical juncture where AI tools can be used for good and evil, with human identity caught in the middle of this technological tug-of-war. As trust erodes and our identities are increasingly at risk, it is imperative that we stay vigilant and proactive in the fight against deepfakes.
AI, while contributing to the deepfake problem, also offers solutions to mitigate it. To reduce the prevalence of deepfakes, organizations must harness emerging technologies designed to detect fraudulent media. These include image insertion detection, which identifies whether an image was manually or falsely added to a communication, and audio detection tools that determine whether an audio file was synthetically generated. As AI technology continues to evolve, we can expect the development of even more sophisticated deepfake prevention methods. In the meantime, however, organizations must leverage existing technologies to stay ahead, just as cybercriminals do with AI on the other side of this battle.
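The detection tools described above are typically combined rather than used in isolation. The sketch below shows that pipeline shape only: the two detector functions are stand-ins returning canned scores, not real deepfake-detection models, and every name here is a hypothetical for illustration.

```python
# Toy sketch of a media-screening pipeline that runs several detectors
# and flags media if any score crosses a threshold. The detectors are
# stand-ins; real systems use trained models on pixels and audio features.
from typing import Callable, Dict

def image_insertion_score(media: dict) -> float:
    """Stand-in: real detectors inspect compression artifacts, metadata, etc."""
    return 0.9 if media.get("metadata_mismatch") else 0.1

def synthetic_audio_score(media: dict) -> float:
    """Stand-in: real detectors analyse spectral and prosodic features."""
    return 0.8 if media.get("unnatural_prosody") else 0.2

DETECTORS: Dict[str, Callable[[dict], float]] = {
    "image_insertion": image_insertion_score,
    "synthetic_audio": synthetic_audio_score,
}

def screen_media(media: dict, threshold: float = 0.7) -> dict:
    """Run every detector and flag the media if any score is suspicious."""
    scores = {name: fn(media) for name, fn in DETECTORS.items()}
    return {"scores": scores,
            "flagged": any(s >= threshold for s in scores.values())}
```

Keeping detectors behind a common interface means new detection techniques can be added to the pipeline as they mature, without changing the screening logic.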
As with any cybersecurity threat, the best protection comes from being one step ahead. The more prepared organizations are for potential deepfake attacks, the better they can protect against future threats. By adopting a multifaceted approach to identity verification and remaining aware of the tactics employed by cybercriminals, we can safeguard our identities and maintain trust in a digital world.
This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro