AI can now clone your personality in only two hours – and that’s a dream for deepfake scammers

- New study trained AI models on answers given in a two-hour interview
- AI could replicate participants’ responses with 85% accuracy
- Agents could be used instead of humans in future research studies
You might think your personality is unique, but all it takes is a two-hour interview for an AI model to create a virtual replica with your attitudes and behaviors. That’s according to a new paper published by researchers from Stanford and Google DeepMind.
What are simulation agents?
Simulation agents are described by the paper as generative AI models that can accurately simulate a person’s behavior ‘across a range of social, political, or informational contexts’.
In the study, 1,052 participants were asked to complete a two-hour interview covering a wide range of topics, from their personal life story to their views on contemporary social issues. Their responses were recorded, and the transcripts were used to train generative AI models – or "simulation agents" – for each individual.
To test how well these agents could mimic their human counterparts, both were asked to complete a set of tasks, including personality tests and games. Participants were then asked to replicate their own answers a fortnight later. Remarkably, the AI agents matched the human participants' responses with 85% accuracy.
What’s more, the simulation agents were similarly effective when asked to predict personality traits across five social science experiments.
While your personality might seem intangible or unquantifiable, this research shows that it's possible to distill your values and attitudes from a relatively small amount of information: qualitative responses to a fixed set of questions. Fed this data, AI models can convincingly imitate your personality – at least in a controlled, test-based setting. And that could make deepfakes even more dangerous.
Double agent
The research was led by Joon Sung Park, a Stanford PhD student. The idea behind creating these simulation agents is to give social science researchers more freedom when conducting studies. By creating digital replicas which behave like the real people they’re based on, scientists can run studies without the expense of bringing in thousands of human participants every time.
They may also be able to run experiments which would be unethical to conduct with real human participants. Speaking to MIT Technology Review, John Horton, an associate professor of information technologies at the MIT Sloan School of Management, said that the paper demonstrates a way you can “use real humans to generate personas which can then be used programmatically/in-simulation in ways you could not with real humans.”
Whether study participants are morally comfortable with this is one thing. More concerning for many people will be the potential for simulation agents to become something more nefarious in the future. In that same MIT Technology Review story, Park predicted that one day “you can have a bunch of small ‘yous’ running around and actually making the decisions that you would have made.”
For many, this will set dystopian alarm bells ringing. The idea of digital replicas opens up a realm of security, privacy, and identity theft concerns. It doesn't take a stretch of the imagination to foresee a world where scammers – who are already using AI to imitate the voices of loved ones – could build personality deepfakes to impersonate people online.
This is particularly concerning when you consider that the simulation agents in the study were created using just two hours of interview data – far less than companies such as Tavus, which build digital twins from a trove of user data, currently require.