Facebook is simulating users’ bad behavior using AI


Facebook’s engineers have developed a new method to help them identify and prevent harmful behavior like users spreading spam, scamming others, or buying and selling weapons and drugs. They can now simulate the actions of bad actors using AI-powered bots by letting them loose on a parallel version of Facebook. Researchers can then study the bots’ behavior in simulation and experiment with new ways to stop them.
The simulator is known as WW, pronounced “Dub Dub,” and is based on Facebook’s real code base. The company published a paper on WW (so called because the simulator is a truncated version of WWW, the world wide web) earlier this year, but shared more information about the work in a recent roundtable.
The research is being led by Facebook engineer Mark Harman and the company’s AI department in London. Speaking to journalists, Harman said WW was a hugely flexible tool that could be used to limit a wide range of harmful behavior on the site, and he gave the example of using the simulation to develop new defenses against scammers.
In real life, scammers often start their work by prowling a user’s friendship groups to find potential marks. To model this behavior in WW, Facebook engineers created a group of “innocent” bots to act as targets and trained a number of “bad” bots that explored the network to try to find them. The engineers then tried different ways to stop the bad bots, introducing various constraints, like limiting the number of private messages and posts the bots could send each minute, to see how this affected their behavior.
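In toy form, that experiment might look something like the sketch below: a "bad" bot crawls a friendship graph looking for marks, while the engineers cap how many messages it can send per simulated minute. Every name and number here is invented for illustration; this is not Facebook's code.

```python
# Hypothetical sketch of the experiment described above: a scammer bot
# explores a friendship graph outward from its starting account, and a
# rate limit caps how many scam messages it can send per simulated minute.

def run_simulation(friend_graph, scammer_start, msg_limit_per_min, minutes):
    """Count how many targets a single scammer bot contacts under a rate limit."""
    visited = {scammer_start}
    contacted = set()
    frontier = [scammer_start]
    for _ in range(minutes):
        sent_this_minute = 0
        next_frontier = []
        for node in frontier:
            for friend in friend_graph.get(node, []):
                if friend in visited:
                    continue
                visited.add(friend)
                next_frontier.append(friend)
                if sent_this_minute < msg_limit_per_min:
                    contacted.add(friend)   # scam message goes out
                    sent_this_minute += 1
        frontier = next_frontier or frontier
    return len(contacted)

# A toy friendship graph: the scammer ("s") prowls outward from one mark.
graph = {"s": ["a", "b"], "a": ["c", "d"], "b": ["e"], "c": [], "d": [], "e": []}
loose = run_simulation(graph, "s", msg_limit_per_min=10, minutes=3)
strict = run_simulation(graph, "s", msg_limit_per_min=1, minutes=3)
print(loose, strict)  # a tighter limit means fewer targets contacted
```

Comparing the two runs shows the kind of question the real system asks: how much does a given constraint slow a bad actor down?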
Harman compares the work to that of city planners trying to reduce speeding on busy roads. In that case, engineers model traffic flows in simulators and then experiment with introducing things like speed bumps on certain streets to see what effect they have. The WW simulation allows Facebook to do the same thing, but with Facebook users.
“We apply ‘speed bumps’ to the actions and observations our bots can perform, and so quickly explore the possible changes that we could make to the products to inhibit harmful behavior without hurting normal behavior,” says Harman. “We can scale this up to tens or hundreds of thousands of bots and therefore, in parallel, search many, many different possible […] constraint vectors.”
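The "constraint vector" search Harman describes can be pictured, very loosely, as a search over combinations of limits, scored on harm prevented versus normal use preserved. The scoring functions below are stand-ins invented for this example, not Facebook's real measurements.

```python
from itertools import product

# Illustrative search over "constraint vectors" (combinations of rate
# limits). Both scoring functions are toy stand-ins for this sketch.

def harm(msg_limit, post_limit):
    # pretend scammers benefit directly from higher message and post limits
    return msg_limit * 2 + post_limit

def normal_activity(msg_limit, post_limit):
    # pretend ordinary users only need a modest allowance to be unaffected
    return min(msg_limit, 5) + min(post_limit, 3)

candidates = product([1, 5, 20], [1, 3, 10])   # the candidate constraint vectors
best = min(
    (v for v in candidates if normal_activity(*v) >= 8),  # don't hurt normal use
    key=lambda v: harm(*v),
)
print(best)
```

The real system would evaluate many thousands of such vectors in parallel, with bot simulations standing in for the toy scoring functions.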
Simulating behavior you want to study is a common enough practice in machine learning, but the WW project is notable because the simulation is based on the real version of Facebook. Facebook calls its approach “web-based simulation.”
“Unlike in a traditional simulation, where everything is simulated, in web-based simulation, the actions and observations are actually taking place through the real infrastructure, and so they’re much more realistic,” says Harman.
He stressed, though, that despite this use of real infrastructure, bots are unable to interact with users in any way. “They actually can’t, by construction, interact with anything other than other bots,” he says.
Notably, the simulation is not a visual copy of Facebook. Don’t imagine scientists studying the behavior of bots the same way you might watch people interact with one another in a Facebook group. WW doesn’t produce results via Facebook’s GUI, but instead records all the interactions as numerical data. Think of it as the difference between watching a football game (real Facebook) and simply reading the match statistics (WW).
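As a rough illustration of what such a numerical record might look like (the field layout below is an assumption, not WW's actual schema), each bot interaction could be logged as a plain row and analyzed in aggregate, like reading match statistics:

```python
from collections import Counter

# Hypothetical event log: each bot action recorded as a
# (timestamp, actor, action, target) row, with no GUI involved.
log = [
    (0, "bot_7", "friend_request", "bot_2"),
    (1, "bot_7", "message", "bot_2"),
    (1, "bot_9", "post", None),
    (2, "bot_7", "message", "bot_3"),
]

# Analysis works on aggregates, not on watching individual interactions.
actions_per_bot = Counter(actor for _, actor, _, _ in log)
print(actions_per_bot["bot_7"])
```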
Right now, WW is still in the research stage, and none of the simulations the company has run with bots have resulted in real-life changes to Facebook. Harman says his group is running tests to check that the simulations match real-life behaviors with high enough fidelity to justify real-life changes. But he thinks the work will result in modifications to Facebook’s code by the end of the year.
There are certainly limitations to the simulator, too. WW can’t model user intent, for example, nor can it simulate complex behaviors. Facebook says the bots search, make friend requests, leave comments, make posts, and send messages, but the actual content of these actions (the text of a conversation, say) isn’t simulated.
Harman says the power of WW, though, is its ability to operate on a huge scale. It lets Facebook run thousands of simulations to check all sorts of minor changes to the site without affecting users, and from that, it finds new patterns of behavior. “The statistical power that comes from big data is still not fully appreciated, I think,” he says.
One of the more exciting aspects of the work is the potential for WW to uncover new weaknesses in Facebook’s architecture through the bots’ actions. The bots can be trained in various ways. Sometimes they’re given explicit instructions on how to act; sometimes they are asked to imitate real-life behavior; and sometimes they are just given certain goals and left to decide their own actions. It’s in the latter scenario (a goal-driven approach akin to reinforcement learning) that unexpected behaviors can occur, as the bots find ways to reach their goal that the engineers did not predict.
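A minimal sketch of that goal-driven setup: an epsilon-greedy bot repeatedly picks an action and updates a running payoff estimate from a reward signal, eventually "discovering" the strategy that pays off. The action names and reward function here are invented for illustration and reflect nothing about Facebook's actual training code.

```python
import random

# Epsilon-greedy bot: no instructions on *how* to act, only a reward
# signal to maximize. The reward function is a toy stand-in.
random.seed(42)
ACTIONS = ["mass_message", "befriend_first"]
value = {a: 0.0 for a in ACTIONS}   # running estimate of each action's payoff
counts = {a: 0 for a in ACTIONS}

def reward(action):
    # pretend the subtler strategy always pays off; the bot must find this out
    return 1.0 if action == "befriend_first" else 0.0

for step in range(500):
    if random.random() < 0.1:              # explore a random action
        a = random.choice(ACTIONS)
    else:                                  # exploit the current best estimate
        a = max(ACTIONS, key=value.get)
    r = reward(a)
    counts[a] += 1
    value[a] += (r - value[a]) / counts[a]  # incremental mean update

print(max(ACTIONS, key=value.get))
```

The interesting cases in WW are exactly the ones this sketch can't show: strategies the engineers never anticipated.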
“At the moment, the main focus is training the bots to imitate things we know happen on the platform. But in theory and in practice, the bots can do things we haven’t seen before,” says Harman. “That’s actually something we want, because we ultimately want to get ahead of the bad behavior rather than continually playing catch up.”
Harman says the group has already seen some unexpected behavior from the bots, but declined to share any details. He said he didn’t want to give the scammers any clues.