Facebook says AI has fueled a hate speech crackdown

Facebook says it is proactively detecting more hate speech using artificial intelligence. A new transparency report released on Thursday offers greater detail on hate speech across its platforms following policy changes earlier this year, although it leaves some big questions unanswered.
Facebook’s quarterly report includes new information about hate speech prevalence. The company estimates that 0.10 to 0.11 percent of what Facebook users see violates hate speech rules, equating to “10 to 11 views of hate speech for every 10,000 views of content.” That estimate is based on a random sample of posts and measures the reach of content rather than pure post count, capturing the effect of hugely viral posts. It hasn’t been evaluated by external sources, though. On a call with reporters, Facebook VP of integrity Guy Rosen said the company is “planning and working toward an audit.”
Facebook insists that it removes most hate speech proactively before users report it. It says that over the past three months, around 95 percent of Facebook and Instagram hate speech takedowns were proactive.

That’s a dramatic jump from its earliest efforts — in late 2017, it only made around 24 percent of takedowns proactively. It’s also ramped up hate speech takedowns: around 645,000 pieces of content were removed in the last quarter of 2019, while 6.5 million were removed in the third quarter of 2020. Organized hate groups fall into a separate moderation category, which saw a much smaller increase from 139,900 to 224,700 takedowns.
Some of those takedowns, Facebook says, are powered by improvements in AI. Facebook launched a research competition in May for systems that can better detect “hateful memes.” In its latest report, it touted its ability to analyze text and pictures in tandem, catching content like a sample image macro created by Facebook, where neither the text nor the image is hateful on its own.

This approach has clear limitations. As Facebook notes, “a new piece of hate speech might not resemble previous examples” because it references a new trend or news story. It depends on Facebook’s ability to analyze many languages and catch country-specific trends, as well as how Facebook defines hate speech, a category that has shifted over time. Holocaust denial, for instance, was only banned last month.
It also won’t necessarily help Facebook’s moderators, despite recent changes that use AI to triage complaints. The coronavirus pandemic disrupted Facebook’s normal moderation practices because the company won’t let moderators review some highly sensitive content from their homes. Facebook said in its quarterly report that its takedown numbers are returning “to pre-pandemic levels,” in part thanks to AI.
But some employees have complained that they’re being forced to return to work before it’s safe, with 200 content moderators signing an open request for better coronavirus protections. In that letter, moderators said that automation had failed to address serious problems. “The AI wasn’t up to the job. Important speech got swept into the maw of the Facebook filter — and risky content, like self-harm, stayed up,” they said.
Rosen disagreed with their assessment and said that Facebook’s offices “meet or exceed” safe workspace requirements. “These are incredibly important workers who do an incredibly important part of this job, and our investments in AI are helping us detect and remove this content to keep people safe,” he said.
Facebook’s critics, including American lawmakers, will likely remain unconvinced that it’s catching enough hateful content. Last week, 15 US senators pressed Facebook to address posts attacking Muslims worldwide, requesting more country-specific information about its moderation practices and the targets of hate speech. Facebook CEO Mark Zuckerberg defended the company’s moderation practices in a Senate hearing, indicating that Facebook might include that data in future reports. “I think that that would all be very helpful so that people can see and hold us accountable for how we’re doing,” he said.
Zuckerberg suggested that Congress should require all web companies to follow Facebook’s lead, and policy enforcement head Monika Bickert reiterated that idea today. “As you talk about putting in place regulations, or reforming Section 230 [of the Communications Decency Act] in the United States, we should be considering how to hold companies accountable for acting on harmful content before it gets seen by a lot of people. The numbers in today’s report can help inform that conversation,” Bickert said. “We think that good content regulation could create a standard like that across the entire industry.”