NYU researchers find no evidence of anti-conservative bias on social media


A new report finds that claims of anti-conservative bias on social media platforms are not only untrue but serve as a form of disinformation. The report from NYU’s Stern Center for Business and Human Rights says not only is there no empirical finding that social media companies systematically suppress conservatives, but even reports of anecdotal instances tend to fall apart under close scrutiny. And in an effort to appear unbiased, platforms actually bend over backward to try to appease conservative critics.
“The contention that social media as an industry censors conservatives is now, as we speak, becoming part of an even broader disinformation campaign from the right, that conservatives are being silenced all across American society,” the report’s lead researcher Paul Barrett said in an interview with The Verge. “This is the obvious post-Trump theme, we’re seeing it on Fox News, hearing it from Trump lieutenants, and I think it will continue indefinitely. Rather than any of this going away with Trump leaving Washington, it’s only getting more intense.”
The researchers analyzed data from analytics platforms CrowdTangle and NewsWhip and existing reports like the 2020 study from Politico and the Institute for Strategic Dialogue, all of which showed that conservative accounts actually dominated social media. And they drilled down into anecdotes about bias and repeatedly found there was no concrete evidence to support such claims.
Looking at how claims of anti-conservative bias developed over time, Barrett says, it’s not hard to see how the “anti-conservative” rhetoric became a political instrument. “It’s a tool used by everyone from Trump to Jim Jordan to Sean Hannity, but there is no evidence to back it up,” he said.
The report notes that the many lawsuits against social media platforms have “failed to present substantial evidence of ideological favoritism — and they have all been dismissed.”
This is not to suggest that Twitter, Facebook, YouTube, and others have not made mistakes, Barrett added; they have. “They tend to react to crises and adjust their policies in the breach, and that’s led to a herky-jerky cadence of how they apply their policies,” he said.
Twitter in particular has historically been more hands-off with moderation, proud of its image as a protector of free speech. But all that changed in 2020, Barrett said, in response to the pandemic and the anticipation that there would be a bitter election campaign cycle. “Twitter shifted its policies and began much more vigorous policing of content around the pandemic and voting in general,” he notes. Among social media companies, “Twitter was taking the lead and setting the example.”
And in the aftermath of the January 6th riots at the Capitol, Barrett says, Twitter and other platforms were well within their policies against inciting violence when they banned former President Trump.
The report has several recommendations for social media platforms going forward. First: better disclosure around content moderation decisions, so the public has a fuller understanding of why certain content and users might be removed. The report authors also want platforms to allow users to customize and control their social media feeds.
Hiring more human moderators is another key recommendation, and Barrett acknowledges that the job of content moderator is highly stressful. But having more moderators — hired as employees, not contractors — would allow Facebook and other platforms to spread out moderation of the most challenging content among more people.
The report also recommends that Congress and the White House work with tech companies to dial back some of the hostility between Washington and Silicon Valley and pursue responsible regulation. Barrett doesn’t recommend repealing Section 230, however. Instead, he’d like to see it amended.
“Make it conditional: If companies want to enjoy the benefits of 230, they need to adopt responsible content moderation policies. Let people see how their algorithms work, and why certain people see material others don’t,” he said. “No one expects them to show every last line of code, but people should be able to understand what goes into the decisions being made about what they’re seeing.”