YouTube Just Deranked Anti–Meghan Markle Channels From Search Results And Recommendations

Although Sussex fans have raised questions for years about why anti-Meghan accounts were platformed and monetized, the scrutiny of royal YouTube picked up steam in January, when social media analytics company Bot Sentinel published a report examining anti-Meghan content across various online platforms. The report was the third in a series about what the company dubbed “single-purpose hate accounts,” profiles that appeared to only be on the internet to attack a certain individual — in this case, the Duchess of Sussex.

In the report’s section about YouTube, Bot Sentinel identified 25 channels whose videos “focused predominantly on disparaging Meghan.” These channels, according to the report, had a combined overall view count of nearly 500 million and Bot Sentinel estimated that, collectively, the accounts have generated approximately $3.5 million from ad revenue over their lifetimes. (A number of the creators of the channels identified in the report disputed the company’s estimates, but have so far declined to publicly share their earnings.) The report called on the platform to remove the channels, citing YouTube’s harassment and cyberbullying policy, which explicitly states that “accounts dedicated entirely to focusing on maliciously insulting an identifiable individual” are examples of content it does not allow.

And yet, to the frustration of many Sussex fans (and glee of Meghan and Harry haters), YouTube has, so far, only removed one anti-Meghan channel. (Another channel was temporarily removed but, as Input magazine reported, it was restored.)

This is because YouTube’s current community guidelines have a significant vulnerability that allows for targeted harassment and, often, the spread of misinformation about an individual without breaking the platform’s rules. Anti-Meghan channels are platformed — and many are monetized — because of how narrowly the company defines which attributes of a person must be attacked for content to count as harassment, hate speech, or cyberbullying.

According to YouTube’s terms of service, in order to be considered “content that targets an individual with prolonged or malicious insults,” the insults must be based on “intrinsic attributes,” which the company defines as “physical traits” and “protected group status.” This protected group policy lists 13 attributes that cannot be attacked: age, caste, disability, ethnicity, gender identity or expression, nationality, race, immigration status, religion, sex/gender, sexual orientation, victims of a major violent event and their kin, and veteran status.

YouTube’s rules indicate that everything else, including attacks based on falsehoods and potentially defamatory content, is fair game. And thus the platform hosts conspiracy videos that falsely imply that Meghan is intersex or provide “evidence” that she engaged in sex work before meeting Prince Harry — and these videos have more than 100,000 views.

In an emailed statement, YouTube reiterated exactly what types of attacks qualify as harassment and hate speech. “We have clear policies that prohibit content that targets an individual with threats or malicious insults based on intrinsic attributes, such as their race or gender,” spokesperson Jack Malon said.

Part of the issue, Maza said, is that most racist, anti-LGBTQ, xenophobic, or otherwise hateful content targeting a person based on “intrinsic attributes” is insidious. Indeed, much has been written about the “racist undertones” of the UK media’s coverage of Meghan.

“Hate speech is always implicit,” Maza said. “Good hate speech, good bigoted propaganda, dabbles in euphemism, stereotypes, a wink and a nod. It’s always suggested or alluded to. If your approach to moderating speech is that there has to be a clear rule, you’re never going to have a good policy. The focus must be implicit bias … Violence and explicit bigotry come from implicit bigotry.”

Most anti-Meghan YouTubers have proved adept at toeing YouTube’s line in their videos, using coded words like “uppity” and “classless” to describe Meghan, or playing into the angry Black woman trope by portraying her as someone who regularly throws tantrums.

