Real-World Events Drive Increases In Online Hate Speech, Study Finds

Top line

Real-world events such as elections and protests can spark spikes in online hate speech on both mainstream and fringe platforms, according to a study published Wednesday in the journal PLOS ONE, with hate posts rising despite many social media platforms' efforts to combat them.


Key facts

Using machine learning analysis—a data-analysis approach that automates model building—the researchers examined seven types of online hate speech across 59 million posts from users of 1,150 online hate communities, forums where people are most likely to use hate speech, on sites including Facebook, Instagram, 4Chan and Telegram.
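The paper's own classifier isn't reproduced here, but the general idea of automated model building for text classification can be sketched with a toy Naive Bayes model. Everything below—the training examples, labels, and function names—is invented for illustration, not the study's data or categories:

```python
from collections import Counter
import math

def train(samples):
    """samples: list of (text, label) pairs -> (token counts per label, label priors)."""
    counts, priors = {}, Counter()
    for text, label in samples:
        priors[label] += 1
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts, priors

def classify(text, counts, priors):
    """Return the label with the highest Naive Bayes log-probability (Laplace smoothing)."""
    vocab = set().union(*counts.values())
    total = sum(priors.values())
    best_label, best_score = None, float("-inf")
    for label, prior in priors.items():
        n_tokens = sum(counts[label].values())
        score = math.log(prior / total)
        for tok in text.lower().split():
            score += math.log((counts[label][tok] + 1) / (n_tokens + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy, entirely hypothetical training data -- real systems learn from large labeled corpora.
samples = [
    ("we hate group x", "hate"),
    ("attack them all now", "hate"),
    ("lovely weather today", "benign"),
    ("great game last night", "benign"),
]
counts, priors = train(samples)
print(classify("hate group x", counts, priors))  # leans toward the "hate" label
```

A production system would use far richer features and models, but the core workflow—fit a model to labeled examples, then score millions of new posts automatically—is what lets researchers analyze 59 million posts at scale.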

The total number of hate speech posts, measured as a seven-day moving average, trended higher over the course of the study, which ran from June 2019 to December 2020, increasing 67% from roughly 60,000 to 100,000 daily posts.
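As a rough illustration of the arithmetic (not the study's code), a trailing seven-day moving average and the reported 67% rise can be computed like this:

```python
def moving_average(daily_counts, window=7):
    """Trailing moving average; early days average over however many days exist so far."""
    out = []
    for i in range(len(daily_counts)):
        span = daily_counts[max(0, i - window + 1): i + 1]
        out.append(sum(span) / len(span))
    return out

def percent_change(start, end):
    """Percent increase from start to end, rounded to the nearest whole percent."""
    return round((end - start) / start * 100)

print(percent_change(60_000, 100_000))  # 67, matching the reported increase
print(moving_average([10, 20, 30], window=7))  # [10.0, 15.0, 20.0]
```

Smoothing daily counts over a seven-day window irons out day-of-week swings, which is why studies of posting volume typically report the moving average rather than raw daily totals.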

Hate speech sometimes broadened to target groups that had no connection to the real-world events of the time.

Among the cases the researchers noted were a rise in religious hate speech and anti-Semitism following the US assassination of Iranian General Qassem Soleimani in early 2020, and a rise in religious and gender-based hate speech following the November 2020 US election, in which Kamala Harris was elected the first female vice president.

Despite individual platforms' efforts to stamp it out, online hate speech persists, according to the researchers.

The researchers pointed to media attention as one of the key factors driving hate-related posts: for example, there was little media attention when Breonna Taylor was killed by police, and the researchers found correspondingly little online hate speech, but when George Floyd was killed months later and media attention surged, hate speech rose with it.

A big number

250%. That’s how much racial hate speech increased following the murder of George Floyd—the biggest jump in hate speech the researchers found during the study period.

Key background

Hate speech has plagued social networks for years: platforms like Facebook and Twitter have policies against hate speech and have promised to remove offensive content, but this has not eliminated the spread of these posts. Earlier this month, nearly two dozen independent human rights experts appointed by the United Nations called for more accountability from social media platforms to reduce the amount of online hate speech. And human rights experts aren’t alone in wanting social media companies to do more: a December USA Today-Suffolk University survey found that 52% of respondents said social media platforms should curb hateful and inaccurate content, while 38% said the sites should be open forums.


Days after billionaire Elon Musk closed his deal to buy Twitter last year, promising to ease the site’s moderation policies, the site saw a “surge in hateful conduct,” according to Yoel Roth, Twitter’s former head of trust and safety. At the time, Roth tweeted that the safety team had taken down more than 1,500 accounts for hateful conduct over a three-day period. Musk has faced sharp criticism from advocacy groups who say that under his leadership, and with the easing of speech rules, the volume of hate speech on Twitter has grown dramatically, even as Musk has insisted that impressions of hateful tweets have decreased.

More information

Twitter’s head of safety admits ‘surge in hateful conduct’ as firm reportedly restricts access to moderation tools (Forbes)

Some reservations about requiring consistency in social media content moderation decisions. (Forbes)

What should policymakers do to encourage better content moderation on the platform? (Forbes)

