Facebook moderation tools fail to remove posts from terrorist groups in Kenya

Facebook News Feeds are heavily personalized based on a user's social network and interests, but a new report on terrorist activity in East Africa argues that country and language play a bigger role than expected in what people see.

The platform relies on moderation algorithms to weed out hate speech and violent content, but those systems struggle to detect such material in non-English posts, according to the Institute for Strategic Dialogue (ISD).

Two terrorist groups in Kenya – al-Shabaab and the Islamic State – exploit these content moderation limits to post recruitment propaganda and shocking videos, many of which are in Arabic, Somali and Kiswahili.

“Language moderation shortcomings not only play into the hands of governments that violate human rights or spread hate speech, but also lead to brazenly open displays of support for terrorist groups,” the ISD report said.

The report studied Facebook content posted in several East African countries, including Somalia, Ethiopia, Uganda and Kenya.

With Kenya's elections approaching on August 9, the study cites 30 public Facebook pages of militant groups that intentionally sow distrust in democracy and government. The most active al-Shabaab and Islamic State profiles are calling for violence and discord ahead of the elections and, ultimately, for the establishment of an East African caliphate.

The research also revealed that a video posted on al-Shabaab’s official page, showing a Somali man being shot in the back of the neck, was freely shared by five different users. The video bore al-Shabaab’s recognizable branding, which any content moderation system operating in the region should be able to detect. All 445 users studied who posted in Arabic, Kiswahili, and Somali were able to freely share a mix of official, unofficial, and personalized content clearly supporting al-Shabaab and the Islamic State.

An internal experiment conducted by Facebook in 2019 showed similarly disastrous results for a fictitious account created in India, The Washington Post reported last year. In a memo included in the Facebook Papers, a collection of internal documents made public by whistleblower Frances Haugen, Facebook employees said they were shocked by the soft-core pornography, hate speech, and “impressive number of corpses” shown to a new user in India.

In contrast, the algorithm suggested a slew of innocuous posts to a new user in the United States, a stark difference that shed light on how differently the platform behaves from one country to another.

Such content moderation disparities have the potential to sway voter sentiment in favor of violent extremist groups. Something similar happened in Myanmar between 2016 and 2018, when members of the military coordinated a hate speech campaign on Facebook targeting the predominantly Muslim Rohingya minority.

The messages encouraged genocide against the Rohingya, which ultimately led to thousands of deaths and a global refugee crisis as at least 750,000 people were forced to flee, The Guardian reported. In 2021, Rohingya refugees in the United States and United Kingdom sued Facebook for $150 billion over the platform's role in spreading hate speech.

Although moderating the world’s largest social media platform is a challenge, the ISD report says identifying content moderation gaps is an essential first step, including taking inventory of the languages and imagery that Facebook’s systems fail to detect. Second, the report recommends strengthening the identification and removal of terrorism-specific content, especially in countries at high risk of violence or electoral interference.

Finally, ISD recommends not relying solely on Facebook. An independent content moderation organization would help tech companies spot gaps in their moderation policies while helping them understand the role they play in the regional ecosystem, beyond any single group or platform. While Facebook is the hub of this dangerous activity, other apps such as Twitter and YouTube also play a role, according to the report.

Facebook did not immediately respond to a request for comment.
