Facebook: Social Media Platform Under Pressure as Report Finds Terrorist Content Rife


Researchers at the Institute for Strategic Dialogue (ISD) found that Facebook does too little to prevent posts promoting official al-Shabaab and Islamic State propaganda from appearing on its site. 




In a 2022 report published at the end of a two-year investigation, researchers lay out how Facebook’s moderators are not equipped to handle the content appropriately, especially since terrorists exploit language gaps in Facebook’s moderation team.  




As previously reported, Facebook’s moderation has consistent ‘blind spots’ when it comes to languages such as Somali, Kiswahili, and Arabic: something that the ISD claims is playing into the hands of governments conducting human rights abuses, as well as high-profile terrorist organisations such as al-Shabaab and the Islamic State group.




The report describes one example of a “Somali-language ‘media outlet’” which “shared four official al-Shabaab videos.” These were published on the site in October 2021 and, despite carrying al-Shabaab’s official branding and being, in the report’s words, “in no shape or form disguised to get past moderators,” remained publicly available on Facebook for months.




Furthermore, the report argues that these gaps in the moderation of East African languages are directly responsible for the deaths of civilians. For example, in January 2019, Somali militants belonging to al-Shabaab stormed the compound of a luxury hotel in Nairobi, the capital of Kenya. A government investigation later found that the attack was coordinated on Facebook, and that the account used remained “undetected for six months until after the attack.”




This article examines the findings of the ISD report and other investigations into terrorist activity on Facebook.




The report comes amid a range of ongoing controversies for Meta, which owns Facebook along with popular social media and messaging apps such as Instagram and WhatsApp. The company has been accused of removing posts offering abortion pills to women following the US Supreme Court ruling that overturned Roe v Wade, of "whitewashing" a human rights report on India, and of giving gun-sellers on the platform as many as ten strikes before removing them from the site.




Facebook’s policy on dangerous individuals and organisations states that “we remove praise, substantive support and representation of Tier 1 entities, as well as their leaders, founders, or prominent members,” defining “Tier 1 entities” to include “terrorist, hate and criminal organisations.” Tier 1 specifically covers groups designated by the US government as Foreign Terrorist Organisations, a list that includes both al-Shabaab and the Islamic State group.




Yet despite this policy, the ISD report identified posts making ‘brazen’ shows of support for both al-Shabaab and the Islamic State. One publicly visible Somali-language page shared four separate official al-Shabaab videos over a three-week period in October 2021. The videos were viewed over 53,000 times and shared by almost 18,000 users. These posts did not even attempt to obscure their branding to bypass moderators, yet they remained publicly visible on Facebook for months.




Another example is an al-Shabaab assassination video shared by five different users on Facebook, every copy carrying blatant al-Shabaab media branding. The video received only a sensitive-content warning, despite clearly qualifying as ‘support and representation of Tier 1 entities’ under Facebook’s removal policy. Clearly, Facebook’s moderators fail to recognise al-Shabaab’s branding, a severe flaw in moderation.




The report additionally explains that Islamic State networks are more proficient at hiding their branding, apparently in an attempt to bypass moderation. Researchers discovered Islamic State content disguised under Netflix or Amazon Prime branding, suggesting that supporters are well aware of Facebook’s moderation policies. The BBC reported in 2020 that one network linked to the Islamic State mixed propaganda with content from real news outlets, even adding the BBC News theme music, in an attempt to dodge moderators.




It’s not just Facebook’s problem: the Kenyan edition of Nation, which conducted further spot-checks on the ISD report, found extremist content, including “several violent jihad sermons instructing audiences to pick up arms,” on TikTok, a relative newcomer to the social media scene with an unprecedented one billion monthly active users worldwide. The ISD also found that parts of the network extended to Twitter and Telegram.




When approached with questions by Nation, a Facebook spokesperson said “We've already removed a number of these Pages and Profiles and will continue to investigate once we have access to the full findings. We don't allow terrorist groups to use Facebook, and we remove content praising or supporting these organisations when we become aware of it.” 




In response to the ISD’s accusation of a blind spot in its moderation teams, the spokesperson said: “We have specialized teams — which include native Arabic, Somali and Swahili speakers — dedicated to this effort. We know these adversarial groups are continuously evolving their tactics to try and avoid detection by platforms, researchers, and law enforcement. That’s why we continue to invest in people, technology, and partnerships to help our systems stay ahead.”
In a speech in October 2019, Facebook’s founder and CEO Mark Zuckerberg boasted that “Our AI systems identify 99 percent of the terrorist content we take down before anyone even sees it. This is a massive investment.” 




But all this statistic means is that, of the terrorist content Facebook itself identifies and takes down, 99% is flagged by automated systems before users report it. Facebook cannot know how much terrorist content it fails to recognise as breaching its rules, and it refuses to disclose the number of user reports submitted to the company about terrorist content.
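To see why the distinction matters, here is a minimal arithmetic sketch in Python using entirely hypothetical numbers (the count of undetected posts is exactly what Zuckerberg’s statistic cannot reveal): a 99% AI share of removals is fully compatible with a low overall detection rate.

# All figures below are hypothetical, for illustration only; they are not Facebook data.
removed_total = 10_000   # posts Facebook identified and removed as terrorist content
flagged_by_ai = 9_900    # removals flagged by AI before anyone saw or reported them
undetected = 40_000      # terrorist posts never identified at all; unknowable from the stat

# The figure Zuckerberg cites: AI's share of the content Facebook already removes.
print(f"AI share of removals: {flagged_by_ai / removed_total:.0%}")  # -> 99%

# The figure that matters to users: how much of ALL terrorist content gets caught.
print(f"Overall detection rate: {removed_total / (removed_total + undetected):.0%}")  # -> 20%

In this illustration the headline number stays at 99% even though four in five terrorist posts are never caught at all.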




Furthermore, appearing before a congressional panel in 2018, Zuckerberg stated: “When people ask if we’re a media company what I heard is, ‘Do we have a responsibility for the content that people share on Facebook,’ and I believe the answer to that question is yes.”




The ISD report is far from the only investigation to find concerning volumes of terrorist-linked content on Facebook.




In a 2019 disclosure, an anonymous whistleblower working with the National Whistleblower Center (NWC) described conducting simple searches of the English and Arabic names of several terrorist groups on Facebook. These simple, public searches returned the profiles of hundreds of self-identified terrorists: people openly sharing terrorist and extremist content on public Facebook pages that anyone could search for and access.




The whistleblower tracked these accounts over a five-month period in 2018 and, shockingly, found that despite Facebook’s assurances about its AI-driven removals, less than 30% of these profiles had been removed during that period.




Moreover, just 38% of the accounts brazenly sharing the symbols of terrorist organisations had been removed. It is clear that Facebook’s moderation teams simply miss the vast majority of terrorist content on the platform, content for which Zuckerberg himself claims responsibility.




A big question mark hangs over the role social media companies will play in countering radicalisation.




The EU’s terrorist content regulation, which came into force in June 2022, gives social media companies such as Facebook, Google, and Twitter one hour to remove terrorist content after a member state flags it, on pain of a fine of up to 4% of global revenue. The law aims to crack down on content such as viral livestreams of terrorist attacks like the 2019 Christchurch mosque shootings.




Nation interviewed Emma Wabuke, a PhD candidate studying radicalisation at Cambridge University, who said: “The platforms have to handle this carefully, given the freedom of speech and privacy issues concerned. But ultimately, they have an opportunity more than most to moderate the content on their platforms in good time.”




“Their role, however, should not extend to informing law enforcement, unless there are specific orders that have been enforced by the courts.” 




The ISD strikes a similar tone, calling for education about extremism and propaganda at the community level: “We must educate young people so that they are armed with the knowledge to make informed decisions about extremism and to recognise propaganda when they see it. Education is more effective than intervention.” 




An author of the ISD report, Moustafa Ayad, told the BBC that "The tactics we outline in our report are shifting as we speak. Without a clear understanding of these networks, and their behaviours, responses reliant on takedowns do little to quell ISIS-support expansion across our primary platforms." 




Facebook has a terrorism problem. The volume of extremist content and moderators’ apparent inability to take it down quickly is a problem not just for Facebook but, as the Nairobi and Christchurch attacks show, for the world as a whole. It is a prime example of how online activity leads to real-world consequences.




It remains to be seen what impact the EU regulation will have on this activity. Although Facebook is responsible for its own content, governments and multinational treaties must put pressure on tech companies to use their power to keep extremist content off their platforms, or face the proliferation of extremist ideology and the further rise of Islamist and white-supremacist terror groups. The free world depends on it.


 


Edited by: Whitney Edna Ibe


 


Cover Image: Mark Zuckerberg speaks in front of Facebook's logo in 2018. From Wikimedia Commons, Anthony Quintano.


 


