Facebook Chairman and CEO Mark Zuckerberg at a House Financial Services Committee hearing in Washington on October 23, 2019. Source: Erin Scott/Reuters.
  • Employees at Facebook have warned for years of the platform’s failure to police abusive and extremist content
  • Created more than 10 years ago, the platform currently operates in over 190 countries with more than 2.8 billion monthly users
  • However, efforts to tackle hate speech and misinformation have failed to keep up with the rate of expansion
  • Language and AI gaps, for example, allowed abusive posts to spread in countries where Facebook itself has determined the risk of real-world harm is high
  • Facebook has also failed to properly address human traffickers using its platforms to recruit, buy and sell what were referred to as “domestic servants”

Employees at Facebook have warned for years of the platform’s failure to police abusive and extremist content, focusing instead on becoming the global tech behemoth it is today, according to internal documents and interviews with former employees.

However, efforts to stop its products from becoming conduits for hate speech, inflammatory rhetoric and misinformation — some of which has been blamed for inciting violence — have failed to keep pace with the rate of expansion.

According to Reuters, internal documents show Facebook knows it hasn’t hired enough workers with the language skills and knowledge of local events necessary to identify objectionable posts, particularly in developing countries.

The files also show that the artificial intelligence systems used by Facebook to root out such content are frequently not up to the task and that the company has not made it easy for users to flag posts themselves.

The documents are among a swathe of disclosures made to the US Securities and Exchange Commission (SEC) by whistleblower Frances Haugen, a former product manager at Facebook who left the company in May.

Still, Facebook spokesperson Mavis Jones said in a statement that the company has native speakers tasked with reviewing content in more than 70 languages globally, as well as experts in humanitarian and human rights issues.

She said these teams are working to stop abuse on Facebook in places where there is a heightened risk of conflict and violence.

“We know these challenges are real and we are proud of the work we’ve done to date,” Ms Jones said.

But according to the documents provided by Ms Haugen, languages spoken outside the United States, Canada and Europe have been a stumbling block for Facebook’s automated content moderation.

In 2020, for example, the platform did not have screening algorithms — "classifiers" — to identify misinformation in Burmese, the language of Myanmar, or hate speech in the Oromo or Amharic languages spoken in Ethiopia.

These gaps allowed abusive posts to spread in countries where Facebook itself has determined the risk of real-world harm is high.

According to Reuters, three former Facebook employees who worked in the company's Asia Pacific and Middle East and North Africa offices over the past five years said they believed content moderation in those regions had not been a priority.

They argued that Facebook’s leadership did not understand the issues, and failed to devote enough staff and resources.

Ashraf Zeitoon, Facebook’s former head of policy for the Middle East and North Africa, who left in 2017, said the company’s approach to global growth has been “colonial,” focused on monetisation without safety measures.

Perhaps even more concerning is Facebook’s awareness of — and failure to address — human trafficking.

According to internal documents reviewed by CNN, such content is referred to internally as domestic servitude: “a form of trafficking of people for the purpose of working inside private homes through the use of force, fraud, coercion or deception.”

Earlier this year, an internal Facebook report noted that "gaps still exist in our detection of on-platform entities engaged in domestic servitude" and detailed how the company's platforms are used to recruit, buy and sell what were referred to as "domestic servants".

Facebook has reportedly known about human traffickers using its platforms since 2018. In 2019, the situation got so bad that Apple threatened to pull Facebook and Instagram’s access to its App Store, which the social media giant relies on to reach new users.

“Investors would have been very interested to learn the truth about Facebook almost losing access to the Apple App Store because of its failure to stop human trafficking on its products,” Ms Haugen wrote in an SEC complaint.

But despite the revelations, the company has tried to discredit the allegations.

Last week, Facebook’s Vice President of Communications John Pinette said that “a curated selection out of millions of documents at Facebook can in no way be used to draw fair conclusions about us”.
