Pay Up, Zuck: Amnesty International Urges Facebook to Pay Reparations for Role in Rohingya Conflict

Mark Zuckerberg (Getty/Chip Somodevilla)

Facebook (now known as Meta) is facing new calls from Amnesty International to pay reparations to the Rohingya people for the company’s alleged role in inciting ethnic violence in Myanmar.

TechCrunch reports that the pressure on Facebook stems from a new Amnesty International report examining the company’s alleged role in inciting ethnic violence against the Rohingya in Myanmar.

Rohingya refugees carry the body of Mohibullah, an international representative of ethnic Rohingya refugees, for burial, in Kutupalong, Bangladesh, Thursday, Sept. 30, 2021. Rights groups and the U.S. government have called for a full investigation into the killing of a Rohingya leader in a refugee camp in southern Bangladesh. Police say Mohibullah was shot by unknown attackers at the Kutupalong refugee camp late Wednesday. (AP Photo/ Shafiqur Rahman)

A recent report from Amnesty International, which provides what the organization calls a “first-of-its-kind, in-depth human rights analysis” of the role played by Facebook in the crimes committed against the Rohingya in 2017, found that the tech giant’s contribution to the genocide was not merely that of “a passive and neutral platform” that failed to respond adequately, but that the company’s core business model of behavioral advertising actively stoked hatred against the Rohingya people for profit.

Amnesty concludes in its report: “Meta’s content-shaping algorithms proactively amplified and promoted content on the Facebook platform which incited violence, hatred, and discrimination against the Rohingya.”

Amnesty points to Facebook’s tracking-based advertising business model, which it calls “invasive profiling and targeted advertising,” as the company’s core contribution to the genocide. Amnesty says this business model feeds off of “inflammatory, divisive and harmful content,” implicating Facebook in the incitement of violence against the Rohingya.

Facebook was warned in 2018 by UN human rights investigators that the platform was contributing to the spread of hate speech and violence against Myanmar’s Muslim minority. Facebook admitted that it was “too slow to prevent misinformation and hate” from spreading on the platform, but did not accept that its engagement-maximizing algorithms fueled ethnic violence.

In an executive summary of its report, Amnesty references the Facebook papers released by whistleblower Frances Haugen, stating: “This evidence shows that the core content-shaping algorithms which power the Facebook platform — including its news feed, ranking, and recommendation features — all actively amplify and distribute content which incites violence and discrimination, and deliver this content directly to the people most likely to act upon such incitement.” It adds:

As a result, content moderation alone is inherently inadequate as a solution to algorithmically-amplified harms. Internal Meta documents recognize these limitations, with one document from July 2019 stating, ‘we only take action against approximately 2% of the hate speech on the platform’. Another document reveals that some Meta staff, at least, recognize the limitations of content moderation. As one internal memo dated December 2019 reads: ‘We are never going to remove everything harmful from a communications medium used by so many, but we can at least do the best we can to stop magnifying harmful content by giving it unnatural distribution.’

This report further reveals that Meta has long been aware of the risks associated with its algorithms, yet failed to act appropriately in response. Internal studies stretching back to as early as 2012 have consistently indicated that Meta’s content-shaping algorithms could result in serious real-world harms. In 2016, before the 2017 atrocities in Northern Rakhine State, internal Meta research clearly recognized that ‘[o]ur recommendation systems grow the problem’ of extremism. These internal studies could and should have triggered Meta to implement effective measures to mitigate the human rights risks associated with its algorithms, but the company repeatedly failed to act.

Read more at TechCrunch here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan
