Zuckerberg’s Shenanigans: Princeton and USC Researchers Find Racial Bias in Meta’s Ad Algorithms

Mark Zuckerberg (Drew Angerer/Getty)

A recent study by researchers from Princeton University and USC suggests that Meta’s algorithms for presenting educational ads exhibit signs of racial bias, particularly in the delivery of ads related to for-profit universities and those with a history of predatory marketing practices.

The Register reports that the research paper, titled “Auditing for Racial Discrimination in the Delivery of Education Ads,” is set to be presented at the ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT) in Rio de Janeiro, Brazil. The authors found that Meta’s algorithms disproportionately show ads for for-profit colleges and universities with historically predatory practices to Black users compared to ads for public universities.

Mark Zuckerberg Meta Selfie (Facebook)

This finding raises concerns that discrimination extends beyond the areas existing remedies cover, which have been limited to housing, employment, and credit ads. In 2019, the U.S. Department of Housing and Urban Development (HUD) charged Meta (then known as Facebook) with allowing housing advertisers to exclude people of a particular race from seeing their ads, in violation of the US Fair Housing Act. Meta settled the charges in June 2022, promising to develop a new system to address racial and other disparities caused by its personalization algorithms in housing ads.

The researchers’ method pairs education ads for seemingly equivalent opportunities, with one of the two tied to a historical racial disparity that ad delivery algorithms may propagate. If the ad for the for-profit college is shown to relatively more Black users than the ad for the public college, the platform’s algorithmic choices can be considered racially discriminatory.
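The core of that audit logic is a comparison of delivery shares between the two paired ads. The sketch below is a hypothetical illustration only, not code from the paper: the function name, the made-up impression counts, and the choice of a simple two-proportion z-test are all assumptions used to show how such a disparity check could be expressed.

```python
# Hypothetical illustration of the paired-ad audit idea (not the researchers' code):
# compare the share of Black users reached by a for-profit college ad against the
# share reached by a paired public college ad, and test whether the gap is significant.
from math import sqrt
from statistics import NormalDist

def delivery_disparity(black_forprofit, total_forprofit, black_public, total_public):
    """One-sided two-proportion z-test on the Black-user share of each ad's delivery."""
    p1 = black_forprofit / total_forprofit   # share of Black users, for-profit ad
    p2 = black_public / total_public         # share of Black users, public college ad
    pooled = (black_forprofit + black_public) / (total_forprofit + total_public)
    se = sqrt(pooled * (1 - pooled) * (1 / total_forprofit + 1 / total_public))
    z = (p1 - p2) / se
    p_value = 1 - NormalDist().cdf(z)        # small p-value: for-profit ad skews toward Black users
    return p1 - p2, z, p_value

# Made-up example numbers: the for-profit ad reached 5,200 Black users in 10,000
# impressions, the public college ad 4,400 in 10,000.
diff, z, p = delivery_disparity(5200, 10000, 4400, 10000)
print(f"difference in Black-user share: {diff:.1%}, z = {z:.2f}, p = {p:.4f}")
```

Because both ads target the same audience with the same budget, any remaining gap in who actually sees them is attributable to the platform's delivery choices rather than to the advertiser's targeting.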

Aleksandra Korolova, one of the paper’s authors, stated that while only Meta knows how its ad algorithms work, the researchers suspect that the observed algorithmic effects are driven by proxies and other historical data Meta may rely upon, rather than the direct use of race.

The findings could have legal consequences for Meta under the legal doctrine of disparate impact discrimination. The researchers argue that Meta has taken a narrow view of compliance, focusing on discrimination in housing, employment, and credit ads, while not adequately addressing algorithmic bias in ad delivery more broadly.

Meta spokesperson Daniel Roberts told The Register: “Addressing fairness in ads is an industry-wide challenge and we’ve been collaborating with civil rights groups, academics, and regulators to advance fairness in our ads system. Our advertising standards do not allow advertisers to run ads that discriminate against individuals or groups of individuals based on personal attributes such as race and we are actively building technology designed to make additional progress in this area.”

Read more at The Register here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.
