In a recent article, ProPublica outlines how it believes Facebook undermines the privacy of its two billion WhatsApp users worldwide.
In an article titled “How Facebook Undermines Privacy Protections for Its 2 Billion WhatsApp Users,” ProPublica details the privacy issues those users face and how Facebook moderates the platform while assuring users that all chats are encrypted.
ProPublica notes that in March 2019, Facebook CEO Mark Zuckerberg discussed the company’s plans to shift into a more privacy-focused area, writing: “I believe the future of communication will increasingly shift to private, encrypted services where people can be confident what they say to each other stays secure and their messages and content won’t stick around forever. This is the future I hope we will help bring about. We plan to build this the way we’ve developed WhatsApp.”
ProPublica further notes that WhatsApp regularly assures users that all chats on the platform are encrypted and private. In testimony to the U.S. Senate in 2018, Zuckerberg himself said: “We don’t see any of the content in WhatsApp.” However, ProPublica contends that this is not the case, writing:
Those assurances are not true. WhatsApp has more than 1,000 contract workers filling floors of office buildings in Austin, Texas, Dublin and Singapore, where they examine millions of pieces of users’ content. Seated at computers in pods organized by work assignments, these hourly workers use special Facebook software to sift through streams of private messages, images and videos that have been reported by WhatsApp users as improper and then screened by the company’s artificial intelligence systems. These contractors pass judgment on whatever flashes on their screen — claims of everything from fraud or spam to child porn and potential terrorist plotting — typically in less than a minute.
Policing users while assuring them that their privacy is sacrosanct makes for an awkward mission at WhatsApp. A 49-slide internal company marketing presentation from December, obtained by ProPublica, emphasizes the “fierce” promotion of WhatsApp’s “privacy narrative.” It compares its “brand character” to “the Immigrant Mother” and displays a photo of Malala Yousafzai, who survived a shooting by the Taliban and became a Nobel Peace Prize winner, in a slide titled “Brand tone parameters.” The presentation does not mention the company’s content moderation efforts.
WhatsApp’s director of communications, Carl Woog, acknowledged that teams of contractors in Austin and elsewhere review WhatsApp messages to identify and remove “the worst” abusers. But Woog told ProPublica that the company does not consider this work to be content moderation, saying: “We actually don’t typically use the term for WhatsApp.” The company declined to make executives available for interviews for this article, but responded to questions with written comments. “WhatsApp is a lifeline for millions of people around the world,” the company said. “The decisions we make around how we build our app are focused around the privacy of our users, maintaining a high degree of reliability and preventing abuse.”
ProPublica notes that Facebook acknowledges its other platforms, Facebook and Instagram, are heavily moderated, with 15,000 employees examining content every day. The company also releases quarterly transparency reports detailing how many Facebook and Instagram accounts have been “actioned” for abusive content, but it publishes no such report for WhatsApp.
Read more at ProPublica here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan or contact via secure email at the address lucasnolan@protonmail.com