Social media giant Facebook touts WhatsApp as a safe messaging platform where users’ chats are end-to-end encrypted. A recent report has found that WhatsApp may nevertheless allow content moderators to access users’ messages in certain cases. According to ProPublica, more than 1,000 contract workers in office buildings in Austin, Texas; Dublin; and Singapore review WhatsApp content. These hourly workers, the report says, can view only messages that users have reported: moderators see a user’s messages, images, and videos only after a recipient hits the report button to flag the content to WhatsApp.
The ProPublica report says this message review is one element of a broader monitoring operation in which the company also examines material that is not encrypted, including data about the sender and their account. A 49-slide internal marketing presentation from December 2020, obtained by ProPublica, emphasizes the “fierce” promotion of WhatsApp’s “privacy narrative” and compares the brand character to “the Immigrant Mother.” The marketing material makes no mention of the company’s content moderation efforts.
WhatsApp’s director of communications, Carl Woog, acknowledged that teams of contractors in Austin and elsewhere review WhatsApp messages to identify and remove abusers. However, he told the publication that Facebook does not consider this work to be content moderation. “The decisions we make around how we build our app are focused around the privacy of our users, maintaining a high degree of reliability and preventing abuse,” Woog was quoted as saying in the report.
The broader ProPublica investigation, which draws on data, documents, and dozens of interviews, details how WhatsApp’s security assurances have been undermined since Facebook purchased the platform in 2014.