WhatsApp Has Content Moderators Who Can Read Contents Of Chats: Report

There has been a storm brewing over WhatsApp’s privacy policies — eyebrows were raised earlier over a new WhatsApp privacy update which hinted that the app would share some data with Facebook — but even more disturbing aspects have now come to light.

A report by ProPublica says that WhatsApp has an extensive team which moderates user messages for content such as child porn, gore and excessive violence, and takes down messages which it deems are inappropriate. This by itself isn’t unusual, because most social networks employ similar teams which take down content that flouts terms and conditions. WhatsApp, though, claims that its chats are “end-to-end encrypted”, and no one apart from the sender and receiver can see private messages. The latest report however says that content moderators employed by WhatsApp are not only able to see users’ personal messages, but also able to take them down.

“WhatsApp has more than 1,000 contract workers filling floors of office buildings in Austin, Texas, Dublin and Singapore, where they examine millions of pieces of users’ content,” the ProPublica report says. “Seated at computers in pods organized by work assignments, these hourly workers use special Facebook software to sift through streams of private messages, images and videos that have been reported by WhatsApp users as improper and then screened by the company’s artificial intelligence systems. These contractors pass judgment on whatever flashes on their screen — claims of everything from fraud or spam to child porn and potential terrorist plotting — typically in less than a minute,” it adds.

The report claims that Facebook has “special software” which allows contract employees to read private chat messages. This flies in the face of WhatsApp repeatedly telling its users that no one other than the recipient and the sender are privy to the contents of chats.

While it’s possible that WhatsApp reads only the tiny fraction of messages that have been reported as inappropriate by users, the report does show that WhatsApp’s claims of end-to-end encryption are questionable — if WhatsApp employees are able to read the contents of messages, it proves that a system exists which allows WhatsApp to bypass its own encryption. And if such a system exists, there’s always potential for its misuse by WhatsApp employees, or for exploitation by third parties.

The privacy of WhatsApp chats has been in the eye of a storm lately. Last year, several private chat messages of Indian celebrities including Deepika Padukone were ‘leaked’, which led to WhatsApp running ads during the IPL about how its chats were secure. This year, WhatsApp sent users a pop-up which hinted that some WhatsApp data would be shared with parent company Facebook, and after users threatened to migrate to rival apps including Signal and Telegram, WhatsApp was once again forced to clarify that its chats were secure. With another report now suggesting that WhatsApp chats aren’t as fully encrypted as the company has previously led users to believe, public confidence in the privacy of the messaging platform might be at its lowest ebb yet.