How to tackle misinformation on WhatsApp

Rumors on WhatsApp have led to mob violence. Here’s how to prevent them — without sacrificing privacy.

Aviv Ovadya
4 min read · Jun 27, 2020


I originally published this in Bloomberg Opinion in 2019. Reprinted with permission. The opinions expressed are those of the author. For more detail on addressing potential obstacles, see this follow-up post.

Last July, an engineer at Accenture was beaten to death by a crowd that thought he was a child kidnapper. They were angry, violent and completely wrong. The rumors about the man were “fake news” spread on WhatsApp, an incredibly popular messaging service owned by Facebook.

In response, the Indian government now wants to force companies to take several steps — including breaking encryption — that could compromise privacy and security for their users. But what if there were a way to combat misinformation on the platform while still maintaining privacy?

Most of WhatsApp’s 1.5 billion users use the app to communicate with friends, conduct business and stay in touch with family. But it’s also becoming a worldwide conduit for political disinformation, anti-vaccine fearmongering, and mob-rousing rumors. Especially in regions with poor education and governance, the consequences can be deadly and destabilizing.

There’s no easy fix for this problem. WhatsApp messages are encrypted end-to-end, meaning that no one besides the sender and recipient — not even WhatsApp itself — can read them. The company can use metadata to determine who contacted whom, and how often, but the content of each message is inaccessible. This type of encryption is critical for privacy: It ensures that WhatsApp can’t target you with ads based on what you write or lose your messages to hackers, and it offers crucial protections for users in authoritarian regimes. But it also makes addressing misinformation extremely hard.

To its credit, WhatsApp has tried to get around these limitations. It has launched a public-education campaign, limited message forwarding, and added a label that shows when a message has been passed on from others, which could be an indicator of misinformation. External groups of “rumor busters,” including the cross-newsroom collaboration Verificado, ask WhatsApp users to forward them potential misinformation so they can publicly debunk it. But while these efforts are a decent start, they haven’t yet caught on nearly as well as the rumors have.

In this case, there seems to be a fundamental trade-off between privacy and information quality. But WhatsApp may not actually have to choose.

For starters, it could create an updatable list of rumors and fact-checks, similar to what Facebook uses to identify misinformation in its news feed. Each phone could regularly receive a portion of this list tailored to match what the user would be likely to see (based on metadata the app already collects, such as location). Whenever users post or receive a link or rumor that’s on the list, WhatsApp could display a fact-check, related article or other context, just as Facebook has started to. It could even warn them before they share known misinformation. The beauty of this approach is that it doesn’t require WhatsApp to collect any new information about anyone. It maintains privacy while directly addressing misinformation.
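To make the idea concrete, here is a minimal sketch of what on-device matching could look like. It is hypothetical, not WhatsApp's implementation: the class names, the hashing scheme, and the example rumor are all invented for illustration. The phone periodically downloads a small, region-tailored list of rumor fingerprints and fact-checks, then compares messages against that list locally, so no message content ever has to leave the device.

```python
# Hypothetical sketch of on-device rumor matching; not WhatsApp's actual code.
# The phone periodically downloads a region-tailored list of fingerprints for
# known rumors plus their fact-checks, then checks messages locally, so no
# message content ever has to leave the device.

import hashlib
from dataclasses import dataclass
from typing import Optional


@dataclass
class FactCheck:
    claim: str       # short description of the debunked rumor
    verdict: str     # e.g. "False" or "Missing context"
    source_url: str  # link to the fact-checker's article


def fingerprint(content: str) -> str:
    """Hash normalized text so the distributed list never contains raw messages."""
    normalized = " ".join(content.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()


class LocalRumorIndex:
    """The client-side copy of the rumor/fact-check list."""

    def __init__(self) -> None:
        self._index: dict[str, FactCheck] = {}

    def update(self, downloaded_entries: dict[str, FactCheck]) -> None:
        """Merge the latest list pushed for this user's region."""
        self._index.update(downloaded_entries)

    def check(self, message_text: str) -> Optional[FactCheck]:
        """Return a fact-check if the message matches a known rumor."""
        return self._index.get(fingerprint(message_text))


# Example: warn the user before they forward a known (invented) rumor.
index = LocalRumorIndex()
index.update({
    fingerprint("Strangers in a white van are kidnapping children"): FactCheck(
        claim="Child-kidnapping gang rumor",
        verdict="False",
        source_url="https://example.org/fact-check/kidnapping-rumor",
    )
})

match = index.check("strangers in a WHITE van are kidnapping children")
if match:
    print(f"Heads up: {match.claim} ({match.verdict}). See {match.source_url}")
```

A real system would also need perceptual fingerprints for images and video, since exact hashing breaks as soon as content is re-encoded or lightly edited; the point here is only that the matching can happen entirely on the phone.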

To make such a system work, though, a few questions would have to be answered — notably, how would WhatsApp find out about misinformation, and how could it be curated? Let’s consider each in turn.

First, WhatsApp could add an “Is this real?” feature that lets users forward suspicious-looking messages or media to a pool of trusted verifiers, similar to the system pioneered by Facebook and the International Fact-Checking Network. Once the content has been reviewed, the user who reported it could be notified if it had been debunked or if additional context was available. That information would also be made available to all users, whether they reported it or not. This would give everyone the ability to discreetly flag questionable content, even if shared by friends or family, while allowing WhatsApp to alert users about misinformation without infringing on their privacy.
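As a rough illustration only (the names and flow below are invented, not an actual WhatsApp or fact-checking API), the reporting loop might look something like this: the user shares only the suspicious content itself, a trusted verifier attaches a verdict, and that verdict can then be folded into the shared fact-check list and surfaced back to the reporter.

```python
# Hypothetical "Is this real?" reporting loop; all names are invented for
# illustration. Only content the user explicitly forwards is ever shared.

from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Report:
    report_id: int
    content: str                   # just the forwarded message or media reference
    verdict: Optional[str] = None  # filled in once a trusted verifier reviews it


@dataclass
class VerifierQueue:
    """A pool of trusted verifiers working through user-submitted reports."""
    reports: list[Report] = field(default_factory=list)
    next_id: int = 1

    def submit(self, content: str) -> int:
        """User taps 'Is this real?' on a suspicious message."""
        report = Report(self.next_id, content)
        self.reports.append(report)
        self.next_id += 1
        return report.report_id

    def publish_verdict(self, report_id: int, verdict: str) -> Report:
        """A verifier records a verdict; from here the result could be added to
        the shared fact-check list and the original reporter notified."""
        for report in self.reports:
            if report.report_id == report_id:
                report.verdict = verdict
                return report
        raise KeyError(f"No such report: {report_id}")


queue = VerifierQueue()
rid = queue.submit("Strangers in a white van are kidnapping children")
reviewed = queue.publish_verdict(rid, "False")
print(f"Report {rid} reviewed: {reviewed.verdict}")
```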

Second, fact-checkers would need to know what to focus on. A brute-force approach to every suspicious piece of content won’t work. Instead, WhatsApp should help journalists triage. Facebook does this by analyzing what content is being shared the most, but WhatsApp’s encryption means that it can’t take the same approach. A potential solution to this dilemma is “differential privacy,” a technique used by Apple to extract insights from large sets of data — such as determining what’s popular, or what websites consume the most battery life — without compromising any particular individual’s privacy.
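One simple flavor of this idea is local differential privacy via randomized response: each phone answers a yes/no question about whether it has seen a given rumor fingerprint, but deliberately lies some of the time, so no individual report can be trusted while the aggregate still reveals roughly how widespread the rumor is. The sketch below is illustrative only; Apple's deployments, and any real WhatsApp system, use more sophisticated mechanisms.

```python
# Minimal sketch of local differential privacy via randomized response.
# Each phone reports only a noisy yes/no about whether it has seen a given
# rumor fingerprint; individual answers are deniable, but the aggregate
# still estimates how widespread the rumor is. Illustrative only.

import random


def randomized_response(saw_rumor: bool, p_truth: float = 0.75) -> bool:
    """Answer truthfully with probability p_truth; otherwise flip a fair coin."""
    if random.random() < p_truth:
        return saw_rumor
    return random.random() < 0.5


def estimate_prevalence(noisy_reports: list[bool], p_truth: float = 0.75) -> float:
    """Debias the aggregate: observed yes-rate = p_truth * true_rate + (1 - p_truth) * 0.5."""
    yes_rate = sum(noisy_reports) / len(noisy_reports)
    return (yes_rate - (1 - p_truth) * 0.5) / p_truth


# Simulate 100,000 phones where 20% actually saw a particular rumor.
random.seed(0)
truly_saw = [random.random() < 0.20 for _ in range(100_000)]
reports = [randomized_response(saw) for saw in truly_saw]
print(f"Estimated share of users who saw the rumor: {estimate_prevalence(reports):.3f}")
```

With enough reports the estimate converges on the true share, which is all fact-checkers need in order to decide what to prioritize, while no individual's reading habits are revealed.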

Ideas like these won’t entirely solve the misinformation problem, which will require major changes across society. But they can protect well-meaning friends and relatives from being tricked into forwarding dubious content. They can provide a sort of real-time contextual education, helping users interpret and judge the information they receive. Perhaps, too, they can help us have our cake and eat it: maintaining privacy, supporting public discourse, and protecting people from deadly rumors.

Aviv Ovadya is the founder and CEO of the Thoughtful Technology Project, and was previously Chief Technologist at the Center for Social Media Responsibility at UMSI. He is also a non-resident fellow at the German Marshall Fund’s Alliance for Securing Democracy.

For more detail on addressing potential obstacles, please see this follow-up piece. If you believe there are important aspects not addressed here, or if you work on end-to-end encrypted systems, please get in touch at av@aviv.me or on Twitter at @metaviv.

My other writing can be found at aviv.me/writing, on Twitter, and on my mailing list.
