Search engines are nice. But we can do far better with modern AI.

‘We are increasingly seeing the weaponization of context.’ — Claire Wardle

Search engines transformed the first decade of the millennium. Recommendation engines revolutionized the second decade. Neither, in its current form, is sufficient for addressing misinformation. They focus on discovery and rely primarily on relevance, but they are not much help with many other important information tasks, especially contextualization.

We need better tools to help people quickly contextualize media that they come across online.


Understanding decision-making systems and what that means for governance, corporations, and technology creators.

This is the first piece in the series ‘Reimagining Social Technologies’.
Most of my recent public work focuses on misinformation, online platforms, and the impacts of AI/ML. This may look somewhat tangential, but it is deeply connected, as will become clear over the course of the series.

We will not be able to address our urgent global crises without improving our systems and processes for decision-making and conflict resolution—our decision-systems. Improving these systems is also crucial for ensuring that both state and non-state powers incorporate human values; from governments, to corporations, to our technological creations.

Both of these challenges relate to what I…


This was originally provided as a public comment to the Facebook Oversight Board to inform its decision on President Trump’s Facebook account; that document can be found here. It is being shared here for ease of reading, and because of the media literacy connection between ‘literacy friction’ and the recently published brief on ‘contextualization engines’.

It does not advocate for Trump either being platformed or deplatformed from Facebook, but instead seeks to broaden the understanding of what Facebook is and broaden the options available to Facebook and the Oversight Board.

The questions that have been posed to the Oversight Board…


AI-augmented knowledge summarization, refactoring, & integration are about to transform the world. Again.

Technology creators, funders, and policymakers must understand how changes in knowledge operations can impact society.
This is intended to be a short primer making a series of claims about these impacts. Each claim could be a book (many are).

1. Knowledge operations matter.

Knowledge operations are ‘actions that involve the conveying or processing of knowledge’. For example, publishing and broadcasting are ‘distribution’ knowledge operations where one ‘conveys’ knowledge to a large audience with minimal feedback. Summarization is a ‘processing’ knowledge operation.
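
To make the taxonomy concrete, here is a toy sketch in Python of how these categories might be modeled. All of the class and field names are my own illustrative choices, not terminology from the primer:

```python
# A toy model of the knowledge-operation taxonomy described above.
# Names are illustrative assumptions, not the author's own framework code.
from dataclasses import dataclass
from enum import Enum

class OperationKind(Enum):
    DISTRIBUTION = "distribution"  # conveys knowledge to an audience (publishing, broadcasting)
    PROCESSING = "processing"      # transforms knowledge (e.g. summarization)

@dataclass
class KnowledgeOperation:
    name: str
    kind: OperationKind

operations = [
    KnowledgeOperation("publishing", OperationKind.DISTRIBUTION),
    KnowledgeOperation("broadcasting", OperationKind.DISTRIBUTION),
    KnowledgeOperation("summarization", OperationKind.PROCESSING),
]
```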

Many core ‘societal activities’—from conflict mediation to identity formation—both (1) require knowledge operations and (2) operate in a world awash with ongoing knowledge…


Metrics are key to how product teams at tech companies function


After the 2020 election, a Twitter dashboard that I had first prototyped four years earlier started going wild. It estimates misinformation prevalence by monitoring "the percent of retweets and likes pointing toward domains that had made a habit of sharing misinformation." This metric had been rising throughout the election cycle, from a low of around 10% to almost 20% on November 3rd. Then it jumped to 30% over the next week and stayed there for almost a month. Something was likely very wrong.
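
As a rough illustration of how a metric like this could be computed (a minimal sketch, not the dashboard's actual implementation; the domain list and data shapes are my own assumptions):

```python
# A minimal sketch of a misinformation-prevalence metric: the percent of
# retweets and likes pointing toward domains that habitually share
# misinformation. The domain list and input format are illustrative.

MISINFO_DOMAINS = {"example-misinfo-site.com", "another-suspect-domain.net"}  # hypothetical

def misinfo_prevalence(engagements):
    """engagements: iterable of (domain, retweets, likes) tuples for shared links."""
    total = 0
    misinfo = 0
    for domain, retweets, likes in engagements:
        weight = retweets + likes
        total += weight
        if domain in MISINFO_DOMAINS:
            misinfo += weight
    return 100.0 * misinfo / total if total else 0.0

sample = [
    ("example-misinfo-site.com", 120, 300),
    ("reputable-news.org", 900, 2100),
]
print(f"{misinfo_prevalence(sample):.1f}% of engagement points to misinfo domains")
```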

Tracking this sort of change is a valuable step toward understanding the platform’s impact. It…


Improvement in image synthesis, from https://arxiv.org/abs/1406.2661, https://arxiv.org/abs/1511.06434, https://arxiv.org/abs/1606.07536, https://arxiv.org/abs/1710.10196, and https://arxiv.org/abs/1812.04948, via Ian Goodfellow

How, when, and why synthetic media can be used for harm

This post is an excerpted section from a working paper with Jess Whittlestone (shared in 2019; only minimal updates were needed). While the full paper was focused on synthetic media research, this section is far more broadly applicable and often referenced in other contexts—it applies in general to malicious use of technologies, from video generation, to language models (e.g. GPT-3), to cryptocurrencies. This piece jumps into the meat, so for more background on this topic, see the paper overview here.

We aim to connect the dots between the theoretical potential for the malicious use (mal-use) of synthetic media technology…


Rumors on WhatsApp have led to mob violence. Here’s how to prevent them — without sacrificing privacy.

I originally published this in Bloomberg Opinion in 2019. Reprinted with permission. The opinions expressed are those of the author. For more detail addressing potential obstacles, see this follow-up post.

Last July, an engineer at Accenture was beaten to death by a crowd that thought he was a child kidnapper. They were angry, violent and completely wrong. The rumors about the man were “fake news” spread on WhatsApp, an incredibly popular messaging service owned by Facebook.

In response, the Indian government now wants to force companies to take several steps — including breaking encryption — that could compromise privacy and…


How can we fortify our “Knowledge Pipeline” in the face of synthetic media?

Close to two years ago, I started applying the framework described below in order to explore ways to reduce the negative impacts of synthetic media (and other forms of misinformation). It’s not perfect, but I’ve found it useful enough to share in venues around the world, and I am continuing to expand on it with others. Using frameworks like this as a form of shared language can help us make sense of complex problems and share approaches for addressing them.

If we can’t distinguish fact from fiction, or reality from fakery, we can’t make effective decisions as a society. Synthetic media technology…


Considerations and potential release practices for machine learning

Jess Whittlestone and I recently distributed a working paper exploring the challenges and options around ensuring that machine learning research is not used for harm, focusing on the challenges of synthetic media. This post is just a brief overview, so read or skim the full paper here—it was written specifically to be skimmable and referenceable! (Here is the citable arXiv link, though it might be missing some minor fixes given update delays.)

Over the last few years, research advances — primarily in machine learning (ML) — have enabled the creation of increasingly convincing and realistic synthetic media: artificially generated…


Totally flirting for your password…

And how should that change the way we approach security and disclosure?

The technology now exists to create tools that gather public information on people and spear-phish them — automatically, at scale. Or to create a system that uses calls with realistic forged voices to impersonate someone. These new attack capabilities are being made possible by modern AI and may have significant implications for how we should approach security disclosure.

So what exactly is new?

Two examples: advances in AI enable conversation and impersonation

  • We can make text bots that far more realistically imitate a conversation with a human (see the sketch after this list).
  • We can make human-quality speech from text. We can even imitate the voice of a particular person extremely well.
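
As a rough illustration of the first point: with today’s open-source tooling, generating plausible conversational text takes only a few lines. This is a hedged sketch using Hugging Face’s transformers library; the model choice and prompt are illustrative assumptions, not a claim about what any real attacker uses.

```python
# A minimal sketch of conversational text generation with an off-the-shelf
# model via Hugging Face's transformers library. "gpt2" is an illustrative
# choice; larger modern models produce far more convincing output.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Friend: How was your weekend?\nMe:"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```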

Aviv Ovadya

Founder of the Thoughtful Technology Project & GMF non-res fellow. Prev Tow fellow & Chief Technologist @ Center for Social Media Responsibility. av@aviv.me
