Making Sense of Deepfake Mitigations

How can we fortify our “Knowledge Pipeline” in the face of synthetic media?

Aviv Ovadya
Feb 5, 2020

Close to two years ago, I started applying the framework described below in order to explore ways to reduce the negative impacts of synthetic media (and other forms of misinformation). It’s not perfect, but I’ve found it useful enough to share in venues around the world, and I am continuing to expand on it with others. Using frameworks like this as a form of shared language can help us make sense of complex problems and share approaches for addressing them.

If we can’t distinguish fact from fiction, or reality from fakery, we can’t make effective decisions as a society. Synthetic media technology poses a variety of challenges to our ability to discern, and the real-world impacts are beginning to be felt in finance, geopolitics, journalism, and elsewhere. Of course, the underlying tools and problems are not new — manipulated media has existed for as long as media itself — but the scope of likely societal impact has increased dramatically as the costs of manipulation shrink by orders of magnitude.

There are two “dual deepfake challenges” to address. First, synthetic media technology enables deception — fakery can be easier and more impactful. Second, the possibility of such deception can weaken the force of true incriminating evidence — it can make real verification more difficult and less effective.

There is no easy fix to either challenge. We have no choice but to grapple with many complex, intermingled mitigations that each address just some of the likely impacts. Thankfully, we can at least tame some of that complexity. We can think about addressing the challenge of synthetic media as “fortifying our knowledge pipeline.”

The Knowledge Pipeline

The “Knowledge Pipeline” describes the ways in which content moves through the ecosystem, from creation, to distribution, to belief, to impact.

The core Knowledge Pipeline framework, showing the progression from creation, to distribution, to believability, to impact (and the actors involved in each phase). I have also referred to a more complex variant, with feedback loops between the phases, as the “Knowledge Lifecycle.”

For example, we can consider the following questions around potentially synthesized videos:

  • How easy is it to create a manipulated — or real — video?
  • How easy is it to distribute a video so that people actually see it?
  • What influences whether or not someone believes the video?
  • What determines which actions are taken based on the resulting beliefs, and how society is impacted, from courts to journalists to diplomats?

Analogous questions can also apply to other forms of media such as audio and text.

At each stage in the pipeline, we can consider levers that can be wielded by, or that act on, the key actors at that stage (creators, distributors, recipients, and societal systems). Levers are ways to address the dual challenges: decreasing the ROI of fake content, and increasing the ROI of real content.

Decrease the ROI of fake content, and increase the ROI of real content.

The knowledge pipeline is of course a simplified model. In practice, there are feedback loops, where e.g. belief and impact influence distribution (we can incorporate those feedback loops to make the slightly more complex “Knowledge Lifecycle” diagram). A lever may also address aspects of multiple stages, or may be situated in one stage but impact others (e.g., a watermarking system might be added at content creation, but only have an effect on a downstream societal system). The framework is applicable (or adaptable) to the majority of mitigations, helps make relatively distinct functions clear, and provides the basis for a visual language for exploring systems, mitigations, and feedback loops.
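
To make the structure concrete, below is a minimal sketch of the pipeline as a small data model (in Python, purely illustrative: the stage and actor names come from the description above, and everything else is an assumption), including the feedback edges that turn the one-way pipeline into the “Knowledge Lifecycle.”

```python
# Minimal, illustrative sketch of the Knowledge Pipeline as a data structure.
# Stage names and actors follow the framework above; the feedback edges
# make it the "Knowledge Lifecycle" variant rather than a one-way pipeline.

from dataclasses import dataclass


@dataclass
class Stage:
    name: str          # e.g. "creation"
    actors: list[str]  # the key actors who hold levers at this stage


PIPELINE = [
    Stage("creation", ["creators", "synthetic media tool makers"]),
    Stage("distribution", ["distributors", "platforms"]),
    Stage("belief", ["recipients"]),
    Stage("impact", ["societal systems (courts, journalists, diplomats)"]),
]

# Feedback edges (from-stage index -> to-stage index): belief and impact
# feeding back into distribution, as described above.
FEEDBACK = [(2, 1), (3, 1)]

if __name__ == "__main__":
    for stage in PIPELINE:
        print(f"{stage.name}: {', '.join(stage.actors)}")
```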

Additional Dimensions

In addition, it’s often valuable to consider the governance of those levers — how are they controlled? Some innovations are not levers themselves, but ways to ensure that using a lever does not decrease freedom or otherwise undermine human rights.

At each stage, we can consider both levers and governance.

There are a variety of other dimensions worth considering, such as whether a mitigation is meant to be preventative and/or curative; who might implement or fund it; and whether it focuses on good actors or malicious actors.
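
As a rough illustration of how a mitigation could be tagged along these dimensions (a sketch only; the field names are illustrative shorthand, not canonical framework terms), one might record its stage, lever, governance, and the dimensions above like this, using the watermarking example from the next section:

```python
# Illustrative only: tagging a mitigation with its pipeline stage(s), lever,
# governance, and the additional dimensions discussed above. Field names are
# shorthand for this sketch, not part of the framework's terminology.

from dataclasses import dataclass


@dataclass
class Mitigation:
    name: str
    stages: list[str]   # pipeline stage(s) the lever acts on
    lever: str          # what the lever actually does
    governed_by: str    # who controls the lever
    preventative: bool  # acts before harm occurs?
    curative: bool      # acts after harm occurs?
    targets: str        # "good actors", "malicious actors", or both


watermarking = Mitigation(
    name="Hidden watermarks in synthesized content",
    stages=["creation", "distribution"],  # added at creation, used downstream
    lever="Mark synthetic outputs so downstream systems can detect them",
    governed_by="Creators of synthetic media tools (and app stores, platforms)",
    preventative=True,
    curative=False,
    targets="good actors",  # relies on cooperative tool makers
)
```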

An Example

Putting this all together, consider the example of the mitigations I outline in this piece on building responsible synthetic media tools. These mitigations, such as putting visible disclosures and hidden watermarks in synthesized content, would directly impact creation. This is a lever that would make it more difficult for content creators to create synthetic media that could be used to cause harm. It would be implemented (and therefore “governed”) by the creators of synthetic media tools.

An infographic summary of our MIT Tech Review article on how to create responsible synthetic media tools.

A further lever that could make these controls more ubiquitous would be if the Apple and Google app stores required all synthetic media tools to implement them. This would then have those companies impacting creation and partially governing how synthetic media can be created on their platforms. Finally, a company like Facebook could also take advantage of the existence of hidden watermarks to treat synthesized content differently, impacting distribution (and governing their own influence; though they may be able to offer some of that governing power to independent bodies, as they do with third-party fact checkers).
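
To make that creation-to-distribution hand-off concrete, here is a toy sketch of how a hidden watermark could be attached by a creation tool and checked by a distributor. This is a purely hypothetical scheme (a shared-secret tag, chosen for brevity) and not any tool’s or platform’s actual mechanism; real watermarks are typically embedded in the media itself and designed to survive re-encoding.

```python
# Toy sketch of a hidden-watermark hand-off (hypothetical scheme, not any
# real tool's or platform's mechanism). A creation tool attaches a keyed tag;
# a distributor checks the tag and decides how to label the content.

import hashlib
import hmac

SHARED_KEY = b"demo-key"  # stand-in for whatever trust mechanism is actually used


def tag_as_synthetic(content: bytes) -> dict:
    """Creation side: package content with a 'synthetic' marker and a keyed MAC."""
    mac = hmac.new(SHARED_KEY, b"synthetic:" + content, hashlib.sha256).hexdigest()
    return {"content": content, "synthetic": True, "mac": mac}


def label_for_distribution(item: dict) -> str:
    """Distribution side: verify the marker before deciding how to treat the item."""
    expected = hmac.new(
        SHARED_KEY, b"synthetic:" + item["content"], hashlib.sha256
    ).hexdigest()
    if item.get("synthetic") and hmac.compare_digest(item.get("mac", ""), expected):
        return "label as synthetic media; adjust ranking per policy"
    return "no verified watermark; treat as ordinary content"


if __name__ == "__main__":
    item = tag_as_synthetic(b"...frames of a generated video...")
    print(label_for_distribution(item))
```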

All of these restrictions are limited in impact — for example, malicious actors might still find tools that don’t have any controls. But with the right incentives, those tools are likely to be harder to access and inferior in quality, as they may be more difficult to monetize if they are not available on popular platforms. No mitigation to this challenge is a silver bullet. We need defense-in-depth.

If we want to make sense of the problems we are facing, we need shared models and language that can augment our thinking. In an ideal world, every conversation about mitigations might be centered on a map of problems and a systems diagram of underlying forces and constraints. In lieu of that ideal, it is at least critical to map the complex world of mitigations in order to support best practice sharing, funding allocation, tradeoff analysis, and collaboration. The Knowledge Pipeline framework aims to be a step toward taming some of that complexity and improving our understanding and communication.

If you found this framework valuable, I also recommend checking out Claire Wardle’s taxonomy of misinformation and the Washington Post’s taxonomy of manipulated video. Stay tuned for updates; you can find me on Twitter @metaviv, via this mailing list, or by email at av@aviv.me.
