‘Contextualization Engines’ can fight misinformation without censorship

Search engines are nice. But we can do far better with modern AI.

What a contextualization engine might look like in practice

Option A: Without a contextualization engine

Option B: With a (very basic) contextualization engine

  1. The contextualization engine compares the content being shared with content from authoritative sources and returns articles or other media results that are sufficiently related. This might be presented in a search-result-style interface, through a chatbot, or in a hybrid of the two. (The more advanced approaches described below don’t require pre-filtering of sources; this is just the minimal system that someone might find useful.)
  2. If it finds no sufficiently close matches, it warns the user and can suggest the most likely relevant keywords, which the user can then run through a more traditional search if they would like (with another tap).
  3. It adds the media object to a triage queue for relevant organizations to potentially evaluate (e.g. fact-checkers).
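The three steps above can be sketched in code. This is a minimal illustration, not a real implementation: the class, threshold, and word-overlap similarity are all assumptions for the sketch, where a production system would use semantic embeddings over whole media objects and a curated whitelist of sources.

```python
from dataclasses import dataclass, field

def similarity(a: str, b: str) -> float:
    """Jaccard overlap of word sets -- a crude placeholder for a real
    semantic-embedding comparison of complete media objects."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if (wa | wb) else 0.0

@dataclass
class ContextualizationEngine:
    # (title, text) pairs drawn from whitelisted authoritative sources.
    sources: list
    # Below this similarity, treat the topic as a possible data void.
    threshold: float = 0.3
    # Items queued for human fact-checkers to potentially evaluate.
    triage_queue: list = field(default_factory=list)

    def contextualize(self, media_object: str) -> dict:
        # Step 1: compare the shared content against authoritative sources.
        scored = sorted(
            ((similarity(media_object, text), title)
             for title, text in self.sources),
            reverse=True,
        )
        matches = [(s, t) for s, t in scored if s >= self.threshold]
        if matches:
            return {"status": "matched", "results": matches}
        # Step 2: no close match -- warn and suggest search keywords.
        keywords = sorted(set(media_object.lower().split()),
                          key=len, reverse=True)[:5]
        # Step 3: queue the media object for fact-checker triage.
        self.triage_queue.append(media_object)
        return {"status": "data_void", "suggested_keywords": keywords}
```

A forwarded message that overlaps an existing fact check would come back `"matched"` with the relevant articles; anything the engine can’t place would come back `"data_void"`, land in the triage queue, and surface fallback keywords for a manual search.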
This magnifying glass feature on WhatsApp was a valuable step forward, but it often doesn’t work in practice. It makes it easier to look up messages on Google, yet keyword search fails for long messages, images, videos, and audio, and in data voids. We need tools designed specifically for contextualization.

Why even the basic contextualization engine helps

  • Analyzes complete ‘media objects’ — e.g. an entire chain message, or entire fact-check articles — to see how likely they are to be related to one another.
  • Focuses on authoritative sources — likely initially using whitelist certification through recognized 3rd parties such as the International Fact-Checking Network (IFCN), First Draft, News Guard, standards organizations, etc.
  • Warns about data voids — lets the user know if the system can’t find good information on the topic.
  • Supports the people doing deeper investigations — provides the human fact-checkers and other organizations with information about what is important to explore — and potentially revenue from web traffic in ways that are directly aligned with the users’ goals.

Contextualization systems can be even more helpful

  • Stop (SIFT): The contextualization engine flow can provide educational support for executing other aspects of media literacy. For example, it can help remind users to pause and notice their emotional reactions to the content. It might even provide tips on how to bring up the potential misinformation in a delicate way in a group chat or comment thread.
  • Investigate the source (SIFT): If the contextualization system already has information on why a source might be considered authoritative, it can provide that information to the user — showing why they might trust it (e.g. this source is certified by IFCN).
  • Find better coverage (SIFT): Building on the ‘analyze’ component described earlier, a more fully featured contextualization engine would not only auto-generate audio and video transcripts from media, but also automatically interpret any imagery and captions in order to better understand the content and find contextually relevant sources.
  • Trace claims, quotes, and media to the original context (SIFT): Finally, the contextualization engine can do the tracing for the user. It can essentially scour the web for the original context of any content.
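One minimal sketch of that tracing step, assuming a toy index mapping URLs to (publication date, text): find near-duplicate copies of the content and return the earliest as a rough proxy for the original context. The function name, index shape, and lexical similarity are all illustrative; a real system would also weigh source reliability and use perceptual hashes for images and video.

```python
from datetime import date

def word_overlap(a: str, b: str) -> float:
    # Crude lexical similarity; real systems would use embeddings
    # for text and perceptual hashes for images and video.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if (wa | wb) else 0.0

def trace_to_original(media_text: str, index: dict, min_sim: float = 0.5):
    """Find indexed copies of the content and return the earliest one,
    a rough proxy for its original context. `index` maps
    URL -> (publication_date, text); all names here are illustrative."""
    copies = [(pub_date, url)
              for url, (pub_date, text) in index.items()
              if word_overlap(media_text, text) >= min_sim]
    # Tuples compare by date first, so min() yields the earliest copy.
    return min(copies) if copies else None
```

The earliest-copy heuristic captures the common case of an old photo or quote recirculated out of context, where the original publication predates the misleading reuse.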

The potential — and risks — of artificial intelligence advances

How can we make this happen?

Aviv Ovadya

Founder of the Thoughtful Technology Project & GMF non-res fellow. Prev Tow fellow & Chief Technologist @ Center for Social Media Responsibility. av@aviv.me