Countermeasures

Significant efforts are underway across the industry to counter the effects of disinformation. These countermeasures take a wide range of forms.


At Jigsaw, we have been working with teams of researchers across Google and academia to test new technology for detecting manipulated media. Learn more about two of our approaches below.

How Detectors Work

When an image is manipulated, for example by merging two images together or deleting something from the background, the process may leave behind traces of that manipulation. Detectors work by training algorithms, often with machine learning, to identify these traces and indicate where and how an image has been manipulated.

“Splicebuster,” a detector from the University of Naples Federico II, compares “noise” patterns in different parts of an image to determine whether more than one camera (make and model) was used to create it.
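To make the idea concrete, here is a minimal sketch of the general principle behind noise-based detectors: compute a high-pass noise residual, then flag blocks whose noise statistics differ sharply from the rest of the image. This is only an illustration of the concept, not Splicebuster’s actual algorithm, and the file path is a placeholder.

```python
# Toy noise-inconsistency check (illustrative only, not Splicebuster).
# Assumes a grayscale-convertible image at the hypothetical path "photo.jpg".
import numpy as np
from PIL import Image

def noise_residual(gray):
    """High-pass residual: the image minus a 3x3 local average,
    which roughly isolates sensor/compression noise."""
    h, w = gray.shape
    padded = np.pad(gray, 1, mode="edge")
    blurred = sum(
        padded[dy:dy + h, dx:dx + w]
        for dy in range(3) for dx in range(3)
    ) / 9.0
    return gray - blurred

def block_noise_stats(residual, block=64):
    """Standard deviation of the residual in each block; spliced regions
    often show noise statistics that differ from the rest of the image."""
    h, w = residual.shape
    stats = []
    for y in range(0, h - block + 1, block):
        row = []
        for x in range(0, w - block + 1, block):
            row.append(residual[y:y + block, x:x + block].std())
        stats.append(row)
    return np.array(stats)

gray = np.asarray(Image.open("photo.jpg").convert("L"), dtype=np.float64)
stats = block_noise_stats(noise_residual(gray))
# Flag blocks whose noise level deviates strongly from the image-wide median.
z = np.abs(stats - np.median(stats)) / (stats.std() + 1e-9)
print("Suspicious blocks (row, col):", list(zip(*np.where(z > 2.5))))
```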

Jigsaw is working on an experimental technology called Assembler that brings together detectors from across the academic community with the goal of making it easier to identify image manipulations.

(Source: Cozzolino et al.)
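Assembler’s internals are not described here, but the sketch below illustrates, in very simplified form, the kind of aggregation an ensemble of detectors can perform: each detector produces a per-pixel “manipulation” heatmap, and the maps are averaged into a consensus view so that regions flagged by several detectors stand out. The detector functions are hypothetical stand-ins.

```python
# Hedged sketch of combining detector heatmaps; not Assembler's actual design.
import numpy as np

def splice_detector(image):      # hypothetical noise-inconsistency detector
    return np.random.rand(*image.shape[:2])

def copy_move_detector(image):   # hypothetical duplicated-region detector
    return np.random.rand(*image.shape[:2])

DETECTORS = [splice_detector, copy_move_detector]

def consensus_heatmap(image, detectors=DETECTORS):
    """Average per-pixel scores (0..1) from several detectors."""
    maps = np.stack([d(image) for d in detectors])
    return maps.mean(axis=0)

image = np.zeros((256, 256, 3), dtype=np.uint8)   # placeholder image
heatmap = consensus_heatmap(image)
print("Most suspicious pixel:", np.unravel_index(heatmap.argmax(), heatmap.shape))
```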


Deepfake Detection Research

Manipulated and synthetic videos, including deepfakes, are an emerging disinformation tactic that poses a threat to news organizations and society at large. To help build ways to detect deepfakes, we partnered with Google Research to create a dataset of deepfakes and made it available to the academic research community to support the development of machine learning techniques for detecting when a video has been manipulated. The Technical University of Munich and the University of Naples Federico II have incorporated this dataset into their new FaceForensics benchmark, which gives researchers a common dataset against which to measure their models. The field is moving quickly and the threat will evolve. To counter it, we plan to continue collaborating with the research and technology community to improve our own technology and expand our learnings.

(Reference: Google Research)
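As a rough illustration of how researchers might use such a dataset, the sketch below fine-tunes an off-the-shelf image classifier (PyTorch and torchvision) to label extracted video frames as real or fake. It is a minimal example under assumed folder paths ("frames/real", "frames/fake"), not the FaceForensics baselines or any of Google’s detection models.

```python
# Minimal sketch: fine-tune a small CNN as a binary real/fake frame classifier.
# Assumes frames were already extracted into "frames/real" and "frames/fake".
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("frames", transform=transform)  # classes: fake, real
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes: real vs. fake

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for frames, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(frames), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last-batch loss {loss.item():.3f}")
```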


How do we stop inauthentic behavior?

Policy

Technology companies have adopted policies that prohibit many deceptive behaviors, such as misrepresenting identity, and enforce these policies through investigative processes. For example, Facebook, YouTube, and Twitter have all taken enforcement actions—including account suspension and removal—against coordinated influence operations.

(Sources: Facebook, Google, Twitter)

What could help debunk fake news and false claims?

Labeling

Labeling false claims may reduce their credibility and virality. But many disinformants avoid spreading debunked claims, instead favoring content that confirms people’s existing beliefs. In addition to fact checking, there are other ways to give people more context about the information they see online. For example, YouTube surfaces funding information panels on channels owned by news publishers that receive some level of public or government funding. This gives users additional context to help them better understand the sources of the news content they choose to watch on YouTube. YouTube also surfaces contextual information from third-party sources alongside videos and search results related to topics that are prone to misinformation online, like the moon landing.

(Sources: Mena, Clayton et al., YouTube)

Fact-Checking

The International Fact-Checking Network and its member organizations around the world play an important role in journalism, and research continues to explore how fact-checking can be made more effective. Yet a meta-analysis of the academic literature suggests that fact-checking is far from a panacea and that its effects may be small, especially when study designs resemble real-world scenarios. So what else might be done to supplement the role fact-checking plays?

(Source: Walter et al.)

How do we build societal resilience?

Digital Media Literacy

Not everyone possesses the skills and competencies, referred to collectively as “digital media literacy,” needed to successfully navigate a fragmented and complex information ecosystem. Efforts are underway to make digital media literacy training more accessible, useful, and engaging. For example, the Bad News Game is an experiential learning tool designed by Drog that attempts to teach players the common tactics of disinformation campaigns. Likewise, Google partnered with online safety and media literacy experts to create the “Be Internet Awesome” program, which helps elementary school students learn about digital safety and media literacy, such as how to check whether a news source is credible and how to avoid online scams. Governments around the world are boosting funding for digital literacy campaigns.

(Sources: Google, Roozenbeek et al.)

Inoculation

False claims may be difficult to debunk, but preemptively exposing people to disinformation, in a controlled way, may build resistance to false beliefs, similar to how vaccines “inoculate” against disease. Decades of research across diverse disciplines have demonstrated the efficacy of preemptively warning people about false claims in advertising, animal rights, the environment, politics, and public discourse. Recent inoculation studies have counteracted the negative effects of fake news on public knowledge, decreased belief in conspiracy theories, and improved acceptance of scientific consensus.

(Sources: Braddock, Bonetto et al., Cook et al., van der Linden et al.)

Can we detect doctored images and video?

Detectors

Technology can help humans more quickly detect manipulation across a variety of media. As the technology to deceive becomes more sophisticated, better tools are needed to identify manipulations more efficiently and accurately. Researchers across the academic community have developed a variety of detectors for different types of manipulations, and a growing number of commercial tools aim to identify manipulated media. At Jigsaw, we are working on new approaches to put detector technology in the hands of fact-checkers and piloting new technology to drive the science forward.
