Countermeasures

Machine learning can help reduce toxicity, improving online conversation

One of the most advanced fields of machine learning is natural language processing. Natural language processing is the ability of a machine to take in human speech, analyze its meaning and make predictions based on that analysis. This goes beyond transcription, as machines using natural language processing are able to mimic understanding of the words as a human might. The applications for this technology today are extensive: if you’ve ever talked to the Google Assistant, searched for directions, used a chatbot or translated from one language to another, you’ve seen it at work.

“AI holds the potential for some of the biggest advances we are going to see.”

-Sundar Pichai

When it comes to toxicity, advancements in natural language processing and artificial intelligence offer a promising augmentation to human intervention. By using machines to spot toxic comments for humans to review, publishers and platforms can more efficiently address, or even prevent, this kind of behavior on their digital properties. Our recommendation is always to use Perspective to support and scale human moderators, never to replace them.

Meet Perspective

Perspective is a free API that uses machine learning to spot toxic comments by scoring their perceived impact on conversation.
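To make this concrete, here is a minimal sketch of a single Perspective request in Python, assuming the requests library and an API key obtained from Google Cloud. The endpoint and field names follow Perspective’s public documentation; the helper name toxicity_score and the placeholder key are ours.

import requests

# Minimal sketch of one Perspective API request (key and helper name are placeholders).
API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
API_KEY = "YOUR_API_KEY"  # obtained from Google Cloud; keep it secret

def toxicity_score(text: str) -> float:
    """Return the perceived-toxicity score (0.0 to 1.0) for one comment."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(API_URL, params={"key": API_KEY}, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

print(toxicity_score("You make a fair point, thanks for sharing."))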

We built Perspective using models that we developed in partnership with Google’s Counter Abuse Technology team. The training process is as follows (a conceptual sketch of the full loop appears after these steps):

Gather data from the internet

The team gathered millions of comments. A group of content moderators and annotators labelled each comment according to whether they felt it was toxic.

Train and test the models

Models were trained on this dataset and learned to predict toxicity based on the examples in the dataset. The models were tested in a variety of scenarios to validate that they learned the right associations, and then were released for use with publishers and platforms.

Use feedback to improve

Publishers and platforms using the models provide feedback and expand the dataset. Models are frequently retrained to make improvements and keep them up-to-date.
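The production models are far more sophisticated than anything shown here, but the gather-and-label, train-and-test, and retrain-on-feedback loop above can be illustrated with a toy text classifier. Everything in this sketch is a stand-in: the four example comments, the labels, and the scikit-learn pipeline are illustrative only, not how Perspective’s models are actually built.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline

# Toy labelled data standing in for millions of annotated comments (1 = judged toxic).
comments = [
    "You make a fair point, thanks for sharing.",
    "What a thoughtful analysis.",
    "You are an idiot and everyone hates you.",
    "Shut up, nobody cares what you think.",
]
labels = [0, 0, 1, 1]
# Train on part of the data and hold out the rest to validate what was learned.
X_train, X_test, y_train, y_test = train_test_split(
    comments, labels, test_size=0.5, random_state=0, stratify=labels
)
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(X_train, y_train)
# Evaluate on held-out comments; in production, newly labelled feedback would be
# folded into the dataset and the model retrained periodically.
print("held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))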


Empowering publishers to create more space for dialogue

Perspective is designed to be flexible so that different kinds of publishers and platforms can use the technology in the way that best fits their needs. In many cases, that means helping them moderate forums more efficiently by automatically flagging toxic comments and alerting human moderators to potential violations of community guidelines.

In other cases, platforms are using Perspective to give commenters real-time feedback on the perceived toxicity of what they are about to post. A message as simple as “This comment may violate community guidelines” can have a meaningful impact on the language that people decide to use online.
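A publisher wiring up both of these uses might sit them behind simple score thresholds: hold the highest-scoring comments for human review and gently warn authors of borderline ones. The flow below is a hypothetical sketch; the thresholds are placeholders each platform would tune, and toxicity_score is the illustrative helper from the earlier example.

# Hypothetical moderation flow; thresholds are placeholders a platform would tune.
REVIEW_THRESHOLD = 0.9  # queue for human moderators
NUDGE_THRESHOLD = 0.7   # show the author a gentle warning before posting

def handle_comment(text: str) -> str:
    score = toxicity_score(text)  # illustrative helper from the earlier sketch
    if score >= REVIEW_THRESHOLD:
        return "held_for_review"  # alert human moderators; never auto-delete
    if score >= NUDGE_THRESHOLD:
        return "warn_author"  # e.g. “This comment may violate community guidelines.”
    return "publish"

Note that even the highest-scoring comments are routed to people, in keeping with the recommendation to support moderators rather than replace them.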

Perspective also powers a valuable tool for individuals called Tune, which allows readers to choose the level of toxicity they’ll see in comments across the internet.

“Working with Perspective has enabled us to encourage positive interactions across hundreds of dynamic and engaged communities and tens of millions of users. The solutions are elegant and frictionless, supporting our vision for higher quality engagement with less toxicity.”

-Nadav Shoval, CEO & Co-Founder, OpenWeb

Perspective processes over 430,945,748 requests every day.

Perspective has a demonstrated ability to reduce toxicity on a diverse range of platforms, from major publishers like The New York Times and The Wall Street Journal to small newspapers like the Southeast Missourian, to online gaming platforms like FACEIT. In each case, data suggests that using Perspective can measurably increase participation and reduce toxicity, all while easing the burden on the human moderators these platforms rely on.

Machine learning models are subject to human biases

Machine learning can measurably reduce toxicity online, but the technology is far from perfect. Perspective still makes mistakes and, just like any machine learning model, is subject to the biases of the world that created it.

What is algorithmic bias?

Algorithmic bias is the phenomenon of a system producing results that are systematically prejudiced. Humans and the real world have biases, and when we build models from real-world data, we risk propagating or amplifying that bias. When those flawed assumptions are built into a model, its output is inherently flawed, too.

Understanding and addressing bias in Perspective

After the initial launch of Perspective API in 2017, users discovered a positive correlation between toxicity scores and identity terms related to race or sexual orientation. For example, the phrase "I am a gay Black woman" received a high toxicity score. The identity terms in this phrase are not being used pejoratively, so the example was classified incorrectly.

The source of the error was the training data itself—the training set did not contain sufficient examples of nontoxic comments containing identity terms for the model to learn that the terms themselves were neutral. This was because the vast majority of usage of these words in online forums is toxic—the model was truly reflecting the state of the world. But the context of that usage matters.
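One rough way to surface this kind of skew is to score the same neutral template with different identity terms and compare the results: non-pejorative sentences that differ only in the identity term should score similarly. The sketch below is purely illustrative; the template, the terms, and the toxicity_score helper are our own stand-ins, not Perspective’s evaluation methodology.

# Illustrative bias probe: large score gaps between these non-pejorative
# sentences would point to the training-data skew described above.
# toxicity_score is the hypothetical helper sketched earlier.
TEMPLATE = "I am a {} person."
IDENTITY_TERMS = ["gay", "straight", "Black", "white", "Muslim", "Christian"]
for term in IDENTITY_TERMS:
    sentence = TEMPLATE.format(term)
    print(f"{sentence!r}: {toxicity_score(sentence):.2f}")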

Challenges like this illuminate opportunities to improve machine learning. In fact, mitigating bias in data and algorithms is a primary area of ongoing research for Perspective API, and the problem is not unique to Perspective: further research is needed to make the role that AI and ML play in content moderation more transparent. Many of our partners are committed to exploring ways to mitigate bias in machine learning.

“We’re committed to constantly improving our models and to having open, transparent conversations about model performance with those impacted by our models.”

-Lucy Vasserman, Software Engineer at Jigsaw

Building Perspective is an ongoing process of discovery and improvement. Our hope is that being open and sharing our research and experiences will help the whole industry improve together as we work to create less biased, fairer models for everyone.

See Perspective in action