Case Studies
Perspective is Reducing Toxicity in the Real World
The New York Times: Moderator
To create a more scalable, efficient and healthy review process, we partnered with The New York Times to build Moderator: a tool that leverages Perspective to prioritize comments for human moderation and approve those most likely to pass a manual review.
To ensure that their properties remain dynamic places for conversation and exchange, The New York Times community desk used to review every reader submission by hand. But faced with a growing number of comments, many of which seemed increasingly vitriolic, they asked whether technology could offer a solution. Working together, we conceived and built Moderator.
Moderator is a departure from the most common approach, which is to review each submission in the order it was received. Instead, Moderator scores each comment based on the likelihood that reviewers at the Times would reject it, and surfaces its prediction of why that comment might be rejected (e.g., inflammatory or insubstantial). Over time, we periodically retrain and update the models to improve performance, helping the Times community desk maintain a vibrant and thoughtful exchange for all readers to enjoy.
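To make the scoring step concrete, here is a minimal sketch against Perspective's public comments:analyze endpoint. It is not the Times' production code: the API key and sample comments are placeholders, and the experimental attributes (INFLAMMATORY, UNSUBSTANTIAL) are included only as examples of the "reason" signals Moderator surfaces alongside the overall toxicity score.

```python
import requests

# Placeholder key; a real key is obtained through Google Cloud.
API_KEY = "YOUR_PERSPECTIVE_API_KEY"
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def score_comment(text: str) -> dict:
    """Ask Perspective for toxicity-style scores on a single comment."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        # TOXICITY is the core attribute; INFLAMMATORY and UNSUBSTANTIAL are
        # experimental attributes of the kind Moderator surfaces as "reasons".
        "requestedAttributes": {
            "TOXICITY": {},
            "INFLAMMATORY": {},
            "UNSUBSTANTIAL": {},
        },
    }
    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()
    scores = response.json()["attributeScores"]
    return {name: attr["summaryScore"]["value"] for name, attr in scores.items()}

# Hypothetical review queue: score every submission once, then review the
# highest-risk comments first instead of in arrival order.
queue = [
    "Great reporting, thank you for covering this.",
    "Only an idiot would believe this garbage.",
]
scored = sorted(((score_comment(c), c) for c in queue),
                key=lambda item: item[0]["TOXICITY"], reverse=True)
for scores, comment in scored:
    print(f"{scores['TOXICITY']:.2f}  {comment}")
```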
Coral by Vox Media
A significant portion of toxicity on the Internet comes from users who don’t have a history of toxicity. This leads moderators to wonder whether they can help these users make healthier contributions by giving them an extra moment to consider their impact.
Vox Media’s Coral project helps publishers build healthier communities on their websites. Coral provides authorship feedback with the help of Perspective API: its “Toxic Comments” plugin pre-screens comments for toxic tone or language and gives commenters an opportunity to change their wording before they post. If the system still detects toxicity, the comment is sent to a human moderator, who makes the final decision. Coral teamed up with McClatchy on a controlled study to measure the feature’s impact and found that 40% of users changed their comment in response to feedback.
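That feedback loop can be sketched in a few lines. The snippet below is illustrative rather than Coral's actual plugin code: the 0.8 threshold, the function names, and the stand-in scoring table are assumptions, and a real integration would obtain scores from Perspective API as in the earlier sketch.

```python
from typing import Callable, Optional

# Illustrative threshold; real deployments tune this per community.
TOXICITY_THRESHOLD = 0.8

def prescreen(text: str, toxicity: Callable[[str], float]) -> Optional[str]:
    """Return feedback for the author while they type, or None if the draft looks fine."""
    if toxicity(text) >= TOXICITY_THRESHOLD:
        return ("Your comment may violate our community guidelines. "
                "Consider rewording it before you post.")
    return None

def submit(text: str, toxicity: Callable[[str], float]) -> str:
    """Decide what happens if the author posts anyway."""
    if toxicity(text) >= TOXICITY_THRESHOLD:
        return "held_for_human_moderation"  # a moderator makes the final call
    return "published"

# Stand-in scoring function for the example; a real integration would call
# Perspective API (see the earlier sketch) instead of a lookup table.
fake_scores = {"You are an idiot.": 0.95, "I disagree with this article.": 0.05}
toxicity = lambda text: fake_scores.get(text, 0.0)

draft = "You are an idiot."
print(prescreen(draft, toxicity))  # author sees feedback and can reword
print(submit(draft, toxicity))     # posted unchanged -> held_for_human_moderation
print(submit("I disagree with this article.", toxicity))  # -> published
```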
FACEIT
With the explosive growth of online gaming platforms have come issues of toxicity and online harassment. FACEIT, one of Europe’s largest gaming platforms, wanted to find new ways for its community to engage free of harassment. Perspective API gave FACEIT the opportunity to make gamers more mindful of their behavior at scale. Perspective provides a toxicity score, which the team uses alongside two additional metrics to flag incidents and decide whether a warning or ban should be automatically issued to the user. Gamers can then appeal the decision for review by human moderators after the game. With this strategy, FACEIT uses Perspective to reduce toxicity in real time while still enabling moderators to make corrections when needed. The team also plans to use Perspective to pinpoint the exact messages that triggered each incident and provide feedback to the gamers. FACEIT has seen a 20% reduction in toxic messages since it began working with Perspective.
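FACEIT has not published its exact formula, so the following is only an illustration of how a toxicity score might be combined with other signals to trigger an automated warning or ban: the two extra metrics, the thresholds, and the escalation rule are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class MatchChatSignals:
    """Per-player chat signals for one match. Only max_toxicity maps directly to
    Perspective; the other fields are hypothetical stand-ins for FACEIT's
    unpublished 'two additional metrics'."""
    max_toxicity: float        # highest Perspective TOXICITY score in the match
    toxic_message_count: int   # hypothetical: messages above some toxicity cutoff
    prior_incidents: int       # hypothetical: previously flagged incidents on record

def decide_action(signals: MatchChatSignals) -> str:
    """Illustrative policy: flag an incident, then escalate from warning to ban."""
    incident = signals.max_toxicity >= 0.9 and signals.toxic_message_count >= 3
    if not incident:
        return "no_action"
    if signals.prior_incidents == 0:
        return "warning"        # first offence: automated warning after the match
    return "temporary_ban"      # repeat offence: automated ban, appealable to humans

print(decide_action(MatchChatSignals(0.95, 4, 0)))  # -> warning
print(decide_action(MatchChatSignals(0.95, 4, 2)))  # -> temporary_ban
print(decide_action(MatchChatSignals(0.40, 1, 0)))  # -> no_action
```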
Disqus
Disqus is the largest comment hosting platform, with more than 50 million monthly comments across almost 200 countries, 2 billion monthly unique visitors, and 17 billion monthly page views. Their mission is to help publishers succeed in an open and diverse Internet, and they are trusted by millions of publishers, websites, and blogs around the world, including ABC News, Entertainment Weekly, and Rotten Tomatoes.
Disqus was one of the first platforms to adopt Perspective for detecting and managing toxic comments, through their Toxicity Mod Filter, which uses natural language processing and machine learning to detect and tag comments in the Moderation Panel that have a high likelihood of being toxic. Moderators can use this information to prioritize their efforts and lower the negative impact of toxic content on their communities. Instead of relying solely on manual mechanisms and indicators such as word filters and user actions to identify comments for moderation, moderators can proactively identify toxic content and deal with it more effectively.
OpenWeb
OpenWeb is a leading audience engagement platform that hosts 100 million active users per month. Their clients include publishers like AOL, Huffington Post, TechCrunch, Fox News, Hearst, Refinery29, SkySports, MarketWatch, Salon, Newsweek, and many others. Their mission is to elevate conversations online: they want to inspire an open exchange of ideas across the web, with less hostility and toxicity. Their goal is to encourage healthy debate and make room for diverse opinions without suppressing free speech.
OpenWeb’s moderation platform has been using Jigsaw’s Perspective API for a little over three years, but they recently launched a “Real-Time Feedback” feature with the API at its core. The feature moderates comments in real time, using Perspective API to determine whether they violate community guidelines. Users can either edit their comment and repost it, or post it anyway and accept the outcome.
This new feature was launched after an OpenWeb study, conducted as part of a broader Jigsaw research effort, found that among commenters who edited their language, 44.7% replaced or removed offensive words and 7.6% rewrote their comment entirely.
Globalizing Perspective: New Languages
Starting in 2018, Jigsaw has been investing in scaling Perspective to new languages. This is a non-trivial effort because, while the process for training Perspective in a new language is the same as training it in English, it requires substantial datasets, i.e., a large volume of public online comments in each new language. It is not always sufficient to have machine learning models translate comments from one language to another; the models have to learn the unique characteristics of each language to be accurate and precise.
To achieve that, Jigsaw has been partnering with prominent news organizations and platforms from around the world. By analyzing millions of past public comments from their websites, much as we did with The New York Times, we have been able to train our machine learning models to spot toxicity and capture the linguistic nuances of Spanish, French, Italian, Portuguese, German, and, in the future, many other languages. Our partners include El Pais, Le Monde, Cofina, and several other publishers who understand the importance of hosting good-quality conversations on their platforms in diverse languages. Cofina integrated Perspective API into their CMS to moderate comments in their sports and financial newspapers and, as a result, were able to decrease the need for human moderation by 60%, saving their team at least two hours per day.
Explore the issue: