Unsung heroes: Moderators on the front lines of internet safety


What, you might wonder, does a content moderator actually do? Let’s begin at the beginning to answer that question.

What is content moderation?

Although the term moderation is often misconstrued, its central goal is clear: to evaluate user-generated content for its potential to harm others. In this context, moderation means preventing extreme or malicious behavior, such as exposure to graphic images and videos, fraud, or exploitation.

There are six types of content moderation:

  1. No moderation: No content oversight or intervention, leaving bad actors free to inflict harm on others
  2. Pre-moderation: Content is screened before it goes live based on predetermined guidelines
  3. Post-moderation: Content is screened after it goes live and removed if deemed inappropriate
  4. Reactive moderation: Content is only screened if other users report it
  5. Automated moderation: Content is proactively filtered and removed using AI-powered automation
  6. Distributed moderation: Inappropriate content is removed based on votes from multiple community members

Why is content moderation important to companies?

Malicious and illegal behaviors, perpetrated by bad actors, put companies at significant risk in the following ways:

  • Losing credibility and brand reputation
  • Exposing vulnerable audiences, like children, to harmful content
  • Failing to protect customers from fraudulent activity
  • Losing customers to competitors who can offer safer experiences
  • Allowing fake or imposter accounts

The critical importance of content moderation, though, goes well beyond safeguarding businesses. Users of every age deserve to have offensive and sensitive content managed and removed on their behalf.

As many third-party trust and safety experts can attest, it takes a multi-pronged approach to mitigate the widest range of risks. To maximize brand trust and user safety, content moderators must employ both proactive and preventative measures; in today's politically and socially charged online environment, a wait-and-see approach is not an option.

“The virtue of justice consists in moderation, as regulated by wisdom.” — Aristotle

Why are human content moderators so critical?

Many types of content moderation require human intervention at some point. Reactive moderation and distributed moderation may not be the best options, as harmful content is not dealt with until after users have already seen it. Post-moderation offers an alternative: AI-powered algorithms scan content for potential risk factors and alert a human moderator, who determines whether flagged images, videos, or posts are harmful and should be removed. These algorithms improve over time through machine learning.

Given the nature of the content human moderators are exposed to (including graphic violence and child sexual abuse material), it would be ideal to take them out of harm's way entirely, but that is unlikely to ever happen. Artificial means cannot replicate human understanding, comprehension, interpretation, and empathy, qualities that are crucial for maintaining integrity in communication. In fact, 90% of consumers say authenticity is important when deciding which brands they like and support (up from 86% in 2017).
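To make the hybrid workflow described above more concrete, here is a minimal, purely illustrative sketch of a post-moderation pipeline: an automated scan scores content after it goes live and escalates anything above a risk threshold to a human review queue. Every name, threshold, and scoring rule below is a hypothetical placeholder, not any vendor's actual system.

```python
# Illustrative post-moderation sketch (hypothetical names and thresholds).
# An automated scorer scans live content; high-risk items are routed to a
# human moderator for the final decision.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ContentItem:
    item_id: str
    body: str
    risk_score: float = 0.0        # filled in by the automated scan
    needs_human_review: bool = False

@dataclass
class ModerationQueue:
    threshold: float = 0.7         # assumed risk cutoff for escalation
    review_queue: List[ContentItem] = field(default_factory=list)

    def scan(self, item: ContentItem, scorer: Callable[[str], float]) -> None:
        """Run the automated risk scorer and escalate high-risk items."""
        item.risk_score = scorer(item.body)
        if item.risk_score >= self.threshold:
            item.needs_human_review = True
            self.review_queue.append(item)

def keyword_scorer(text: str) -> float:
    """Stand-in for an ML model: counts a few placeholder risk terms."""
    risky_terms = {"scam", "violence"}
    hits = sum(term in text.lower() for term in risky_terms)
    return min(1.0, hits / len(risky_terms))

if __name__ == "__main__":
    queue = ModerationQueue()
    post = ContentItem(item_id="post-123", body="Report this scam immediately")
    queue.scan(post, keyword_scorer)
    # In practice, trained human moderators would now review everything
    # sitting in queue.review_queue and make the final removal call.
    print([(p.item_id, round(p.risk_score, 2)) for p in queue.review_queue])
```

Real systems replace the keyword stand-in with trained classifiers and feed the review queue into dedicated moderation tooling, but the division of labor is the same: automation filters at scale, and humans make the judgment calls.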

While the digital age has given us advanced, intelligent tools (such as automation and AI) needed to prevent or mitigate the lion’s share of today’s risks, human content moderators are still needed to act as intermediaries, consciously putting themselves in harm’s way to protect users and brands alike.

Making the digital world a safer place

While the content moderator’s role makes the digital world a safer place for others, it exposes moderators themselves to disturbing material. They act as digital first responders, shielding innocent users, especially children, from harmful content.

Some trust and safety service providers believe a more thoughtful, user-centric approach is to treat moderation the way a parent shields a child. That mindset could (and perhaps should) become a baseline for all brands, and it is certainly what motivates the brave moderators around the world to stay the course in combating today’s online evils.

The next time you’re scrolling through your social media feed with carefree abandon, take a moment to think about more than just the content you see. Consider the unwanted content that you don’t see, and silently thank the frontline moderators for the personal sacrifices they make each day.

This content was created by Teleperformance. It was not written by the editorial staff of MIT Technology Review.
