Who Moderates the Social Media Giants? A Call to End Outsourcing


1. Introduction

‘Content moderators are the people literally holding this platform together. They are the ones keeping the platform safe.’ — A Facebook design engineer writing on an internal company message board, 2019

Picture what social media sites would look like without anyone removing the most egregious content posted by users. In short order, Facebook, Twitter, and YouTube (owned by Google) would be inundated not just by spam, but by personal bullying, neo-Nazi screeds, terrorist beheadings, and child sexual abuse. Witnessing this mayhem, most users would flee, advertisers right behind them. The mainstream social media business would grind to a halt. Content moderation—deciding what stays online and what gets taken down—is an indispensable aspect of the social media industry. Along with the communication tools and user networks the platforms provide, content moderation is one of the fundamental services social media offers—perhaps the most fundamental. Without it, the industry’s highly lucrative business model, which involves selling advertisers access to the attention of targeted groups of users, just wouldn’t work.1

“Content moderators are the people literally holding this platform together,” a Facebook design engineer reportedly said on an internal company message board during a discussion of moderator grievances in early 2019. “They are the ones keeping the platform safe. They are the people Zuck [founder and CEO Mark Zuckerberg] keeps mentioning publicly when we talk about hiring thousands of people to protect the platform.”2

And yet, the social media companies have made the striking decision to marginalize the people who do content moderation, outsourcing the vast majority of this critical function to third-party vendors—the kind of companies that run customer-service call centers and back-office billing systems. Some of these vendors operate in the U.S., others in such places as the Philippines, India, Ireland, Portugal, Spain, Germany, Latvia, and Kenya. They hire relatively low-paid labor to sit in front of computer workstations and sort acceptable content from unacceptable.

The coronavirus pandemic has shed some rare light on these arrangements. As the health crisis intensified in March 2020, Facebook, YouTube, and Twitter confronted a logistical problem: Like millions of other workers, content moderators were sent home to limit exposure to the virus. But the platforms feared that allowing content review to be done remotely from moderators’ homes could lead to security and privacy breaches. So the social media companies decided temporarily to sideline their human moderators and rely more heavily on automated screening systems to identify and remove harmful content. In normal times, these systems, powered by artificial intelligence (AI), identify and, in some cases, even eliminate certain disfavored categories of content, such as spam and nudity. Other categories, including hate speech and harassment, typically still require human discernment of context and nuance.
