3. How Social Media Companies Have Responded to Disinformation
In a sense, the major social media companies have been preparing for the 2020 election since 2017, when they began to acknowledge the Russian disinformation problem. Since then, these companies have taken a wide range of general steps to clean up their sites and harden their defenses. They have also put in place a number of measures aimed specifically at protecting election integrity. Both categories, general and election-specific, should have a bearing on 2020. To their credit, the companies are doing more: communicating more with each other, the government, and outside experts; deleting more fraudulent accounts; and mobilizing more special teams focused on election irregularities. But there's still much more to do.
General Changes

Removing Sham Accounts

“The companies are getting much better at detection and removal of fake accounts,” Dipayan Ghosh, co-director of Harvard’s Platform Accountability Project, said in an interview. Facebook has used improved AI to delete automated fake accounts by the billions: 2.19 billion in just the first three months of 2019. Most of these accounts were blocked within minutes of their creation, preventing them from doing any harm, according to the company.60 Over the past two and a half years, Facebook has also announced dozens of smaller, more targeted takedowns comprising many thousands of accounts and pages that demonstrated “coordinated inauthentic behavior.”61 In Facebook’s view, the operators of these accounts and pages worked together to deceive users about who runs them and what they’re doing. The company stresses that it punishes misleading behavior, not content, but the behavior in question often includes disseminating disinformation.

For its part, Twitter has challenged and taken down millions of fake accounts for similar reasons, and YouTube has done so as well, if to a lesser degree. Compared with the companies’ relative passivity in 2016, these actions demonstrate greater vigor. In addition to honing their AI screening algorithms, the platforms have hired thousands of additional staff reviewers and outside contractors to hunt for accounts spewing problematic content. But the impressive numbers of fake or deceptive accounts eliminated also point to the vast supply of such accounts and the near certainty that the companies aren’t catching them all, or perhaps even most of them. “Very many fake accounts are going undetected and can be used for manipulation,” according to Filippo Menczer, a professor of informatics and computer science at Indiana University who studies disinformation. These bot accounts, he added in an email exchange, “can be used to amplify the spread of misinformation, deepfakes, attacks, fear-mongering, voter suppression, or to fake support for candidates.”62
DISINFORMATION AND THE 2020 ELECTION: HOW THE SOCIAL MEDIA INDUSTRY SHOULD PREPARE