University Journal
Content in the Crosshairs
University experts examine initiatives to manage social media content
As Facebook moves to manage the hate speech and misinformation that circulate on its platform, and as calls intensify to reform the legislation that grants internet providers immunity for content on their sites, University of Miami scholars survey these and other critical issues affecting the unruly world of social media.

Sam Terilli, chair of the Department of Journalism and Media Management in the School of Communication, and John Newman, associate professor in the School of Law, offer insights on Facebook’s Oversight Board, the independent body created to rule on emblematic cases of hate speech, misinformation, and violent content posted on the platform. Neither professor is optimistic that the board is sufficiently empowered to address the core issues relating to content, and both doubt its ability to manage the avalanche of controversial content produced continuously by Facebook’s more than 3 billion users worldwide.

“Clearly, Facebook has gone to a great deal of trouble, creating an independent endowment for funding [the board], selecting very interesting people from a wide cross section, and even giving the board clear authority to make decisions on its takedowns,” Terilli remarks. “Yet providing an avenue for people who are upset when their posts are taken down is half the problem at best,” he adds.

Newman identifies four concerns that could undermine the board’s effectiveness: judge selection, case selection, judicial bias, and the board’s subject matter jurisdiction. Most concerning for Newman, whose core expertise is in antitrust regulation and competition, or the lack thereof, is Facebook’s business model.
“The [business] incentive is not just to design a great product (that’s there, too) but to design it to addict people,” he says, adding that while the board lacks authority on this issue, the Federal Trade Commission could potentially play a role from a fair-competition perspective.

Both Terilli, who practiced law for 30 years, and Newman agree there are no precedents for Facebook to follow. “There’s not anything parallel where a company like Facebook is trying to exert this degree of content moderation while also trying to retain its privileged status under Section 230,” Newman says, referencing the provision of the Communications Decency Act (CDA) that grants immunity to service providers for content posted on their sites.

The CDA was enacted in 1996, before social media platforms even existed. Today, parties ranging from individuals who have been harassed on social media to conservatives charging media bias have increasingly called for the reform or repeal of Section 230.

A. Michael Froomkin, a School of Law professor with expertise in constitutional and internet law, recognizes that there are noble reasons to protect individuals and certain groups from the mental and verbal abuse that proliferates on the platforms, but he argues that the protections for free speech outweigh the merits of repeal. “Section 230 is one of the key reasons why the internet is as useful as it is, and why the United States is the location of choice for major internet content companies,” he says.

In contrast, Terilli insists that the internet world is vastly different from when the CDA was enacted. “It was a reasonable response to the problem as it was understood at that time,” he says. “But as with any other law and fast-changing form of technology, we need to take a step back and reevaluate, not for the political reasons that have been articulated, but for reasons related to protecting people and better serving society.”