
It’s time tech companies are held responsible for dangerous content

The U.S. government has been embroiled in debates surrounding social media companies for years. Most eyes have turned to Congress’ recent grilling of TikTok CEO Shou Zi Chew before the House Energy and Commerce Committee. But in the last month, the Supreme Court has deliberated the future of Section 230 and the liability social media companies have for the content published on their sites.

Koen Rodabaugh, Correspondent



The two cases, Gonzalez v. Google, LLC and Twitter, Inc. v. Taamneh, arise from the same set of facts. In 2015, Nohemi Gonzalez was killed in one of a series of coordinated terrorist attacks in Paris. The day after the attacks, ISIS released a written statement and a YouTube video claiming responsibility.

The father of the victim sued Google, Twitter and Facebook for aiding and abetting acts of international terrorism by allowing the terrorists to use their platforms to “recruit members, plan terrorist attacks, issue terrorist threats, instill fear, and intimidate civilian populations.”

Google’s response was that YouTube, which it owns, did not publish the content; ISIS and its members did. In effect, Google claimed liability protection under Section 230 of the Communications Decency Act of 1996. Under this law, tech companies are immune from liability for the content posted by their users.

Under Section 230, internet companies can moderate what content is published on their sites but are not ultimately liable for the content itself. As Michael Barbaro described on “The Daily” podcast, tech companies get to be both a newspaper and a bookstore, but with the protections of the bookstore.

Section 230 was instrumental in fostering the growth of the early internet. With protections in place, companies could expand their reach and accessibility to the general public. However, there were no recommendation algorithms at the time promoting and curating content for users the way there are today.

In Twitter, Inc. v. Taamneh, the central question is whether Twitter’s failure to do enough to remove terrorist content from its site amounts to aiding and abetting international terrorism under the Justice Against Sponsors of Terrorism Act.

With all of this in mind, what should the future of Section 230 look like? Do internet companies have a part to play in limiting the publication of terrorist content?

I certainly don’t believe that companies like Google, Twitter and Facebook are acting with malicious intent or are intentionally aiding and abetting terrorism. But it seems ridiculous that these companies have no legal responsibility to reject terrorist content.

In Twitter, Inc. v. Taamneh, Justice Sonia Sotomayor said to the lawyer representing Twitter, Seth Waxman, “You knew that ISIS was using your platform,” to which Waxman agreed.

If a company is aware of such a user on its platform, it should be expected to remove that person, especially if they’re using the platform to further terrorist goals. Beyond the legality of such an act, it’s just basic common sense.

We hold plenty of other widely available services, such as banking, to such a standard, as Justice Elena Kagan noted. Terrorist propaganda must be diligently weeded out; failure to do so is extremely dangerous and negligent.

What seems apparent from both cases, though, is that the court is reasonably skeptical of scrapping Section 230 entirely but is open to changing its interpretation to a certain extent. As Kagan noted, Section 230 was written before today’s recommendation algorithms existed, and the Supreme Court is by no means the ultimate expert on the internet.

As with many things, the solution lies in the middle. Social media holds a unique space in the world of publication, and algorithms are useful tools for organizing and presenting pertinent information to users. However, the fact that the same algorithms serve up everything from cooking videos to terrorist propaganda clearly lends itself to problems.

Terrorists should not be making content, but more importantly, internet companies should not allow terrorists to publish on their sites. Algorithms currently feed off of human insecurities and tribalism, and failing to account for that can lead to violence, polarization and terrorist recruitment.

We can keep liability protections for social media companies while making exceptions for clearly illegal and dangerous content. If a user decides to post a defamatory video or comment, the internet company doesn’t need to be involved. However, we must draw a line at international terrorist propaganda, which internet companies should actively remove and denounce.

Retelling of Goldilocks and the Three Bears

Harrison Burstion, Staff Cartoonist

A second-year studying art and design

A Wandering Fish: Part V

Jacinto Sho Hernandez, Staff Cartoonist

A second-year studying art and design

Cloudwatching

Lucy Osborn, Staff Cartoonist

A second-year studying art and design

Season’s Pickin’s: Rhubarb

Wylie Phu, Staff Cartoonist

A second-year studying art and design

Laundry

Sophie Gabriel, Staff Cartoonist

A second-year studying art and design

Childhood Memories

Avery Szakacs, Staff Cartoonist

A second-year studying art and design

Nova and Comet: Skaterboy

Pearl Knight, Staff Cartoonist

A fourth-year studying art and design

Dog and its Reflection

Harrison Burstion, Staff Cartoonist

A second-year studying art and design