

The Dangers of Online Hate Speech

Kristin Brean

Chapman University




The Dangers of Online Hate Speech

"The world is a dangerous place to live, not because of the people who are evil, but because of the people who don't do anything about it" (Albert Einstein). Einstein's words are especially pertinent to the debate regarding the regulation of online hate speech. In the United States, most forms of online hate speech are protected by the First Amendment. This, according to those who oppose regulating online hate speech, is exactly as it should be. However, in the United States and around the world, more and more people are attempting to do something about limiting online hate speech.

According to Becker, Byers, and Jipson (2000), although debates about free speech have long been raging, conflict over the relationship between the First Amendment and hate speech did not begin until the 1920s. Now, with the development of the Internet, the hate speech debate has become even more complicated. In fact, the Anti-Defamation League (2000) asserted, "The Internet has become the new frontier in hate, ensnaring both inexperienced and frequent visitors to the World Wide Web" (p. 1). People spreading messages of hate have the ability to remain anonymous and never have to come in contact with the individuals and/or communities they are targeting. In addition, with the exception of the cost of web service, individuals and organizations do not have to pay to ensure their hateful messages reach millions of people around the world (Nemes, 2002). Becker et al. (2000) maintained the Internet is being used "to recruit, disseminate information, organize activities, appeal to the emotion of potential sympathizers, and dramatize their presence" (p. 34).

From a deontological standpoint, Kant argued (as cited in Tavani, 2011) we have a moral duty to treat one another with respect and to ensure that all people are treated with the same moral worth. In other words, we should always act in ways that will benefit all people,



regardless of race, class, gender, or any other circumstance. Consequently, we have a moral duty to do everything in our power to silence online hate speech that causes physical, psychological, and/or emotional harm. This is true even if doing so will not bring us pleasure or happiness.

Because a conversation about regulating online hate speech raises issues dealing with the First Amendment, Ross's theory of Act Deontology is perhaps more appropriate than the theory originally developed by Kant. Kant's theory, according to Tavani (2011), did not allow for decisions to be made between conflicting moral duties. As Americans, many of us feel very strongly about the importance of maintaining our First Amendment rights. However, there is also a growing number of people who feel a moral obligation to reduce the amount of hate speech on the Internet. In such a situation, we are faced with a dilemma between conflicting moral duties. According to Ross (as cited in Tavani, 2011), one needs to weigh the conflicting duties on a case-by-case basis to determine which is the overriding duty.

Using this deontological lens, my argument regarding the regulation of online hate speech is as follows:

Premise 1: Hate speech directed at individuals and communities is present on the Internet.

Premise 2: Hate speech can cause physical, psychological, and/or emotional harm to individuals and communities.

Premise 3: Although the First Amendment protects our right to speak freely, this right is not absolute.

_____________________________________________________________________

Conclusion: Online hate speech that causes physical, psychological, and/or emotional harm should be regulated and should not be protected by the First Amendment.



Hate Speech

Online hate speech comes in many different forms and originates from people and organizations with a wide range of ideologies. Consequently, it should not be surprising that there is a great deal of disagreement about what constitutes hate speech. The problem, according to Nemes (2002), is that when it comes to hate speech, "there is no universal consensus on what is harmful or unsuitable" (p. 195). Coliver (1992) (as cited in Nemes, 2002) defined hate speech as speech "which is abusive, insulting, harassing, and/or which incites to violence, hatred or discrimination" (p. 196). Later, Weintraub-Reiter (1998) defined hate speech as "offensive, racist, hate-laden speech which disparages racial, ethnic, religious, or other discrete groups, including women, lesbians, and homosexuals" (p. 145). Yet another definition, proposed by Becker et al. (2000), is "speech that inflicts emotional damage and contains inflammatory comments meant to arouse other individuals to cause severe social dislocation and damage" (p. 36).

Although there are many types of hate speech, the majority of the hate speech found online is racist in nature. In fact, according to Nemes (2002), race and ethnicity are "the most predominant target for organized groups" (p. 196). For example, a number of websites have been established by white supremacist groups to spread threatening and hateful messages directed at minority groups. In addition, Tavani (2011) stated there are extremist groups who share an anti-federal-government ideology and use the Internet to post ideas for harming and/or killing government officials. There are also people who have created websites that provide information for kidnapping children and for making bombs (Tavani, 2011).

Not only are people using the Internet to spread hateful messages, they are also taking steps to ensure unsuspecting people will view the offensive material. In order to increase the



number of hits these types of websites receive, site developers have started using deceptive metatags to lead people to their sites (Tavani, 2011). For example, a Nazi website may use keyword metatags such as Holocaust or Jewish. Consequently, someone who is researching the Holocaust and enters either of these search terms into a search engine is likely to be led, unknowingly, to the Nazi website.

The First Amendment

Few would deny that hate speech has negative consequences. However, the First Amendment of the United States Constitution states, "Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances." Consequently, according to the Anti-Defamation League, "Internet speech that is merely critical, annoying, offensive or demeaning enjoys constitutional protection" (Frequently Asked Questions, 2001). In other words, many forms of online hate speech are protected by the First Amendment.

Nemes (2002) argued that although the freedom of speech guaranteed by the First Amendment is important and "should not be lightly undermined" (p. 197), we should not ignore the possibility that hate speech may lead to racially charged violence and may also result in an increase of racist attitudes. Consequently, if society allows hate speech to exist unregulated, it "may cause civil society to lose its civility, and reduce the very value it was trying to preserve" (Nemes, 2002, p. 197).

Although many forms of hate speech benefit from First Amendment protection, not all hate speech is protected. For example, speech that incites violence or causes physical harm to others is not protected by the First Amendment. A well-known example of this limitation on free



speech is the idea that it is illegal to shout "fire" in a crowded public venue (Tavani, 2011). In addition, "Speech that expresses a clear intent to commit or threaten harm is not protected speech" (Becker et al., 2000, p. 39). Lastly, "Persistent or pernicious harassment aimed at a specific individual is not protected if it inflicts or intends to inflict emotional or physical harm" (Anti-Defamation League, Frequently Asked Questions).

Despite these restrictions on free speech, the reality is that many forms of hate speech are protected in the United States. In fact, in many situations, the courts shy away from pursuing cases that deal with hate speech because of the First Amendment. In other words, the U.S. "government is often unwilling or unable to pursue legal action unless the speech in question falls squarely within an unprotected category" (Henry, 2009, p. 240). This is because it is difficult to prove that the hate speech in question posed a true threat or that it caused harm to an individual or to a group of people; this is especially true if the harm was emotional or psychological.

Challenges to Hate Speech in U.S. Courts

Although the courts in the U.S. often cite the First Amendment when they hand down decisions in support of an individual's right to post online hate speech, there have been exceptions. One example of the federal government stepping in to prosecute a website operator for hate speech was the case of Department of Housing and Urban Development v. Wilson (2000). In this case, Ryan Wilson, the leader of a white supremacist group, was accused of using his website to threaten and intimidate Bonnie Jouhari and her teenage daughter. Jouhari attempted to get the Federal Bureau of Investigation (FBI) involved in her case, but because of concerns related to free speech, the FBI would not get involved. Ultimately, because Jouhari worked for the Department of Housing and Urban Development (HUD), the agency stepped in to



pursue the case. According to Henry (2009), if Jouhari had not worked for HUD, the department "would have been unable to pursue the case and no other government agency was willing to do so" (p. 239).

In another court case, Planned Parenthood v. ACLA (1999, 2001, 2002), Neal Horsley and the American Coalition of Life Activists (ACLA) were sued by Planned Parenthood for posting personal information about abortion providers, maintaining a hit list of abortion doctors that was updated when a doctor was murdered, and including surveillance tape of the comings and goings of abortion providers on their anti-abortion website called "The Nuremberg Files." Ultimately, a jury decided the site posed a true threat to the doctors and to their families, and the court ordered the ACLA to pay the plaintiffs $100 million. In addition, the court mandated the ACLA close the site.

In addition to successful civil suits, there have also been successful criminal cases in which the defendant was ordered to serve jail time for spreading hate speech on the Internet. For example, in 1996 Richard Machado was sentenced to one year in prison for sending hateful email messages to more than 50 Asian students at the University of California, Irvine (UCI). In the emails, Machado wrote he would "make it my life carreer (sic) to find and kill everyone one (sic) of you personally. OK?????? That's how determined I am … Get the f*ck out" (cited in Henry, 2009, p. 238). Three years later, in 1999, Kingman Quon, a college student, pled guilty to sending hateful and threatening emails to Hispanic people at California State University, Los Angeles (CSULA) (Henry, 2009).

International Response to Online Hate Speech

Although there have been a handful of cases in which the U.S. courts ruled certain instances of hate speech were not protected by the First Amendment, as stated previously, these are the exception and not the rule. However, this is not the case in many other countries. In fact, there



is a push in many European nations to develop laws and policies in an effort to eliminate hate speech from the Internet. For example, in the Additional Protocol to the Cybercrime Convention drafted in 2002, the Council of Europe called for the criminalization of any speech that "advocates, promotes, incites (or is likely to incite) acts of violence, hatred or discrimination against any individual or group of individuals, based on race, colour, (religion, descent, nationality,) national or ethnic origin" (p. 2). The Council suggested anyone convicted of using the Internet to spread these types of speech should serve two years in prison (Nemes, 2002). Although the United States is a signatory to the underlying treaty, because of First Amendment concerns, the U.S. did not agree to ratify this particular provision (Henry, 2009).

There are also individual countries that have developed legislation regarding hate speech within their borders. For example, in Germany, expressing hatred of a minority group is a criminal offense punishable with up to five years in jail. Interestingly, this law applies to any speech that takes place in Germany, even if the speech originated outside of the country. However, getting other countries to extradite the accused has historically proved difficult (Henry, 2009). Also, Tavani (2011) stated, "In France, it is illegal to sell anything that incites hate and racism" (p. 289). Similar to Germany's, this law applies to items that originate in another country and are sold online to a person living in France. As a result, Yahoo's French website is no longer able to grant French citizens access to Nazi-related items on its site. That is not to say, however, that someone living in France could not use an IP address based in another country to make his or her purchases.

Because most hate speech is protected in the United States, countries attempting to regulate online hate speech have not been very successful. This is because people in other



countries can easily run their sites from IP addresses within the United States (Henry, 2009). Consequently, the majority of hate sites are based in the United States (Nemes, 2002).

NGO Response to Online Hate Speech

Tavani (2011) suggested government control is not the only way to regulate hate speech. Rather, he pointed out that social and market pressures have, at times, been effective in curbing offensive speech typically protected by the First Amendment. This was especially true after the events of 9/11. In addition, the work of non-governmental organizations such as the Southern Poverty Law Center (SPLC) and the Anti-Defamation League (ADL) has shown that social and market pressures can be effective tools for monitoring and eliminating online hate speech.

The SPLC is a non-governmental organization that monitors the Internet to locate and investigate online hate speech and to educate the public about the ways hate groups are using the Internet to spread their messages. The SPLC communicates its findings through online publications, blogs, and electronic newsletters. In addition, the SPLC has developed an online map that shows where various hate groups are located and has also produced lesson plans and materials for teachers to use in the classroom. Lastly, the SPLC works closely with law enforcement, not only providing information about hate groups but also running training programs for law enforcement officials (Henry, 2009).

Another non-governmental organization that works to limit hate speech on the Internet is the ADL. The ADL, however, takes a different approach than the SPLC in that it works closely with Internet Service Providers (ISPs), locating hateful speech, bringing its presence to the attention of the ISP, and requesting the ISP remove the material. Like the SPLC, the ADL provides educational materials for teachers to use in the classroom and also works with law enforcement officials by providing training, newsletters, and other materials.



Limitations to NGO Response

Although the ADL has experienced success through teaming up with some ISPs, this approach works only if the ISP has a terms-of-service agreement that prohibits hate speech. According to Henry (2009), there are ISPs, such as Stormfront, that have actually made hosting racist content and hate speech a large part of their mission. In addition, the choice to remove the offensive content is in the hands of the ISP; ISPs cannot be forced to delete the material because they are not editors and therefore are not liable for the content their users post (Henry, 2009). However, Tavani (2011) pointed out that if an ISP does remove offensive content, it may face unintended consequences: the more content it removes, the more it takes on an editorial role. If their actions make them appear to be editors, ISPs could lose the protection they currently possess. In addition, Henry pointed out that even if an ISP is willing to delete hateful content, the Internet is so vast that it is impossible to find every instance of hate speech on the web.

Another limitation, according to Henry (2009), is that the SPLC's "online work is reactive and not proactive" (p. 246). The same can be said for the ADL's efforts to work with ISPs, because the hate speech needs to be present in order for it to be reported and removed from the Internet. Also, Tavani (2011) raised the issue that providing information to the public about the location and nature of hate groups "provides an easy way for consumers of hate speech to locate and visit particular hate sites that serve their interests" (p. 290).

Lastly, groups like the SPLC, which monitor the websites of hate groups, do not have access to many of the spaces these groups are beginning to utilize. For example, a number of hate groups have developed password-protected websites and have started posting their messages on private listservs and bulletin boards. Although these private spaces limit the number of



people who are exposed to the messages, they also create a space where hateful messages can be exchanged without any sort of rebuttal from the public.

Protecting Hate Speech

Those who are opposed to regulating online hate speech cite a variety of reasons for their opposition. The American Civil Liberties Union (ACLU) has been involved in a number of court cases because it believes regulating online hate speech infringes upon our First Amendment rights (Becker et al., 2000). Others who are opposed to regulating hate speech believe that just as the First Amendment protects people who spread messages of hate on the Internet, so too does it protect citizens who disagree with the hateful or derogatory comments they find on the Internet. In other words, as Henry (2009) argued, "Once hate speech is displayed publicly, it can be answered by speech that reveals its falsity and offensiveness, and also by counter-messages that promote positive values" (p. 236). In addition, cyber-anarchists and cyber-libertarians, according to Nemes (2002), are opposed to the regulation of hate speech. Although they recognize that hate speech can cause harm, they believe "that the harm in regulating online speech is greater than the harm caused by online speech" (p. 196). This is because cyber-libertarians believe the only way to ensure creativity and further advancement of the Internet is to keep it free of government control.

Conclusion

Nemes (2002) explained that victims of online hate speech reported "feeling degraded, silenced, afraid to go out into the community as well as generally feeling a loss of self-esteem" (p. 210). Although the concept of harm may be difficult to prove in a court of law, it is not difficult to see that online hate speech hurts individuals and our society as a whole. The Internet is limitless and without borders, and many nations are attempting to regulate the amount of hate



speech that is being transmitted across the web. However, as long as the United States continues to avoid the topic due to First Amendment concerns, the rest of the world will find its hands tied. Clearly, the United States has established limitations on the First Amendment in the past; it is time for this to happen once again.

Government restrictions alone, however, are not going to be sufficient to stop online hate speech. It will take a combination of efforts to regulate online hate speech effectively. Despite the limitations of non-governmental organizations' efforts, the education they provide for law enforcement, teachers, and the general public is valuable to such an effort. In addition, members of society who oppose hate speech need to make their voices heard in order to send hate groups the message that we do not agree with their cruel, threatening, and hurtful words.



References

Anti-Defamation League (2000). Combating extremism in cyber-space: The legal issues affecting internet hate speech. New York: ADL.

Anti-Defamation League (2001). Frequently asked questions.

Banks, J. (2011). European regulation of cross-border hate speech in cyberspace: The limits of legislation. European Journal of Crime, Criminal Law and Criminal Justice, 19, 1-13.

Becker, P., Byers, B., & Jipson, A. (2000). The contentious American debate: The First Amendment and internet-based hate speech. International Review of Law, Computers & Technology, 14(1), 33-41.

Council of Europe (2002). Additional protocol to the Convention on Cybercrime, concerning the criminalisation of acts of a racist and xenophobic nature committed through computer systems.

Department of Housing and Urban Development v. Wilson (2000). HUDALJ 03-98-0692-8.

Henry, J. (2009). Beyond free speech: Novel approaches to hate on the internet in the United States. Information & Communications Technology Law, 18(2), 235-251.

Nemes, I. (2002). Regulating hate speech in cyberspace: Issues of desirability and efficacy. Information & Communications Technology Law, 11(3), 193-220.

Planned Parenthood v. ACLA (1999, 2001, 2002). Civil No. 95-1671-JO.

Tavani, H. (2011). Ethics and technology: Controversial questions and strategies for ethical computing (6th ed.). Hoboken, NJ: John Wiley & Sons.

Weintraub-Reiter, R. (1998). Note: Hate speech over the internet: A traditional constitutional analysis or a new cyber constitution? Boston Public Interest Law Journal, 8, 145.

Brean Cyberethics Final Paper  
