ONLINE CONTENT
SOCIAL MEDIA Amy-Louise Watkin and Joe Whittaker look at the interface between terrorism and the internet, and whether big tech companies should be doing more to tackle online propaganda and the rise in digital radicalisation
REGULATING TERRORIST CONTENT ONLINE: CONSIDERATIONS AND TRADE-OFFS
At first glance, the removal of terrorist content online seems like an intuitive goal. In the heyday of the Islamic State’s virtual caliphate, the group was able to spread its message far and wide. Scholars found that online platforms were being used to spread propaganda, recruit potential terrorists, and disseminate instructional material. Far-right terrorists, too, have abused these platforms to disseminate content. The world watched in horror as the Christchurch attack was livestreamed on Facebook, and multiple attackers have posted their manifestos online. Although different countries have different freedom
of speech norms enshrined in law, there has been a widespread move towards removing terrorist content from the internet, exemplified by the Christchurch Call, the UK Online Harms White Paper, and Germany’s NetzDG law. Although they are sometimes maligned for not acting quickly enough, the big tech companies have, by and large, adopted policies that help to remove terrorist content from their platforms, and have worked together to share best practices through initiatives such as the Global Internet Forum to Counter Terrorism and Tech Against Terrorism. There have been considerable successes with this approach.
ISSUE 39 | COUNTER TERROR BUSINESS MAGAZINE