ICIW 2013- The Proceedings of the 8th International Conference on Information Warfare and Security


Proceedings of the 8th International Conference on Information Warfare and Security

Regis University, Denver, Colorado, USA
25-26 March 2013

Edited by Dr. Douglas Hart, School of Computer & Information Sciences, Regis University, Denver, Colorado, USA

A conference managed by ACI, UK





Copyright The Authors, 2013. All Rights Reserved. No reproduction, copy or transmission may be made without written permission from the individual authors.

Papers have been double-blind peer reviewed before final submission to the conference. Initially, paper abstracts were read and selected by the conference panel for submission as possible papers for the conference. Many thanks to the reviewers who helped ensure the quality of the full papers.

These Conference Proceedings have been submitted to Thomson ISI for indexing. Further copies of this book and previous years' proceedings can be purchased from http://academic-bookshop.com

E-Book ISBN: 978-1-909507-11-1
E-Book ISSN: 2048-9889
Book version ISBN: 978-1-909507-09-8
Book version ISSN: 2048-9870

Published by Academic Conferences and Publishing International Limited, Reading, UK. Tel: +44-118-972-4148, www.academic-publishing.org


Contents

Preface .... iii
Committee .... iv
Biographies .... vi

Strategies for Combating Sophisticated Attacks (Chad Arnold, Jonathan Butts, and Krishnaprasad Thirunarayan) .... 1
Analysis of Programmable Logic Controller Firmware for Threat Assessment and Forensic Investigation (Zachry Basnight, Jonathan Butts, Juan Lopez and Thomas Dube) .... 9
Top-Level Goals in Reverse Engineering Executable Software (Adam Bryant, Robert Mills, Michael Grimaila and Gilbert Peterson) .... 16
An Investigation of the Current State of Mobile Device Management Within South Africa (Ivan Burke and F. Mouton) .... 24
A Taxonomy of Web Service Attacks (Ka Fai Peter Chan, Martin Olivier and Renier Pelser van Heerden) .... 34
Duqu's Dilemma: The Ambiguity Assertion and the Futility of Sanitized Cyber War (Matthew Crosston) .... 43
Hacking for the Homeland: Patriotic Hackers Versus Hacktivists (Michael Dahan) .... 51
Consequences of Diminishing Trust in Cyberspace (Dipankar Dasgupta and Denise Ferebee) .... 58
Towards a Theory of Just Cyberwar (Klaus-Gerd Giesen) .... 65
Defamation in Cyber Space: Who do you sue? (Samiksha Godara) .... 72
Identifying Tools and Technologies for Professional Offensive Cyber Operations (Tim Grant and Ronald Prins) .... 80
The Emergence of Cyber Activity as a Gateway to Human Trafficking (Virginia Greiman and Christina Bain) .... 90
Deep Routing Simulation (Barry Irwin and Alan Herbert) .... 97
Development of a South African Cybersecurity Policy Implementation Framework (Joey Jansen van Vuuren, Louise Leenen, Jackie Phahlamohlaka and Jannie Zaaiman) .... 106
Replication and Diversity for Survivability in Cyberspace: A Game Theoretic Approach (Charles Kamhoua, Kevin Kwiat, Mainak Chatterjee, Joon Park and Patrick Hurley) .... 116
Situation Management in Aviation Security – A Graph-Theoretic Approach (Rainer Koelle and Denis Kolev) .... 125
Exercising State Sovereignty in Cyberspace: An International Cyber-Order Under Construction? (Andrew Liaropoulos) .... 136
SCADA Threats in the Modern Airport (John McCarthy and William Mahoney) .... 141
Improving Public-Private Sector Cooperation on Cyber Event Reporting (Julie McNally) .... 147
Copyright Protection Based on Contextual Web Watermarking (Nighat Mir) .... 154
Towards a South African Crowd Control Model (Mapule Modise, Zama Dlamini, Sifiso Simelane, Linda Malinga, Thami Mnisi and Sipho Ngobeni) .... 159
A Vulnerability Model for a Bit-Induced Reality (Erik Moore) .... 169
Results From a SCADA-Based Cyber Security Competition (Heath Novak and Dan Likarish) .... 177
Design of a Hybrid Command and Control Mobile Botnet (Heloise Pieterse and Martin Olivier) .... 183
Functional Resilience, Functional Resonance and Threat Anticipation for Rapidly Developed Systems (David Rohret, Michael Kraft and Michael Vella) .... 193
What Lawyers Want: Legally Significant Questions That Only IT Specialists can Answer (Yaroslav Shiryaev) .... 203
The Weakest Link – The ICT Supply Chain and Information Warfare (Dan Shoemaker and Charles Wilson) .... 208
Thirst for Information: The Growing Pace of Information Warfare and Strengthening Positions of Russia, the USA and China (Inna Vasilyeva and Yana Vasilyeva) .... 215
Investigating Hypothesis Generation in Cyber Defense Analysis Through an Analogue Task (Rachel Vickhouse, Adam Bryant and Spencer Bryant) .... 221

PhD Papers .... 229

The Potential Threat of Cyber-Terrorism on National Security of Saudi Arabia (Abdulrahman Alqahtani) .... 231
Improving Cyber Warfare Decision-Making by Incorporating Leadership Styles and Situational Context into Poliheuristic Decision Theory (Daryl Caudle) .... 240

Work in Progress .... 249

Attack-Aware Supervisory Control and Data Acquisition (SCADA) (Otis Alexander, Sam Chung and Barbara Endicott-Popovsky) .... 251
Cyber Disarmament Treaties and the Failure to Consider Adequately Zero-Day Threats (Merritt Baer) .... 255
Evaluation of a Cryptographic Security Scheme for Air Traffic Control's Next Generation Upgrade (Cindy Finke, Jonathan Butts, Robert Mills and Michael Grimaila) .... 259
Attack Mitigation Through Memory Encryption of Security-Enhanced Commodity Processors (Michael Henson and Stephen Taylor) .... 265
Action and Reaction: Strategies and Tactics of the Current Political Cyberwarfare in Russia (Volodymyr Lysenko and Barbara Endicott-Popovsky) .... 269

Non Academic Papers .... 274

The Adam and Eve Paradox (Michael Kraft, David Rohret, Michael Vella and Jonathan Holston) .... 275
Offensive Cyber Initiative Framework (OCIF) Raid and ReSpawn Project (David Rohret, Michael Vella, and Michael Kraft) .... 284


Preface

These Proceedings are the work of researchers contributing to the 8th International Conference on Information Warfare and Security (ICIW 2013), hosted this year by Regis University, Denver, Colorado, USA. The Conference Chair is Daniel Likarish and the Programme Chair is Dr. Doug Hart, both from Regis University, Denver, Colorado, USA.

The opening keynote address this year is given by David L. Willson on the topic of "Active Defense: How to Legally Defend Beyond Your Network". The second day will be opened by William Hugh Murray, who will talk about "The Drums of War".

An important benefit of attending this conference is the ability to share ideas and meet the people who hold them. The range of papers will ensure an interesting and enlightening discussion over the full two-day schedule. The topics covered by the papers this year illustrate the depth of the information operations research area, with the subject matter ranging from the highly technical to the more strategic visions of the use and influence of information.

With an initial submission of 74 abstracts, after the double-blind peer review process there are 29 research papers, 2 PhD papers, 5 work-in-progress papers and 2 non-academic papers published in these Conference Proceedings, including contributions from Belgium, Estonia, France, Greece, India, Israel, the Netherlands, the Russian Federation, Saudi Arabia, South Africa, the United Kingdom and the United States.

I wish you a most enjoyable conference.

Dr. Douglas Hart
School of Computer & Information Sciences, Regis University, Denver, Colorado, USA
Programme Chair



Conference Committee

Conference Executive

Daniel M Likarish, Center on Information Assurance Studies, Regis University, Denver, Colorado, USA; Dr. Doug Hart, Regis University, Denver, Colorado, USA; Daniel T Kuehl, National Defense University, Washington, DC, USA; Leigh Armistead, Peregrine Technical Solutions LLC, USA; Andy Jones, Security Research Centre, BT, UK and Khalifa University, UAE; William Mahoney, The Peter Kiewit Institute, University of Nebraska Omaha, Omaha, USA

Mini Track Chairs

Dr. Robert F. Mills, Air Force Institute of Technology (AFIT), Wright-Patterson AFB, Dayton, Ohio, USA; Joey Jansen van Vuuren, Council for Scientific and Industrial Research (CSIR), South Africa; Dr Louise Leenen, Council for Scientific and Industrial Research (CSIR), South Africa; Dr Barbara Endicott-Popovsky, Center of Information Assurance and Cybersecurity, University of Washington, Seattle, USA; Dr Volodymyr Lysenko, Center of Information Assurance and Cybersecurity, University of Washington, Seattle, USA

Committee Members

The conference programme committee consists of key people in the information systems, information warfare and information security communities around the world. The following people have confirmed their participation:

Abukari Abdul Hanan (University for Development Studies, Ghana); Dr William Acosta (University of Toledo, USA); Gail-Joon Ahn (University of North Carolina at Charlotte, USA); Jim Alves-Foss (University of Idaho, USA); Major Todd Andel (University of South Alabama, USA); Dr Leigh Armistead (Edith Cowan University, Australia); Johnnes Arreymbi (University of East London, UK); Professor Richard Baskerville (Georgia State University, USA); Dr Alexander Bligh (Ariel University Center, Ariel, Israel); Dr Svet Braynov (University of Illinois, Springfield, USA); Dr Susan Brenner (University of Dayton, Ohio, USA); Dr Raymond Buettner (Naval Postgraduate School, USA); Dr Acma Bulent (Anadolu University, Eskisehir, Turkey); Ivan Burke (CSIR, Pretoria, South Africa); Dr Jonathan Butts (AFIT, USA); Dr Marco Carvalho (Institute for Human and Machine Cognition (IHMC), USA); Dr. Joobin Choobineh (Texas A&M University, USA); Prof. Sam Chung (University of Washington, Tacoma, USA); Dr Nathan Clarke (University of Plymouth, UK); Dr. Ronen Cohen (Ariel University Centre, Israel); Earl Crane (George Washington University, USA); Geoffrey Darnton (Requirements Analytics, UK); Dr Dipankar Dasgupta (Intelligent Security Systems Research Lab, University of Memphis, USA); Evan Dembskey (UNISA, South Africa); Dorothy Denning (Naval Postgraduate School, USA); Jayanthila Devi (Anna University, India); Dr Glenn Dietrich (University of Texas, San Antonio, USA); Dr Doug Hart (Regis University, USA); Prokopios Drogkaris (University of the Aegean, Greece); Barbara Endicott-Popovsky (Center for Information Assurance and Cybersecurity, University of Washington, Seattle, USA); Prof. Dr. Alptekin Erkollar (ETCOP, Austria); Dr Cris Ewell (Seattle Children's, USA); Larry Fleurantin (Larry R. Fleurantin & Associates, P.A., USA); Kenneth Geers (Cooperative Cyber Defence Centre of Excellence, USA); Kevin Gleason (KMG Consulting, MA, USA); Dr Samiksha Godara (Shamsher Bahadur Saxena College of Law, India); Prof. Dr. Tim Grant (Netherlands Defence Academy, Netherlands); Virginia Greiman (Boston University, USA); Dr Michael Grimaila (Air Force Institute of Technology, USA); Daniel Grosu (Wayne State University, Detroit, USA); Dr Drew Hamilton (Auburn University, Alabama, USA); Joel Harding (IO Institute, Association of Old Crows, USA); Dr Dwight Haworth (University of Nebraska at Omaha, USA); Dr Philip Hippensteel (Penn State University, Middletown, USA); Professor Bill Hutchinson (Edith Cowan University, Australia); Dr Berg Hyacinthe (Assas School of Law, Universite Paris II/CERSA-CNRS, France); Dr Cynthia Irvine (Naval Postgraduate School, USA); Ramkumar Jaganathan (VLB Janakiammal College of Arts and Science (affiliated to Bharathiar University), India); Joey Jansen van Vuuren (CSIR, South Africa); Dr Andy Jones (BT, UK); James Joshi (University of Pittsburgh, USA); Ayesha Khurram (National University of Sciences & Technology, Pakistan); Prashant Krishnamurthy (University of Pittsburgh, USA); Dr Dan Kuehl (National Defense University, USA); Takakazu Kurokawa (The National Defense Academy, Japan); Rauno Kuusisto (Finnish Defence Forces, Finland); Dr Tuija Kuusisto (Internal Security ICT Agency HALTIK, Finland); Arun Lakhotia (University of Louisiana Lafayette, USA); Michael Lavine (Johns Hopkins University's Information Security Institute, USA); Louise Leenen (CSIR, Pretoria, South Africa); Tara Leweling (Naval Postgraduate School, Pacific Grove, USA); Dan Likarish (Regis University, Denver, USA); Prof. Peter Likarish (Drew University, Madison, USA); Professor Sam Liles (Purdue University Calumet, USA); Cherie Long (Georgia Gwinnett College, Lawrenceville, GA, USA); Juan Lopez Jr. (Air Force Institute of Technology, USA); Dr Bin Lu (West Chester University of PA, USA); Volodymyr Lysenko (Center for Information Assurance and Cybersecurity, University of Washington, Seattle, USA); Fredrick Magaya (Kampala Capital City Authority, Uganda); Dr Bill Mahoney (University of Nebraska, Omaha, USA); Hossein Malekinezhad (Islamic Azad University of Naragh, Iran); Dr John McCarthy (Cranfield University, UK); Dr. Todd McDonald (Air Force Institute of Technology, USA); Dr Jeffrey McDonald (University of South Alabama, USA); Dr Robert Miller (National Defense University, USA); Dr Robert Mills (Air Force Institute of Technology, USA); Evangelos Moustakos (Middlesex University, UK); Dr Srinivas Mukkamala (New Mexico Tech, Socorro, USA); Dr Barry Mullins (Air Force Institute of Technology, USA); Muhammad Naveed (University of Engineering and Technology, Peshawar, Pakistan); Prof. Dr. Frank Ortmeier (Otto-von-Guericke Universität, Magdeburg, Germany); Rain Ottis (Cooperative Cyber Defence Centre of Excellence, Estonia); Dr Andrea Perego (European Commission - Joint Research Centre, Ispra, Italy); Dr Gilbert Peterson (Air Force Institute of Technology, USA); Pete Peterson (The George Washington University, USA); Andy Pettigrew (George Washington University, USA); Dr. Jackie Phahlamohlaka (Council for Scientific and Industrial Research, Pretoria, South Africa); Engur Pisirici (governmental - independent, Turkey); Dr Ken Revett (British University in Egypt, Egypt); Lieutenant Colonel Ernest Robinson (U.S. Marine Corps / Air War College, USA); Dr Neil Rowe (US Naval Postgraduate School, Monterey, USA); Daniel Ryan (National Defense University, Washington DC, USA); Julie Ryan (George Washington University, USA); Prof. Lili Saghafi (Canadian International College, Montreal, Canada); Ramanamurthy Saripalli (Pragati Engineering College, India); Sameer Saxena (IAHS Academy, Mahindra Special Services Group, India); Corey Schou (Idaho State University, USA); Dr Yilun Shang (University of Texas at San Antonio, USA); Dr Dan Shoemaker (University of Detroit Mercy, Detroit, USA); Prof. Ma Shuangge (Yale University, USA); Assoc. Prof. Dr. Risby Sohaimi (National Defence University of Malaysia, Malaysia); William Sousan (University of Nebraska, Omaha, USA); Prof Michael Stiber (University of Washington Bothell, USA); Dr Kevin Streff (Dakota State University, USA); Dennis Strouble (Air Force Institute of Technology, USA); Peter Thermos (Columbia University/Palindrome Technologies, USA); Dr Bhavani Thuraisingham (University of Texas at Dallas, USA); Major Eric Trias (Air Force Institute of Technology, USA); Dr Doug Twitchell (Illinois State University, USA); Dr Shambhu Upadhyaya (University at Buffalo, USA); Renier van Heerden (CSIR, Pretoria, South Africa); Stylianos Vidalis (Newport Business School, Newport, UK); Prof. Kumar Vijaya (High Court of Andhra Pradesh, India); Dr Natarajan Vijayarangan (Tata Consultancy Services Ltd, India); Fahad Waseem (University of Northumbria, UK); Dr Kenneth Webb (Edith Cowan University, Australia); Mohamed Reda Yaich (École nationale supérieure des mines, France); Enes Yurtoglu (Turkish Air War College, Turkey); Dr Zehai Zhou (University of Houston-Downtown, USA); Tanya Zlateva (Boston University, USA)



Biographies

Conference Chair

Dan Likarish is an assistant professor in the School of Computing & Information Sciences with responsibility for Information Assurance program coordination, students and research at Regis University. He is Director of the Colorado Front-range Center on Information Assurance Studies. His research and teaching interests are in the design and implementation of student cyber security competitions, security of critical SCADA infrastructure and virtualization of student lab exercises. He has installed and is calibrating a Radio Telescope for use as a K-Collegiate teaching instrument and directs the Rocky Mountain Collegiate Cyber Defense Competition. He is the recipient of various state, industry and federal grants and awards.

Programme Chair

Dr. Douglas Hart is a Professor in the School of Computer & Information Sciences. He is the Chair of the Information Technology Department and the Program Coordinator for the Software Engineering program in the School. Doug has over thirty years of experience in software development and scientific computing. His interests include signal processing and machine learning techniques for recognizing patterns in seismic data. His recent interests are in techniques for integration of software systems.

Keynote Speakers

David L. Willson is a leading authority in cyber security and the law. He is a licensed attorney in NY, CT, and CO, and owner of Titan Info Security Group, a risk management and cyber security law firm focused on technology and the law, helping companies lower the risk of a cyber-incident and reduce or eliminate the liability associated with loss or theft of information. He also assists companies with difficult legal/cyber-security issues. David is a retired Army JAG officer. During his 20 years in the Army he provided legal advice in computer network operations, information security and international law to the DoD and NSA and was the legal advisor for what is now CYBERCOM.

William Hugh Murray, CISSP. Bill is a management consultant and trainer in Information Assurance specializing in policy, governance, and applications. He is a Certified Information Systems Security Professional (CISSP) and has served as chairman of the Governance and Professional Practices committees of (ISC)². He has more than fifty years' experience in information technology and more than forty in security. He has been recognized by Information Security Magazine as a Pioneer in Computer Security.

Biographies of Presenting Authors

Otis Alexander is currently a student at the University of Washington, Tacoma. He is working towards a Bachelor of Science degree in Computer Science and Systems. His research interests include application-level intrusion detection systems for Supervisory Control and Data Acquisition (SCADA) and artificial intelligence based solutions for cybersecurity.

Abdulrahman Alqahtani is a Special Forces officer with the rank of major and has worked as a lecturer at King Fahd Security College for 12 years. He also served as Managing Editor of Security Research. Alqahtani holds a bachelor's degree in Security Studies (2000), a bachelor's degree in Doctrine and Perverted Groups (2006), and a master's degree in Strategy and International Security, UK (2010). He is currently studying for a PhD in the field of cyber-terrorism.

Chad Arnold received a B.A. degree in computer science from DePauw University in 2006 and an M.S. in computer science from California Lutheran University in 2008. He is currently working toward a Ph.D. in computer engineering and computer science at Wright State University while participating in collaborative research with the Air Force Institute of Technology.



Merritt Baer is a graduate of Harvard Law School and Harvard College. She conducted cyberlaw research at Harvard's Berkman Center for Internet & Society and clerked at the US Court of Appeals for the Armed Forces. She focuses on the intersection of cybercrime, Constitutional Internet law and national security. She serves as a Legislative Fellow in the US Senate.

Zachry Basnight is currently an MS in Cyber Operations student at the Air Force Institute of Technology. He received his BS in Computer Science from the United States Air Force Academy in 2009. Zack is an active duty 1st Lieutenant in the Air Force and his research interests include critical infrastructure protection and information assurance.

Adam Bryant earned a BS in Social Psychology from Park University in 2001, an MS in Information Resource Management from the Air Force Institute of Technology (AFIT) in 2007, a second MS in Computer Science from AFIT in 2007, and a PhD in Computer Science from AFIT in 2012.

Ivan Burke is an MSc student in the Department of Computer Science at the University of Pretoria, South Africa. He also works full time at the Council for Scientific and Industrial Research, South Africa, in the department of Defence, Peace, Safety and Security, where he works within the Command, Control and Information Warfare research group.

Dr. Daryl Caudle is a licensed professional engineer and career naval officer with over 27 years in the United States Submarine Force. He holds degrees from North Carolina State University (Chemical Engineering); the Naval Postgraduate School (MS, Physics); Old Dominion University (MS, Engineering Management); and the School of Advanced Studies, University of Phoenix (Doctor of Management).

Peter Chan is a motivated MSc student with an interest in computer security and formalising approaches to negating security attacks. He is employed as a researcher in the Defence, Peace, Safety and Security (DPSS) department at the CSIR, South Africa.

Dr. Matthew Crosston is the Miller Endowed Chair for Industrial and International Security and Director of the International Security and Intelligence Studies program at Bellevue University. Crosston has authored two books, several book chapters and nearly a dozen peer-reviewed articles on issues covering counter-terrorism, corruption, democratization, radical Islam, and cyber-deterrence.

Dr. Dahan (Hebrew University of Jerusalem, 2001) is a veteran resident of the Middle East. His research interests focus on two primary areas: ICT usage and diffusion in the MENA area, and Israeli and Palestinian politics. Currently a permanent lecturer at the School of Public Policy and Public Administration, Sapir College, he is also Head of the program in Political Communication at Tel Aviv-Yaffo College.

Dr. Dipankar Dasgupta is a Professor of Computer Science and the founding Director of the Center for Information Assurance at the University of Memphis, Tennessee, USA. His research interests include the application of Computational Intelligence in cyber security. He has received research funding from various federal organizations and has more than 200 publications which are widely cited.

Barbara Endicott-Popovsky is a Director for the Center of Information Assurance and Cybersecurity at the University of Washington, and Research Associate Professor with the Information School. Her academic career follows a 20-year career in industry marked by executive and consulting positions in IT architecture and project management. Barbara earned her Ph.D. in Computer Science/Computer Security from the University of Idaho.

Cindy Finke is currently an MS in Computer Science student at the Air Force Institute of Technology. She received her BS in Computer Science from the US Air Force Academy in 2005. Cindy is an active duty Air Force Captain assigned to Wright-Patterson AFB, OH. As a KC-135 pilot, she has accumulated 1,700+ hours of operational flight experience.

Klaus-Gerd Giesen is professor of political science at the Université d'Auvergne in Clermont-Ferrand, France, and a visiting professor at the Université de Lausanne, Switzerland. He is a specialist in international ethics and the international politics of technology. Previously, he was a professor in Germany and Belgium.

Dr. Samiksha Godara earned her B.A. (Law), LL.B., LL.M. (Criminal Law) and Ph.D. (Cyber Law) from M.D. University, Rohtak, Haryana, India. She has over six years' experience as a criminal lawyer in the District & Sessions Court, Rohtak. Presently, she is working as an Assistant Professor in SBS Law College, Rohtak.

Tim Grant is retired but an active researcher (Professor Emeritus, Netherlands Defence Academy). Tim has a BSc in Aeronautical Engineering (Bristol University), a Masters-level Defence Fellowship (Brunel University), and a PhD in Artificial Intelligence (Maastricht University). Tim's research spans the interplay between operational needs and ICT capabilities in network-enabled Command & Control systems.

Virginia Greiman is an Assistant Professor at Boston University in international law, cybercrime and regulation and project management, and an affiliated faculty member at the Harvard Kennedy School in cybertrafficking. She has more than 20 years of experience in international development and legal reform and has held high-level appointments with the U.S. Department of Justice.

Major Michael Henson is a PhD candidate in computer engineering at Dartmouth College, where his work focuses on the security of mobile devices. He holds a master's degree in computer science (information assurance) from the Air Force Institute of Technology. He has developed and taught network security at the United States Air Force Academy.

Jonathan L. Holston, CSC, Inc., Joint Information Operations Warfare Center (JIOWC). Mr. Holston served in the US Air Force as a vulnerability analyst assigned to the National Security Agency. His research interests include identifying third-world adversarial attack methodologies on communication networks and satellite communications and their associated vulnerabilities.

Dr Barry Irwin heads the Security and Network Research Group (SNRG) in the Department of Computer Science at Rhodes University. His research interests are in network modelling and the application of network telescopes and honeypots for cyber security. He is also the Chapter lead for the South African Honeynet project.

Joey Jansen van Vuuren is the Research Group Leader for Cyber Defence at the CSIR, South Africa. This research group is mainly involved in research for the SANDF and Government sectors. Her research is focused on national security and the analysis of cyber threats using non-quantitative modelling techniques. She is also actively involved in facilitating cyber awareness programs in South Africa.

Dr. Charles A. Kamhoua received his M.S. in Telecommunication and Networking and his PhD in Electrical Engineering from Florida International University in 2008 and 2011 respectively. He is currently a postdoctoral fellow at the Air Force Research Laboratory. His interdisciplinary research areas include game theory, cybersecurity, survivability, fault-tolerant networks, and ad hoc networks.

Rainer Koelle is a Senior ATM Security expert with EUROCONTROL, Brussels. Rainer holds a PhD from Lancaster University, 2012, and a Diploma (MSc) in Electrical Engineering (Communication Systems) from the University of the German Federal Armed Forces, Hamburg, 1994. He is a researcher with Lancaster University, Aviation Security Group, in the field of Situation Management.

Michael E. Kraft, CSC, Inc., Joint Information Operations Warfare Center (JIOWC). For more than ten years Mr. Kraft has been deeply involved with Information Assurance and network security. He holds a Master of Science in Information Assurance degree from Capitol College of Maryland. Mr. Kraft is a Certified Information Systems Security Professional (CISSP).

Dr. Andrew Liaropoulos is a Lecturer at the University of Piraeus, Department of International and European Studies, Greece. He also teaches in the National Security College, the Air War College and the Naval Staff Command College. He is also a Senior Analyst in the Research Institute for European and American Studies.

Volodymyr Lysenko is a research scientist at the Center of Information Assurance and Cybersecurity. He is a graduate of the Ph.D. program in Information Science at the Information School of the University of Washington, Seattle. He also has a degree in Physics. Volodymyr's research interests are in the area of political cyberprotests and cyberwars in the international context.

Dr. William Mahoney received his B.A. and B.S. degrees from Southern Illinois University, and his M.A. and Ph.D. degrees from the University of Nebraska. He is an Associate Professor in the College of Information Science and Technology, University of Nebraska at Omaha, and is the Director of the Nebraska University Center for Information Assurance (NUCIA).

Julie McNally is a master's student in the International Security and Intelligence Studies program at Bellevue University in Bellevue, Nebraska. She is an Intelligence Community Center of Academic Excellence IC Scholar.



Nighat Mir is working as an Assistant Professor in the Computer Science Department, College of Engineering, and as an Institutional Research Coordinator in the Quality Assurance Department at Effat University, Jeddah, Saudi Arabia. Her major is information security and her research focus is in the fields of Digital Watermarking, Cryptography and Steganography.

Erik Moore has served as Co-Director of the Center for Information Assurance Studies at Regis University, as Associate Dean of Engineering and Information Sciences at DeVry University, and is currently the Director of Academic Computing Services for the Adams 12 school district in Colorado. His research on security and virtualization has been presented at SEC2011 and HICSS2010.

Heath Novak is a recent graduate of the Master of Information Assurance program at Regis University and a 2010 recipient of the United States Department of Defense Information Assurance Scholarship. Academic contributions include aiding Regis University faculty in designing and implementing cyber security competitions hosted by the university, most recently CANVAS 2011 and RMCCDC 2012.

Heloise Pieterse is an MSc student in the Department of Computer Science at the University of Pretoria, South Africa. She is currently on a studentship program at the Council for Scientific and Industrial Research and works within the Command, Control and Information Warfare research group. Her research interests include information security and mobile devices.

David M. Rohret, CSC, Inc., Joint Information Operations Warfare Center (JIOWC). Mr. Rohret has pursued network security interests for over 20 years, including developing and vetting exploits for use with red teams and for adversarial research. He holds degrees in CS from the University of Iowa, 1981, and La Salle University, 1994. Mr. Rohret is a member of the IEEE Computer Society and is currently a Senior Principal Systems Engineer for the Computer Sciences Corporation (CSC).

Yaroslav Shiryaev is a PhD candidate at the University of Warwick. His doctoral research investigates the deficiencies of the existing international law regime in covering the threat of cyber-attacks and cyberterrorism. Yaroslav's life experiences are quite diverse and include, for example, compulsory military service, volunteering in Uganda and Kosovo, and traveling to the North Pole.

Inna Vasilyeva is currently a senior student at the Kuban State University of Technology, Faculty of Computer Security and Information Defense, Russia. Her research interests are information security, information/cyber security awareness, intelligent systems and information operations. She is actively involved in the scientific life of her university, specializing in the field of Information Warfare.

Michael P. Vella, CSC, Inc., Joint Information Operations Warfare Center (JIOWC). For more than fifteen years Mr. Vella has been deeply involved with computer network security, with a specialty in pen-testing over the last five years. Mr. Vella is a Certified Information Systems Security Professional (CISSP), a Certified Ethical Hacker (CEH), and CompTIA Security+ certified.

Dr Jannie Zaaiman is the Deputy Vice Chancellor: Operations of the University of Venda in the Limpopo Province, South Africa. Before entering the academic world, he was, inter alia, Group Company Secretary of Sasol, Managing Executive: Outsourcing and Divestitures at Telkom, and Group Manager at the Development Bank of Southern Africa. His area of research is cyber security awareness, especially in rural areas of South Africa.




Strategies for Combating Sophisticated Attacks

Chad Arnold², Jonathan Butts¹ and Krishnaprasad Thirunarayan²
¹Department of Electrical and Computer Engineering, Air Force Institute of Technology, Wright-Patterson AFB, Dayton, Ohio, USA
²Department of Computer Science and Engineering, Wright State University, Dayton, Ohio, USA
Arnold.102@wright.edu
Jonathan.Butts@afit.edu
t.k.prasad@wright.edu

Abstract: Industrial control systems (ICS) monitor and control the processes of public utility infrastructures that society depends on: the electric power grid, oil and gas pipelines, transportation and water facilities. Attacks that impact the operations of these critical assets could have devastating consequences. Yet the complexity and desire to interconnect ICS components have introduced vulnerabilities and attack surfaces that previously did not exist. Cyber attacks are increasing in sophistication and have demonstrated an ability to cross over and create effects in the physical domain. Most notably, ICS associated with the critical infrastructure have proven susceptible to sophisticated, targeted attacks. The numerous communication paths, various ingress and egress points, diversity of technology and operating requirements provide myriad opportunities for a motivated adversary. Indeed, the complex systems enable both traditional and nontraditional attack surfaces. Current defense strategies and guidelines focus on defense-in-depth as a core component to protect critical resources. System security relies on multiple protection mechanisms to present an attacker with various challenges to overcome. This strategy, however, is not adequate for safeguarding critical assets against sophisticated attacks. This paper analyzes current ICS defense strategies and demonstrates that defense-in-depth alone is not a successful means for preventing attacks. Findings indicate that a paradigm shift is required to thwart advanced threats. As an alternative, cyber security for ICS is examined from the notion of weakest link as opposed to the current recommended strategies. Recent examples, including Stuxnet, are examined to shed light on the next-generation targeted attack in the context of current defensive strategies. The results demonstrate that current defense-in-depth strategies are necessary but not sufficient.

Keywords: ICS security, defense-in-depth limitations, critical infrastructure protection

1. Introduction

As industrial control systems grow in complexity and are connected to business and external networks, the number of security issues and the associated risks grow as well (US-CERT, 2009). Cyber attacks are increasing in sophistication, and new guidelines are required to adapt to next-generation attacks. A single security product, technology or solution alone cannot adequately protect an ICS. Indeed, a multiple-layer strategy involving two or more overlapping security mechanisms, a technique known as defense-in-depth, has been recommended to minimize the impact of a failure (Rebane, 2001). Defense-in-depth uses multiple layers of defense and diverse strategies to prevent an attacker from successfully penetrating an ICS network. The strategies implement subsequent layers of defense to present an attacker with progressively more critical challenges to overcome.

In general, attacks can be targeted or indiscriminate depending on conditions surrounding the impacted entity. In either situation, if a trusted component becomes compromised, an attacker may be able to gain access to other components and create cascading effects downstream. While the popular defense-in-depth techniques may work against indiscriminate attacks, these strategies alone are not sufficient against targeted attacks.

An indiscriminate attack is an attack that is not directed at a specific company, individual, or process. This may consist of a common virus distributed over email or drive-by downloads from malicious websites that infect random machines. The cyber incident involving the Browns Ferry nuclear plant is indicative of such an event, as excessive network traffic caused problems with recirculation pumps (U.S. Nuclear Regulatory Commission, 2007). Browns Ferry Unit 3 suffered from a broadcast storm, which is representative of many unintentional ICS cyber incidents. Many non-nuclear facilities have also experienced similar broadcast storms that have impacted the operation of power plants, refineries, and energy management systems (Weiss, 2010). These indiscriminate attacks are generic and can impact many devices that cannot handle the flood of data. In the historical instances surrounding ICS environments, the attacks typically exploited Windows platforms; the effects on ICS operations were the indirect result of using systems that contained the vulnerability.



Targeted attacks are specific attacks designed to affect a particular person, network, process, or end device on a network. These are typically more complex than indiscriminate attacks. Developers of targeted attacks likely possess deep insider knowledge of the environment, such as the architecture, software, and the interaction between components (Brunner et al., 2010). Such sophisticated attacks are becoming increasingly popular and are difficult to combat due to their technical ingenuity and complexity. Intrusion detection or other defensive systems that rely solely on signatures may not recognize such attacks since there will be no signature for them. Stuxnet, discovered in 2010, is an example of a targeted attack containing advanced malware that was specifically designed to target the Siemens Simatic S7 product line used in ICS environments. The malware targeted field devices and related control components in a fashion never publicly seen before and demonstrates how a motivated adversary can cause significant havoc with steadfast preparation and execution.

2. ICS distributed applications

Sophisticated and targeted cyber intrusions against owners and operators of ICS across multiple critical infrastructure sectors have dramatically increased in the past two years (Industrial Control Systems Cyber Emergency Response Team Control Systems Security Program, 2012). In general, ICS is a term that can represent several different control systems such as a process control system (PCS), distributed control system (DCS), or supervisory control and data acquisition (SCADA) system (Macaulay and Singer, 2012). ICS gather information from a variety of endpoint devices about the current status of a production process and can be fully or partially automated. ICS can be relatively simple or incredibly complex depending on the application and underlying architecture that is implemented.

Figure 1 represents a notional model of example domains and actors associated with the ICS environment. The diagram depicts a smart grid implementation and shows its operational intricacies. The domains consist of customers, markets, service providers, operations, bulk generation, transmission, and distribution. Actors include devices, systems, or programs that make decisions and exchange information necessary for performing applications (Office of the National Coordinator for Smart Grid Interoperability, 2010). At the ICS level, each individual network can be separated physically or logically depending on the underlying architecture. The interconnection of asset owners, companies, consumers, and customers adds to the overall system complexity. Additionally, connections to the Internet provide convenience as well as introduce potential security risks.

Figure 1: Conceptual model (Office of the National Coordinator for Smart Grid Interoperability, 2010)



Cyber threats to an ICS include myriad threat vectors, including non-typical network protocols, commands that cannot be blocked due to safety or production issues (e.g., alarm and event traffic), and otherwise valid communications used by an attacker in invalid ways (Macaulay and Singer, 2012). The numerous communication paths, various ingress and egress points, and diversity of control systems provide many opportunities for a motivated adversary to perform a cyber attack.

3. Current defense strategies

Government organizations and standards bodies recommend defense-in-depth as the primary strategy for achieving information assurance in computer networks and ICS communications (US-CERT, 2009), (Office of the National Coordinator for Smart Grid Interoperability, 2010), (National Security Agency, n.d.). Multiple layers of defense can be established to detect and mitigate many security issues. Several strategies are recommended for Internet-facing control systems (Industrial Control Systems Cyber Emergency Response Team, 2011) but can also be applied to non-Internet-facing networks. Dividing control system functions into zones creates clear boundaries that assist in effectively applying the appropriate level of defense. Given that adversaries can attack a target from multiple points using either internal or external access, organizations deploy protection mechanisms at multiple locations to resist all classes of attacks (National Security Agency, n.d.). Focus areas generally include the network and infrastructure perimeter, enclave boundaries, and trusted communication paths. Physical or virtual boundaries can be established based on functional responsibilities and may include an external zone, corporate zone, data zone, control zone, and safety zone, each protected by a variety of security mechanisms (US-CERT, 2009). Figure 2 depicts typical security zones and associated security devices that provide several layers of defense. Several components can be integrated together to create a solid defense-in-depth foundation. The core elements are described below.

Figure 2: ICS zones and security mechanisms (US‐CERT, 2009)

Network segmentation is used to create demilitarized zones (DMZs). This can be accomplished with multiple routers and firewalls to provide granularity in defining access rights and privileges for separate functions.
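As a rough illustration of how zone boundaries translate into enforceable policy, the sketch below models inter-zone traffic rules as an explicit, default-deny allow-list. The zone names follow the US-CERT model above; the permitted flows themselves are hypothetical examples, not prescribed rules.

```python
# Minimal sketch of zone-based segmentation policy, assuming the
# external / corporate / data / control / safety zones described above.
# The specific allowed flows are hypothetical, not a standard.
ALLOWED_FLOWS = {
    ("corporate", "data"),   # e.g., historian replication through the DMZ
    ("data", "control"),     # e.g., HMI queries into the control zone
}

def flow_permitted(src_zone: str, dst_zone: str) -> bool:
    """Default-deny: a flow is allowed only if explicitly listed."""
    return (src_zone, dst_zone) in ALLOWED_FLOWS

# A router or firewall at each zone boundary would consult such a policy,
# so corporate traffic cannot reach the control zone directly:
print(flow_permitted("corporate", "control"))  # False: must traverse the data zone
print(flow_permitted("data", "control"))       # True
```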




Firewalls are implemented at different networking layers to filter unwanted traffic. Many firewall options exist, such as packet filter firewalls, proxy gateway firewalls, host-based firewalls, or field-level firewalls, that may be appropriate depending on the given control architecture.

Passive intrusion detection systems (IDS) or active intrusion prevention systems (IPS), typically using signature‐based checking, are used to monitor and sometimes take action on network activity that is unusual or unauthorized. Passive detection systems are generally used since availability is important in ICS applications. However, certain activity and abnormal traffic can trigger active responses depending on the location of the defensive system. Like a firewall, an IDS can be placed at ingress and egress points in the architecture or at the critical connectivity points such as security zones.
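To make the signature-checking idea concrete, the following minimal sketch flags a payload only when it contains a known byte pattern; the signature names and byte sequences are invented for illustration, and production IDS use far richer rule languages. The second check previews the limitation discussed in Section 5: a slightly mutated payload matches nothing.

```python
# Minimal sketch of signature-based detection: flag a payload only if it
# contains a known byte pattern. All signatures here are hypothetical.
SIGNATURES = {
    "example-worm-A": b"\x90\x90\xeb\x1f",   # invented byte sequence
    "example-worm-B": b"MALICIOUS_MARKER",   # invented marker string
}

def match_signatures(payload: bytes):
    """Return the names of all signatures found in the payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

print(match_signatures(b"...MALICIOUS_MARKER..."))  # ['example-worm-B']
# A novel or slightly mutated payload matches nothing -- the core
# weakness against targeted malware:
print(match_signatures(b"...MALICI0US_MARKER..."))  # []
```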

Policies and procedures define guidelines for training all personnel, patching vulnerable components, analyzing event logs, responding to security incidents, and mitigating risk. A well-defined and properly executed plan is critical to the success of the defensive strategy. Security Information and Event Management (SIEM) technologies can collect, aggregate, and display log information for various events and provide insight for effective incident response, forensic activities, and risk mitigation.
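The sketch below illustrates SIEM-style aggregation at its simplest: events collected from many sources are correlated by a rule, here a hypothetical threshold on repeated authentication failures. The event format and the threshold are assumptions for illustration only.

```python
# Minimal sketch of SIEM-style aggregation: collect events from multiple
# sources and raise an alert on a simple correlation rule. The event
# format and the threshold of 5 failures are assumptions.
from collections import Counter

events = [
    {"source": "hmi-01", "type": "auth_failure", "user": "operator"},
    {"source": "hmi-01", "type": "auth_failure", "user": "operator"},
    # ... aggregated from firewalls, IDS sensors, historians, etc.
]

def correlate_auth_failures(events, threshold=5):
    """Alert when any one source reports too many authentication failures."""
    failures = Counter(e["source"] for e in events if e["type"] == "auth_failure")
    return [src for src, count in failures.items() if count >= threshold]

print(correlate_auth_failures(events))  # [] until a source crosses the threshold
```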

Several security mechanisms are critical to defense‐in‐depth. Many defensive components are common in traditional information technology deployments; however, in ICS domains, it is important to adapt firewall rule sets, IDS attack signatures, and audit log software appropriately to protect data and communications. These defenses can improve security at several layers and assist in providing a secure network against indiscriminate attacks.

4. Case studies

Many components are necessary to successfully implement defense-in-depth strategies. Network architects and administrators implement multiple security zones, firewalls on the perimeters, intrusion detection systems, and other security mechanisms to prevent attackers from penetrating ICS networks. The degree of protection can be limited by deployed technologies and ICS configuration requirements. Implementations vary by industry and are sometimes constrained by limited resources. While defense-in-depth techniques have been prescribed, a specific implementation may not conform exactly to the model, nor may it contain every available defensive component. A network administrator or engineer for an ICS may interpret and implement defense-in-depth principles differently due to the uniqueness of the underlying SCADA system.

Figure 3 was created to demonstrate a realistic SCADA configuration of an oil pipeline company (National Transportation Safety Board (NTSB), 2002) with defense-in-depth techniques applied. While the depicted layers of defense can stop many indiscriminate attacks, they may only delay a motivated adversary. In this example, with best practices applied, many vulnerabilities remain. An adversary may choose to exploit witting or unwitting insiders with USB drives or supply chain CDs or DVDs, or to exploit firmware upgrades. Dial-up modems, external terminal connections, and other unknown connections present additional weaknesses and enable non-traditional access points. In addition, wireless devices and sensors may provide additional injection opportunities for an adversary. There are many avenues that a motivated adversary can explore, and it is very difficult to safeguard against all attack vectors.

Even with defense-in-depth security measures in place, ICS are still susceptible to sophisticated cyber attacks or failures. ICS have seen an increase in intentional and unintentional cyber attacks over the past ten years (Rebane, 2001). Several reported incidents are described below which highlight system damage from worms, viruses, and other malicious cyber attacks. The attacks in this section are referred to as indiscriminate attacks, which might have been prevented with proper defense-in-depth elements.

In 2003, the Sobig virus infected computers at the Amtrak dispatching headquarters, causing signaling systems to shut down and halting ten trains between Pennsylvania and South Carolina (Niland, 2003). The Slammer worm penetrated a computer at an Ohio nuclear plant in 2003, causing the safety monitoring system to be disabled for nearly five hours (Poulsen, 2003). At the Browns Ferry nuclear power plant in 2006, a "data storm" spike in traffic caused a programmable logic controller (PLC) to crash, resulting in the failure of recirculation pumps and forcing a manual reactor shutdown (United States Nuclear Regulatory Commission Office of Nuclear Reactor Regulation, April 2007).




Figure 3: Notional SCADA network diagram

In August 2012, Saudi Aramco, the world's biggest oil company, was attacked by the Shamoon virus, which spread across the corporate network and erased 30,000 hard drives (Fineren and Bakr, 2012). According to reports, the virus did not directly impact the control systems or oil field data but instead affected the corporate network. This was only one of several attacks that have indirectly affected control systems and related networks. A few months earlier, in April 2012, the National Iranian Oil Company, the second largest crude producer, was also affected by malicious software (Nasseri, 2012). Many additional significant cyber incidents have occurred over the past several years (CSIS: Center for Strategic & International Studies, n.d.). Fortunately, the majority of historical events are the result of secondary effects and the damage has been minor. The absence of an overwhelming disaster, however, only perpetuates a false sense of security. Indeed, more sophisticated, targeted attacks could result in significant disruption or mass casualties.

5. Limitations of defense-in-depth strategies

Many layers of defense are described in this paper to potentially thwart cyber attacks. In the first set of case studies, the intrusions may have been detected, deterred, or possibly prevented if recommended security practices had been in place. While similar strategies may protect against indiscriminate attacks, they will likely not succeed in preventing sophisticated, targeted attacks. Indeed, cyber security is a weakest-link issue, as non-traditional attack vectors are used in combination with sophisticated malware. This combination can allow adversaries to bypass network perimeters to gain access to areas that are assumed to be air-gapped. Traditionally, air-gapped networks, originally defined as having no external connections or direct access to the Internet, were thought to be secure. Note that many ICS were originally air-gapped but are becoming more connected for convenience.

A targeted attack can easily defeat traditional intrusion detection and related layered defensive technologies. Sophisticated, targeted malware generally presents unprecedented technical ingenuity and complexity and is difficult for traditional security devices to detect or thwart. Many existing solutions are reactive and require the presence of a known signature or predetermined behavior pattern for a threat to be detected. For example, an IDS can detect a wide range of attacks based on existing attack signatures, network traffic patterns, filenames, or file hashes. However, the signatures required to monitor for malicious traffic in many control networks are not adequate (US-CERT, 2009). Signature databases contain millions of signatures, and most antivirus software solutions fail to detect between 40% and 90% of novel malware less than two weeks old (Macaulay and Singer, 2012).

The Stuxnet virus, discovered in 2010, is a prime example of sophisticated, targeted malware that bypassed traditional security defenses and used USB thumb drives to spread past network perimeters. Additionally, in 2001, a disgruntled former employee launched a wireless attack on a sewage facility in Maroochy Shire, Queensland, which released millions of gallons of raw sewage into parks and rivers (Slay and Miller, 2008). The employee used authorized credentials and knowledge of the operating environment to achieve specific effects. Indeed, these two examples highlight scenarios where the defense-in-depth strategy is not sufficient.

Further examination of Stuxnet shows the malware was introduced into the target network, initially through a Windows PC, using a USB drive and an unsuspecting human user. From the compromised machine, the malware propagated via the enterprise network through additional USB drives, infected PLC programming project files, network shares, and other methods, utilizing several zero-day vulnerabilities in the Windows environment until it reached a control PC (Symantec: Nicolas Falliere, Liam O Murchu, and Eric Chien, 2011). The control PC is generally a Windows machine that is used to program PLCs. The malware traversed the network and installed itself on systems, avoiding detection by using valid (stolen) certificates. Stuxnet masked its presence on each PC by creating a rootkit and removing itself from USB devices after a set number of infections. Once the malware identified a control PC running WinCC or Step 7 PLC control software, alternate code was prepared, injected, and ultimately transferred to the specified PLC. The payload of Stuxnet was the compiled code that reprogrammed the end-target PLC to manipulate the industrial processes. Disabling pumps, progressively activating turbines, and modifying speeds were a few of the Stuxnet operations. A targeted attack using nontraditional access is depicted in Figure 4.

Figure 4: Targeted Stuxnet attack

Attackers that create targeted malware require deep insight and insider knowledge about the target environment settings to achieve desired goals. This information can be gathered using various methods, including spear phishing to target an employee and compromise a legitimate account on the network. Additionally, a manager or remote operator may use virtual private network (VPN) access to perform maintenance or operational updates. Because these access points are considered trusted, they are not protected by the traditional defense-in-depth security mechanisms; compromise of any access point enables the attacker to become a trusted agent on the system.

Stuxnet demonstrates a new class and dimension of malware, as it acts as a worm, virus, and exploit in a single package. Stuxnet uses a multi-staged attack vector to propagate to the targeted PLC. The worm is first introduced in the Windows environment. This is significant since the initial target is a different operating system and environment than the end target. It is difficult to notice infections from attacks such as Stuxnet on non-targeted systems since its presence is masked and does not impact the functionality of non-ICS operations.



Stuxnet highlights the nontraditional inputs (e.g., USB drives) that can be exploited by attackers using cyber methods. Unlike traditional cyber attacks, Stuxnet did not enter the network through the Internet via a compromised firewall or other ingress point. Non-traditional devices such as portable hard drives, personal laptops, music players, and cell phones can be extremely difficult to control and can allow a system to become infected with targeted malware that bridges air-gapped networks. There are legitimate uses for these access points (e.g., applying system patches and updating software). As such, constraining the attack surface is nontrivial. In addition, new attack surfaces are introduced during system upgrades and architecture enhancements.

6. Alternative defense strategies

Implementing best practices and recommended security procedures, such as network segmentation and intrusion detection systems, can often deter attacks, significantly reduce the time required to detect an attack, and reduce the impact of cyber attacks. However, as demonstrated, strategies consisting of defense-in-depth alone are not sufficient. Indeed, these techniques only attempt to secure the network from the outside in. To complement this approach, the network must also be evaluated from the inside out. This approach can help focus resources to more effectively combat targeted attacks.

New approaches to combating cyber attacks must also be proactive and evolve with time. Securing a network can begin with layered defenses to detect indiscriminate attacks. This can include reactive technologies, which are efficient in detecting and characterizing known threats. However, for previously unseen cyber threats, additional strategies are needed to look for targeted malware. Existing recommendations can leave internal network components vulnerable to attack since defense-in-depth only protects the outermost perimeter and network layers. Inside these boundaries lie trusted components that can generally communicate without limitation or supervision from defensive technologies.

A new security paradigm is required. Individual components should not be blindly trusted because of their location inside a network. Currently, individual components lack validation. Processes and communications within the internal network should instead remain untrusted until proven otherwise. With this approach, input and output validation at the component level is emphasized, as sketched below. Note that this notion is contrary to current security practices, which do not evaluate every input and output connection starting at the core. Building trust chains from the inside out allows evaluation and prioritization of assets in order to focus efforts and enable graceful degradation in the event of a cyber attack.
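A minimal sketch of what "untrusted until proven otherwise" could look like at the component level, assuming a hypothetical command schema: every inbound message is validated against an explicit allow-list of commands and parameter bounds before the component acts on it, regardless of where inside the network the message originated.

```python
# Minimal sketch of component-level input validation, in the spirit of the
# inside-out approach described above. Field names and bounds are
# hypothetical; a real PLC command schema would be device-specific.
COMMAND_SCHEMA = {
    "set_pump_speed": {"rpm": (0, 3600)},   # allowed command and parameter bounds
}

def validate_command(msg: dict) -> bool:
    """Reject any message that is not an explicitly allowed, in-bounds command."""
    params = COMMAND_SCHEMA.get(msg.get("command"))
    if params is None:
        return False  # unknown commands are untrusted by default
    for field, (low, high) in params.items():
        value = msg.get(field)
        if not isinstance(value, (int, float)) or not low <= value <= high:
            return False
    return True

print(validate_command({"command": "set_pump_speed", "rpm": 1200}))    # True
print(validate_command({"command": "set_pump_speed", "rpm": 999999}))  # False
print(validate_command({"command": "reflash_firmware"}))               # False
```

Under this model, a malformed or out-of-range command is rejected even when it arrives from a nominally trusted engineering workstation, which is precisely the path a Stuxnet-class attack exploits.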

7. Conclusion and future work

This paper evaluates traditional security defense strategies from a perspective that challenges the current recommendations of standards organizations. Defense-in-depth is defined and applied in the context of ICS and network security. A notional ICS network, with security devices applied per standards organizations, is presented, and several examples demonstrate the security challenges that ICSs face from cyber actors. The difference between indiscriminate and sophisticated, targeted attacks is described in detail. The necessity for evaluation methods beyond the recommended defense-in-depth becomes apparent when traditional strategies fall short, as in cases such as Stuxnet. Alternative strategies are discussed that aim to protect ICSs in areas where defense-in-depth falls short. Finally, a new method to evaluate ICS cyber security is discussed, as these alternative strategies lay the groundwork for a future evaluation model. Future work will introduce a new framework for analyzing exposures using input and output validation in an ICS environment containing malicious activity.

ICS networks and related components will continue to experience cyber attacks. Preventing destruction or severe effects across critical assets, which could have devastating consequences, is necessary; ICS networks need to be able to detect and withstand cyber attacks. Standards organizations must evolve their existing recommendations to expand defense-in-depth strategies. While experts agree that defense-in-depth is necessary, it has been demonstrated that it is not sufficient, especially when combating targeted, sophisticated cyber attacks.



Analysis of Programmable Logic Controller Firmware for Threat Assessment and Forensic Investigation

Zachry Basnight, Jonathan Butts, Juan Lopez and Thomas Dube
Air Force Institute of Technology, Wright-Patterson Air Force Base, USA
zachry.basnight@afit.edu
jonathan.butts@afit.edu
juan.lopez@afit.edu
thomas.dube@afit.edu

Abstract: Modern industrial control systems (ICSs) regulate operations over a variety of different applications. Of most interest to national security is the role ICSs play in the management of critical infrastructure (CI) such as the national power grid, water treatment, and chemical industry. The control systems used in such sectors are developing into highly networked collections of distributed devices. Unfortunately, security has only recently become a topic of major concern for these devices. This leaves many implementations without secure configurations due to their long lifespan compared to the rate of advancing threats. In the paradigm of ICSs, programmable logic controllers (PLCs) represent the front line between the cyber world and physical systems. Attacks like Stuxnet have already proven the effectiveness of cyber-physical attacks by altering and disguising PLC programming, but the next generation of threats will likely focus on PLC firmware. Just as traditional computer malware evolved to hide itself using operating system-level rootkits, so will ICS attacks evolve to embed themselves in the PLC equivalent: the firmware. Since little research has been done in the area of PLC firmware security, this paper begins by addressing the related security concerns. One such concern is the application of digital forensics to a potential incident of ICS attack. Forensic investigations of digital devices have traditionally been limited to the analysis of typical computer systems like desktops or laptops. As forensic capabilities begin to expand into the scope of embedded devices like smartphones, parallels can be drawn to PLCs that will enable the development of more advanced forensic tools and processes. By performing a firmware analysis through reverse engineering, a PLC can be exploited for both malicious and forensic purposes. This paper discusses the techniques and procedures required to access, inspect, and manipulate firmware for an Allen-Bradley PLC to suit the purposes of the examiner. From this analysis, lessons can be learned not only about the capabilities and methods required by a potential attacker, but also about the accessibility and effectiveness of recovering PLC firmware for forensic investigation of a potential attack.

Keywords: industrial control system, programmable logic controller, firmware, embedded device, forensics, threat assessment

1. Background

Industrial control systems (ICSs) today are responsible for the operation of many different processes, including various critical infrastructure (CI) sectors such as the national power grid, water treatment, and the chemical industry. ICSs typically consist of networks of physical assets, control devices, and management systems collectively referred to as supervisory control and data acquisition (SCADA) systems. In a SCADA system, human operators use the management systems to monitor and program control devices called programmable logic controllers (PLCs). In turn, PLCs follow their programming to monitor and control physical aspects of the SCADA system such as temperature sensors, valves, and servos (Stouffer 2011).

The ability of control systems to affect physical operations through cyber means presents an enticing target for attack. In 2010, a malicious program called Stuxnet was discovered that provided a powerful example of this threat. Stuxnet targeted Iranian nuclear fuel enrichment plants by infecting a piece of Windows software called Step 7 that is used to program PLCs. An infected copy of Step 7 maliciously reprogrammed PLCs that controlled gas centrifuge motors used for the nuclear enrichment process. The reprogrammed PLCs would then vary the operating speed of those motors to cause damage and prevent proper operation (Falliere 2011, Langner 2011). This largely effective attack brought to light the inadequacies of ICS security.

2. Future threats and concerns

The capabilities of malicious actors are constantly advancing, and advanced attacks seen today quickly become a new standard for future attacks. Take for example the evolution of computer malware. Early malware had little capability to hide itself from the user or operating system. This began to change as rootkits became more common and began to attack progressively lower levels of the system in an effort to better protect themselves. A parallel can be drawn to PLC security. Stuxnet contained a relatively simple form of rootkit that hid its existence from the infected programming computer. This functionality, however, provided no ability to hide Stuxnet's actions on the PLC itself. An uninfected Step 7 computer connected to an affected PLC could easily see the modifications made to the ladder logic by Stuxnet. Therefore, the next logical step in the evolution of such an attack is to place a rootkit on the PLC itself. McMinn (McMinn 2012) describes three possible layers of attack on the PLC itself: (i) programming (i.e., ladder logic), (ii) firmware (i.e., operating system), and (iii) hardware. The first of these is considered a minimal threat because any modification of ladder logic programming is readily identified by any secure management computer. The physical security and verification of digital circuit components is itself a complex issue currently under research (McFadden 2010, Tehranipoor 2011); indeed, most issues surrounding hardware verification are associated with supply chain compromise. As such, firmware remains the most viable threat from an advanced attacker desiring full control over PLC functionality.

Given this firmware security threat, there are three main concerns that should be addressed to provide appropriate protection for these systems. The first is the risk of remote manipulation and alteration of firmware. This type of attack would attempt to force an unintended firmware update containing malicious logic to a PLC from across a network connected to the target device. Note that this threat is not limited to open networks. For example, an attack may first target a Windows PC on an air-gapped SCADA network, as with Stuxnet; from there, the malware payload could be programmed to force a firmware update from inside the network. The second concern is the risk of an operator willingly uploading a supposedly legitimate firmware update that in actuality contains malicious code. Section 4 discusses possible defenses against these first two concerns. The third concern is the current lack of adequate post-mortem forensic analysis capabilities for PLCs. Specifically, the ability to perform a forensic analysis on PLC firmware is noticeably absent. The primary issue at the moment is the challenge of retrieving the firmware code from an affected device in an efficient manner; without this capability, the rest of the forensic process is bottlenecked. Section 5 addresses various options for obtaining a firmware dump from a PLC.

3. Reversing and customizing firmware

The research presented in this paper focuses on Allen-Bradley brand PLCs manufactured by Rockwell Automation. Allen-Bradley PLCs are among the most commonly used for industrial control applications in the United States; therefore, this selection provides a basis for covering many different real-world scenarios. Specifically, two different model lines are considered: ControlLogix and MicroLogix. The former is more common in large-scale industrial control systems, while the latter is typically reserved for more budget-limited situations.

To begin a firmware analysis on these controllers, the first step is to obtain a copy of the firmware. Conveniently, Rockwell Automation freely provides firmware updates for download on their website. While the website requires a user to register their information and create an account before downloading any updates, this poses only a minor inconvenience to the casual investigator: anyone can easily create an account, using false information if necessary. To add an additional layer of complexity, vendors could also require the user to register a valid serial number for their target device, but again this provides only a mild deterrent. Any determined actor with sufficient funding could simply buy the product to acquire a serial number. Indeed, possession of an identical PLC model to use as a reference device is a practical prerequisite for learning much about the device. Alternatively, if the desired firmware were not as easily available as with these PLCs, one may need to rely on the forensic techniques discussed in Section 5 to retrieve the firmware from the device itself.

After the firmware copies have been obtained, the IDA Pro disassembler is used to analyze the binary code. In order for IDA to interpret a binary image, a processor type must be specified. Beginning with the MicroLogix 1100 PLC, the main processor type can be determined using various techniques. First, a thorough review of official documentation for the system may hint at the underlying processor; however, this was not the case for the MicroLogix. Without physical access to the system, a manual analysis of the raw binary data in the firmware image could also reveal common byte sequences indicative of a particular processor (a small scan for such patterns is sketched below). On the other hand, physical possession of a reference PLC allows for the physical disassembly of the device. A visual inspection of the MicroLogix 1100 main board reveals that the device uses a Freescale Coldfire processor.
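As a rough illustration of scanning a raw image for architecture-specific byte sequences, the sketch below searches for a common Coldfire/68k function prologue. On these parts the link A6 instruction frequently opens a function and encodes as the big-endian bytes 0x4E 0x56, but the pattern, the file name, and the alignment assumption are illustrative; any real hits are only candidates until confirmed in a disassembler.

    # Sketch: scan a raw firmware image for candidate function prologues.
    # Assumes functions commonly begin with LINK A6,#imm (bytes 4E 56 on
    # Coldfire/68k); the image file name is hypothetical.

    PROLOGUE = b"\x4e\x56"

    def find_candidates(image: bytes, pattern: bytes = PROLOGUE):
        """Return even-aligned offsets where the prologue pattern occurs."""
        hits, pos = [], image.find(pattern)
        while pos != -1:
            if pos % 2 == 0:        # 68k/Coldfire instructions are 2-byte aligned
                hits.append(pos)
            pos = image.find(pattern, pos + 1)
        return hits

    if __name__ == "__main__":
        with open("micrologix_os.bin", "rb") as f:   # hypothetical image name
            image = f.read()
        for off in find_candidates(image)[:20]:
            print(f"possible function start at offset 0x{off:06x}")

A dense, regular spacing of hits is itself weak evidence that the processor guess is right; a wrong architecture tends to produce scattered, implausible matches.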



Once the processor type is specified in IDA, it will load the firmware binary, but without an entry point address IDA refuses to attempt a disassembly analysis. Fortunately, Santamarta (Santamarta 2011) provides a simple IDA script that, given a known byte pattern marking the start of functions, scans a loaded binary for all occurrences of the pattern and performs a code analysis on each function. In the case of the MicroLogix's Coldfire processor, the link assembly instruction is often used to initialize the stack at the beginning of a function. With this slight modification of the search pattern, Santamarta's script is used to have IDA analyze most of the MicroLogix firmware.

When dealing with the ControlLogix system, the disassembly process becomes slightly more complicated. Analysis of firmware for the 1756-ENBT Ethernet module of the ControlLogix PLC is hampered by sections of compressed binary data, as well as by the fact that, unlike the MicroLogix firmware, the 1756-ENBT firmware is not based at 0x0 (Peck 2009). Using the Deezee tool by Matasano, zlib-compressed sections of the binary are discovered and analyzed to learn the base address of 0x00100000. Finally, Peck uses trial and error to discover that the processor running in the module is a PowerPC and, upon rebasing the code, gains a useful IDA analysis of the disassembly.

At this point, the target firmware can be further analyzed to determine exactly what portion should be modified to meet the goals of the custom firmware. However, simply making the desired change to the firmware will result in an invalid binary image. Each firmware image has a header containing checksums intended for validation purposes. For the ControlLogix Ethernet module, analysis of the disassembly uncovers a simple checksum algorithm that sums every 2 bytes to create a 2-byte checksum (Peck 2009). The header contains both a checksum of the header itself and a checksum of the rest of the firmware image. By recalculating these checksums after modifying the firmware and correctly updating the header, a modified firmware can easily be uploaded to the ControlLogix Ethernet module. A sketch of this recalculation step appears at the end of this section.

So far, no such checksum algorithm has been found in the firmware image of the MicroLogix 1100. After comparing different firmware versions and using a trial-and-error method of uploading various firmware modifications, results indicate that, as with the ControlLogix Ethernet module, there exist both a header checksum and a body checksum in the header; however, the method of their calculation is currently unknown. The MicroLogix controller also contains a boot firmware image separate from the operating system firmware image being analyzed. It is likely that the checksum algorithm is only present in this boot firmware. Unfortunately, the boot firmware image is not openly available from Rockwell Automation like the OS firmware. This means that finding the checksum algorithm would require retrieving the boot firmware image directly from the PLC using one of the forensic techniques described in Section 5.
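The checksum repair step can be sketched as follows. The 16-bit additive sum matches the algorithm Peck describes for the 1756-ENBT (summing every 2 bytes into a 2-byte value), but the header length, the field offsets, and the big-endian word interpretation used here are illustrative assumptions; a real image's layout must be recovered from the disassembly.

    # Sketch of recomputing a 16-bit additive checksum over a modified
    # firmware image. Header offsets and byte order are assumptions.

    import struct

    def checksum16(data: bytes) -> int:
        """Sum the input two bytes at a time, keeping the low 16 bits."""
        if len(data) % 2:
            data += b"\x00"          # pad odd-length input to whole words
        total = 0
        for (word,) in struct.iter_unpack(">H", data):
            total = (total + word) & 0xFFFF
        return total

    def patch_checksums(image: bytearray, hdr_len: int = 0x40) -> None:
        # Hypothetical layout: body checksum at offset 0x10, header
        # checksum at 0x12; both fields are zeroed before recomputation.
        image[0x10:0x14] = b"\x00\x00\x00\x00"
        struct.pack_into(">H", image, 0x10, checksum16(image[hdr_len:]))
        struct.pack_into(">H", image, 0x12, checksum16(image[:hdr_len]))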

4. Threat assessment and defense

As demonstrated, the difficulty and level of effort required to create and upload custom firmware to a PLC can vary widely depending on the target system. A successful custom firmware upload on the MicroLogix 1100 requires more research to determine the correct method of checksum calculation. Conversely, a custom firmware upload to a ControlLogix Ethernet module is relatively straightforward and has been accomplished. Regardless of the relative differences in difficulty between these two devices, it is significant that the only protection either system provides against custom firmware uploads, besides obscurity, is a simple checksum in the image header. Furthermore, the purpose of these checksums is only to provide a method of validation; they are not intended to be secure, but rather to protect against accidental corruption of the firmware. There is in fact no intentional security protecting these devices from malicious firmware updates.

Higher-end PLCs, like the ControlLogix, typically feature physical key switches used to control the operating mode of the PLC. Without this switch being physically turned to the programming mode using the key, it is not possible to upload new firmware. While this does provide an additional layer of protection against unintended firmware updates, the feature is primarily intended to prevent accidental changes to the system while it is running. It would not stop an operator from intentionally uploading malicious firmware in the belief that it is a legitimate update. Similarly, some newer systems are starting to require password authentication in order to upload new firmware, but again this only protects against unintentional firmware updates, and it is possible that the authentication could be circumvented altogether (Santamarta 2012).



The first mode of defense against malicious firmware updates requires action on behalf of the PLC vendors to provide better security by design. As mentioned, mode keys and authentication mechanisms are beneficial and should be implemented, but a more secure method to protect against intentional updates with malicious firmware is to begin digitally signing firmware images. In theory, attackers should not be able to forge a digital signature, and the PLC should reject any update without a valid signature. Unfortunately, vendor adoption of such practices may take a significant amount of time to reach the customer. Development time on the vendor's side is partially to blame, but expectations of long product lifespans for existing devices may also result in slow adoption rates.

The second mode of defense involves using external device protection that is independent of the PLCs themselves. This would take the form of an add-on device, most likely sitting in line between the PLC and the rest of the network, providing protection against malicious updates. An example of such a device is presented by McMinn (McMinn 2012) as a firmware verification tool. This tool compares a firmware update as it is transmitted to the PLC against a known-good firmware, detecting bad firmware updates. While this solution may not be as desirable as having digitally signed firmware images, it would be much quicker to deploy and would still provide an adequate layer of protection; a minimal sketch of the comparison step appears at the end of this section.

The third mode of defense is to have a network-based detection and protection mechanism. This can be thought of as an intrusion detection system (IDS) or intrusion prevention system (IPS) specifically capable of detecting and preventing illegitimate attempts to update firmware on PLC devices. Such functionality could be implemented as a stand-alone network monitoring system or could be integrated into a preexisting SCADA-specific IDS/IPS. Current research into IDS technologies specifically designed for SCADA systems (Carcano 2011, Verba 2008) could be augmented to include firmware update detection and authentication.
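The comparison step of the second defensive mode can be sketched simply: hold digests of vendor images recorded in advance, and block any captured update that does not match. This is not McMinn's implementation, only the core idea; capturing the update off the wire is out of scope, and the file names, version strings, and digest value are hypothetical placeholders.

    # Sketch: verify a captured firmware update against a known-good
    # reference digest before it is allowed to reach the PLC.

    import hashlib

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    KNOWN_GOOD = {
        # firmware version -> digest of the vendor image, recorded in advance
        "20.11": "0f9c...",   # placeholder digest, not a real value
    }

    def verdict(update_path: str, claimed_version: str) -> str:
        expected = KNOWN_GOOD.get(claimed_version)
        if expected is None:
            return "BLOCK: unknown firmware version"
        if sha256_of(update_path) != expected:
            return "BLOCK: image does not match known-good digest"
        return "ALLOW"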

5. Forensics

The National Institute of Standards and Technology (NIST) Guidelines on Cell Phone Forensics provides an excellent overview of methods for obtaining memory captures from obstructed embedded devices (Jansen 2007). While the NIST document focuses on cell phones as the obstructed device, the term and its implications can be equally applied to a PLC. Although some details of the approaches discussed will differ for PLC evidence acquisition, many of the same methods remain relevant to PLCs. It should be noted that during a full forensic investigation, the ladder logic programming present on the PLC also has significant forensic value. In cases where the programming interface cannot be trusted (e.g., Stuxnet), some of the following methods could be used to recover the ladder logic as well.

5.1 Software-based methods

Software backdoors are means of gaining control over a system by taking advantage of access mechanisms intended to provide low-level access to system developers or maintainers, left over from development or improperly secured. One type of backdoor common in PLCs is the hard-coded password. Recent research has shown that PLC developers are notorious for leaving hard-coded passwords in their devices (Beresford 2011, Santamarta 2011, Zetter 2010). These hard-coded passwords can be used to access the system and perform actions a typical user should not be able to perform. Depending on the software design of the firmware, this may include read access to the memory where the firmware resides.

Software debugging functionality built into a system is another type of software backdoor. In order to test the firmware, system developers may include undocumented debugging features and functions in the firmware that allow direct memory access. Unfortunately, there is no way to know whether any such features exist without reverse engineering the firmware code. However, if such an effort uncovers a backdoor, it could be a viable method to extract a firmware image from the PLC.

Another software-based option involves exploiting a vulnerability on the PLC in order to dump the firmware image. This method is similar to the jailbreaking method used on iPhones (Halbronn 2010). At the strategic level, the goal of this method is to gain control over the execution path of the device and instruct it to output the contents of the non-volatile memory containing the firmware image. As an example, a common tactic to achieve execution control is to find and exploit a buffer overflow vulnerability. Finding such an exploitable vulnerability involves much trial and error; therefore, a test system identical to the target device is required. This provides a platform for testing that does not affect the actual target device, yet responds to input in an identical manner. Various tactics may be used to search for a vulnerability, such as input fuzzing (sketched below) or manually searching the disassembled firmware for vulnerable buffers. However, there is no single proven and reliable method for finding vulnerabilities. Due to the highly unique combination of characteristics of any given target device, and because any vulnerability exploited by this method may be patched if it is publicly disclosed, this process must likely be repeated for every case.

The main advantage of software-based methods is the ability to obtain a firmware image remotely without physical access to the device. In addition, it may also be possible to obtain the image from a live system while minimizing or avoiding any interruption to the operational objectives of the system. This could prove quite advantageous in a system controlling critical services where the target device cannot be shut down without significant cost or damage. The disadvantage of these methods is their inherent need to be executed on the device, which may adversely affect forensic fidelity and completeness.
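A minimal input-fuzzing loop of the kind mentioned above might look like the following sketch. The address, port, and probe format are hypothetical, and, as the text stresses, this should only ever be pointed at an expendable reference device, never at a production PLC.

    # Minimal sketch of input fuzzing against a reference PLC: send
    # length- and content-mutated packets, watch for loss of response.
    # Target address, port, and payloads are illustrative only.

    import random
    import socket

    TARGET = ("192.0.2.10", 44818)   # documentation address; port illustrative

    def probe(payload: bytes, timeout: float = 2.0) -> bool:
        """Send one payload; return True if the device still responds."""
        try:
            with socket.create_connection(TARGET, timeout=timeout) as s:
                s.sendall(payload)
                return bool(s.recv(1024))
        except OSError:
            return False

    random.seed(1)                   # reproducible mutation sequence
    for trial in range(1000):
        payload = bytes(random.randrange(256)
                        for _ in range(random.randrange(1, 4096)))
        if not probe(payload):
            print(f"trial {trial}: no response; saving candidate crash input")
            with open(f"crash_{trial}.bin", "wb") as f:
                f.write(payload)
            break

Real fuzzers mutate valid protocol messages rather than pure random bytes, which reaches deeper code paths; the random version here only conveys the test-and-observe loop.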

5.2 Hardware-based methods

Joint Test Action Group (JTAG) is the name commonly used to reference the Institute of Electrical and Electronics Engineers (IEEE) Standard 1149.1, Standard Test Access Port and Boundary Scan Architecture (IEEE 2001). JTAG is a standard used for hardware debugging of circuits and, for the purposes of forensics, can be used as a type of hardware backdoor to the system. Typically, memory chips themselves will not be enabled for a debugging protocol like JTAG, but if the target system has a JTAG-enabled processor, it is possible to use this interface to dump an image of the firmware (Breeuwsma 2006). Unfortunately, since JTAG is an optional standard, not every processor will have JTAG access ports readily available; in the worst case, JTAG may not be implemented at all for a particular device. A thorough search of the documentation should be performed to determine whether there is any indication of JTAG compatibility; however, it is not uncommon for manufacturers to omit any mention of debugging capabilities in official documentation. A lack of JTAG documentation does not necessarily mean a lack of implementation, but it does indicate that finding the access ports may be difficult. Fortunately, Breeuwsma describes some particularly unique features of JTAG pins that can be used to identify the access ports (Breeuwsma 2006). Once the ports are identified, JTAG testing equipment can be used to directly read memory locations on the board (Breeuwsma 2007).

Barring the success of an image capture using JTAG, another alternative is to perform an independent chip analysis. This method involves physically removing the flash chip containing the firmware from the circuit board in order to read it directly. Breeuwsma describes the process in three main steps (Breeuwsma 2007). The desired flash chip must first be carefully desoldered to avoid damage, then prepared for further analysis by ensuring the contact points are clean and even. Finally, a universal chip socket and reader software, such as that developed by the Netherlands Forensics Institute (NFI), can be used to access and dump the firmware from the chip.

These hardware-based methods offer the benefit of providing a complete image of the memory while minimizing the chance of altering it, maintaining a high level of forensic integrity (Breeuwsma 2006, Breeuwsma 2007). The major drawback to hardware-based methods is their dependency on physical access to the system. In the case of independent chip analysis, the system must even be dismantled (possibly permanently).

5.3 Black-box and side-channel methods

If the firmware binary cannot be obtained directly, the indirect methods of black-box and side-channel testing may be used to infer what actions are occurring in the PLC. Black-box testing involves systematically manipulating system inputs while measuring outputs to determine what the PLC has been designed to do. This process assumes no knowledge of the underlying software implementation. For this reason, black-box testing is often not as straightforward as it seems; the complexity of the firmware code can lead to many ambiguous input/output combinations during testing. Furthermore, any malicious logic present in the firmware may have been programmed to run only for a limited time, preventing such post-mortem analysis from capturing malicious activities. In spite of these potential disadvantages, black-box testing can still infer a good amount of information regarding the firmware and the system in general. Common condition types tested for during black-box testing include conformance to specification, error recovery, security, performance, and configurability (Koopman 2011). When integrated with a comprehensive incident response plan, black-box testing can be used to measure the current operation of a PLC and help determine the cause of failure of the field device or system. Additionally, this technique is a practical way to confirm normal operation of the device, especially when it is impractical to power it down.

The concept of side-channel attacks is well known in relation to cryptanalysis. The goal is to infer information about the system based on measurements of external factors (the side channels). This concept can also be applied to PLC forensics: by measuring a side channel, it may be possible to learn what operations are occurring inside the system. Some possible side channels worth considering include power (Kocher 1999), timing (Kocher 1996), temperature, and electromagnetic emanations (Agrawal 2003). Investigation using such a side channel could prove useful in gleaning forensic evidence; a minimal timing example is sketched below.
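As one concrete example of a timing side channel, an investigator could repeatedly issue an identical request to a known-clean reference device and to the suspect device, and compare the latency distributions; a consistent shift may hint at extra code executing on the suspect. The sketch below shows only the measurement half; the target address, port, and request bytes are hypothetical.

    # Sketch of a timing side-channel measurement: time repeated
    # identical requests and summarize the latency distribution.

    import socket
    import statistics
    import time

    TARGET = ("192.0.2.10", 44818)   # illustrative address and port
    REQUEST = b"\x63\x00\x00\x00"    # illustrative status-query bytes

    def sample_latencies(n: int = 200) -> list:
        times = []
        for _ in range(n):
            start = time.perf_counter()
            with socket.create_connection(TARGET, timeout=2.0) as s:
                s.sendall(REQUEST)
                s.recv(1024)
            times.append(time.perf_counter() - start)
        return times

    lat = sample_latencies()
    print(f"median {statistics.median(lat) * 1e3:.2f} ms, "
          f"stdev {statistics.pstdev(lat) * 1e3:.2f} ms")

Network jitter dominates at millisecond scales, so in practice many samples and a statistical comparison (rather than a single reading) are required before drawing any conclusion.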

6. Conclusion

With the advancing threat of targeted attacks against cyber-physical ICSs, and CI specifically, a committed focus on ICS security is necessary to protect national interests. As the end devices that control the physical aspects of these systems, PLCs are likely targets of malicious action. Future threats to PLCs will certainly concentrate on low-level software control by maliciously altering firmware. Research into customizing PLC firmware has revealed that few mechanisms protect against alteration. With this understanding, defensive measures must be taken by urging support from hardware vendors and by implementing stopgap measures in existing systems. In addition, adequate post-mortem forensic analysis techniques are required to assess those cyber incidents involving PLCs that do occur. Working towards these goals will provide support for the security of systems critical to the daily operation of the nation.

References

Agrawal, D., Archambeault, B., Rao, J. and Rohatgi, P. (2003) "The EM Side-Channel(s)", Cryptographic Hardware and Embedded Systems - CHES 2002, pp 29-45.
Beresford, D. (2011) "Exploiting Siemens Simatic S7 PLCs", Black Hat USA 2011, Las Vegas.
Breeuwsma, I. (2006) "Forensic imaging of embedded systems using JTAG (boundary-scan)", Digital Investigation, Vol 3, No. 1, March, pp 32-42.
Breeuwsma, M. (2007) "Forensic Data Recovery from Flash Memory", Small Scale Digital Device Forensics Journal, Vol 1, No. 1, June, pp 1-17.
Carcano, A. et al. (2011) "A multidimensional critical state analysis for detecting intrusions in SCADA systems", IEEE Transactions on Industrial Informatics, Vol 7, No. 2, pp 179-186.
Falliere, N., Murchu, L. and Chien, E. (2011) "W32.Stuxnet Dossier", Symantec Corp., Cupertino, February.
Halbronn, C. and Sigwald, J. (2010) "iPhone security model & vulnerabilities", [online], Hack in the Box Security Conference 2010, http://reverse.put.as/wp-content/uploads/2011/01/D2T1-Cedric-Halbronn-and-Jean-Sigwald-iPhone-Security-Model.pdf.
Institute of Electrical and Electronics Engineers (2001) IEEE 1149.1-2001 Standard Test Access Port and Boundary-Scan Architecture, New York: IEEE.
Jansen, W. and Ayers, R. (2007) "Guidelines on Cell Phone Forensics", National Institute of Standards and Technology, Special Publication 800-101, May.
Kocher, P. (1996) "Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS, and Other Systems", Advances in Cryptology - CRYPTO '96, pp 104-113, Springer, Berlin/Heidelberg.
Kocher, P., Jaffe, J. and Jun, B. (1999) "Differential power analysis", Advances in Cryptology - CRYPTO '99, Springer, Berlin/Heidelberg.
Koopman, P. (2011) "Embedded Software Testing", [online], Carnegie Mellon University, http://www.ece.cmu.edu/~ece649/lectures/08_testing.pdf.
Langner, R. (2011) "Stuxnet: Dissecting a Cyberwarfare Weapon", IEEE Security and Privacy, Vol 9, No. 3, June, pp 49-51.
McFadden, F. and Arnold, R. (2010) "Supply chain risk mitigation for IT electronics", 2010 IEEE International Conference on Technologies for Homeland Security (HST), IEEE, Waltham, pp 49-55.
McMinn, L. and Butts, J. (2012) "A Firmware Verification Tool for Programmable Logic Controllers", paper presented at the 6th IFIP WG 11.10 International Conference on Critical Infrastructure Protection, Washington, D.C., March.
Peck, D. and Peterson, D. (2009) "Leveraging Ethernet Card Vulnerabilities in Field Devices", paper presented at the 2nd SCADA Security Scientific Symposium, Miami Beach, Florida, January.
Santamarta, R. (2011) "Reversing Industrial Firmware for Fun and Backdoors I", [online], Reversemode, http://reversemode.com/index.php?option=com_content&task=view&id=80&Itemid=1.
Santamarta, R. (2012) "Project Basecamp - Attacking ControlLogix", report for the 5th SCADA Security Scientific Symposium, Miami Beach, Florida, January.
Stouffer, K., Falco, J. and Scarfone, K. (2011) "Guide to Industrial Control Systems (ICS) Security", National Institute of Standards and Technology, Special Publication 800-82, June.
Tehranipoor, M. et al. (2011) "Trustworthy Hardware: Trojan Detection and Design-for-Trust Challenges", Computer, Vol 44, No. 7, July, pp 66-74.
Verba, J. and Milvich, M. (2008) "Idaho National Laboratory supervisory control and data acquisition intrusion detection system (SCADA IDS)", 2008 IEEE Conference on Technologies for Homeland Security, May, pp 469-473.
Zetter, K. (2010) "SCADA System's Hard-Coded Password Circulated Online for Years", Wired, July.



Top-Level Goals in Reverse Engineering Executable Software

Adam Bryant 1, 2, Robert Mills 2, Michael Grimaila 2 and Gilbert Peterson 2
1 Riverside Research, Beavercreek, Ohio, USA
2 Air Force Institute of Technology, Wright-Patterson AFB, Ohio, USA
adambryant11@gmail.com
robert.mills@afit.edu
michael.grimaila@afit.edu
gilbert.peterson@afit.edu

Abstract: People perform reverse engineering to discover vulnerabilities, to understand how attackers could exploit vulnerabilities, and to determine ways in which vulnerabilities might be mitigated. People reverse engineer executable programs to determine the structure, function, and behavior of software from unknown provenance that may not be trustworthy or safe to use. Reverse engineering also allows the investigation of malicious code to understand how it works and how to circumvent self-protection and stealth techniques used by malware authors. Finally, reverse engineering can help engineers determine how to interface with legacy software that only exists in executable form. Although each of these applications of reverse engineering provides part of an organization's defensive knowledge of their information systems, there has been relatively little work in understanding the human factors involved with reverse engineering software from executable code. Consequently, reverse engineering work remains a highly specialized skill, and many reverse engineering tools are difficult for analysts to use. To better understand the human factors considerations of reverse engineering executable software, we conducted semi-structured interviews with five nationally-renowned subject matter expert reverse engineers and analyzed the verbal data from the interviews using two analysis approaches. We used thematic analysis techniques borrowed from educational psychology to investigate themes from the interview responses, first at the idea level, then at the sentence level. We decomposed the responses into a set of main goals that we describe in this paper.

Keywords: reverse engineering, binary analysis, cognitive task analysis, knowledge engineering

1. Introduction

This paper describes a semi-structured interview study to elicit the top-level goals involved in how reverse engineers make sense of executable programs. We wanted to learn how to connect the low-level details involved in reverse engineering software from executable representations with the higher-level concepts and processes that reverse engineers refer to when talking about reverse engineering work. To connect these details, we interviewed subject matter expert (SME) reverse engineers, analyzed the text data using two separate approaches, and decomposed the primary goals involved in reverse engineering work. The resulting decomposition provides a conceptual framework to organize further efforts in developing cognitive supports for the "DigR" reverse engineering tool suite and to provide the structure for a focused and effective reverse engineering training course aimed at training today's cyber security workforce.

2. Background

Reverse engineering primarily involves "making sense" of a program (Tilley 1998). The act of making sense is a human activity involving integrating knowledge, conjecture, and inference; connecting inference and observations; explaining ambiguous observations; restricting inferences; and iteratively modifying mental models and data from the environment (Klein et al. 2007; Pirolli and Card 2005; Zhang et al. 2009). In most domains, these consist of a number of "bottom up" and "top down" processes aimed at developing a case, connecting evidence, and searching for information (Pirolli and Card 2005). In reverse engineering program executables, the fundamental activity involves refining and using a mental model to understand information in the environment (the code) (Bryant et al. 2011). The process of developing a mental model in a reverse engineering task can be thought of as involving seven steps (Bryant et al. 2012):

Creating goals – determining a mental representation of attributes to represent a goal (Belkin 1980; Quesada et al. 2005)

Planning – connecting the current state of the environment to another state through sequences of actions (Newell and Simon 1972)




Carrying out a plan – following a sequence of actions to change the state (VanLehn 1989)

Sensing information – perceiving and reacting, monitoring the environment for changes, and keeping track of progress (Fu and Pirolli 2007)

Interpreting information – connecting conceptual meanings to elements in the environment (Rajlich 2009; Tilley 1998)

Updating the mental model – adding new knowledge, modifying existing knowledge, and changing relationships between pieces of knowledge (Rumelhart and Ortony 1977; Zhang et al. 2009)

The idea of reverse engineering as the "modification of a mental model" is not new. There have been a number of conceptual models of source code comprehension. In some, the process has been viewed as "abstraction" from low-level source code to a high-level mental model (Gannod and Cheng 1999; von Mayrhauser and Vans 1994). Biggerstaff and others (1994) presented reverse engineering as the assignment of concepts to program locations, as have others (Rajlich 2009; Duala-Ekoko and Robillard 2007). Still others see reverse engineering activities as the recognition of "plans" intended by the developer of the software (Quilici and Woods 1998; Allemang 1991; Soloway and Ehrlich 1984).

In each of these descriptions of reverse engineering, the primary artifact has been source code rather than a disassembled executable. However, reverse engineers tend to work with disassembled executable code instead of source code (Eilam 2005; Hoglund and McGraw 2005). As Song and others (2008) have discussed, reverse engineering executable programs is very different from comprehending programs from source code. Executable programs are more complex, there are few if any high-level semantics to help the reverse engineer construct meaning, the reverse engineering activity requires a "whole-system view," and many programs are obfuscated or otherwise protected to make analysis more difficult. Additionally, reverse engineering executable software requires skill in using specialized reverse engineering tools such as disassemblers, debuggers, import table reconstructors, unpackers, deobfuscators, and hexadecimal editors (Eilam 2005; Canzanese et al. 2005). It also requires knowledge of assembly language, operating system calls, memory and process layout, and attack and defense techniques (Blunden 2009; Skoudis and Zeltser 2003; Szor 2005). For these reasons, we anticipated that the goals involved in reverse engineering would be unique to this type of task and would offer us insight into how to better develop reverse engineering tools and training aids.

The remainder of the paper is laid out as follows: in Section 3, we describe how we collected the interview data and the methods we used to analyze it. In Section 4, we present and describe the resulting organization of top-level goals involved in reverse engineering executable software.

3. Collection and analysis of interview data

We conducted interviews with five SMEs who attended the 2010 DOD Reverse Engineering Workshop sponsored by the Anti-Tamper / Software Protection Technology Office, or who had performed recent reverse engineering research, analysis, or tool development work for that office. A small number of SMEs was used based on the general unavailability of expert reverse engineers of the right experience level. The SMEs were chosen based on recommendations from numerous other reverse engineers across several different organizations about who they would consider "the best reverse engineers in the United States." Each SME had an advanced degree in computer science or a related subject, had six to 12 years of hands-on reverse engineering experience, and had developed large-scale programs to automate reverse engineering tasks. The SMEs were distinct and varied in their geography, training, education, and employment history.

The interview questions were designed to probe the work domains in reverse engineering; goals and activities involved in reverse engineering tasks; decision points and information cues in reverse engineering; specialized knowledge requirements; and the role of automaticity and tacit knowledge in reverse engineering. The questions were written, and the questionnaire was pilot tested with four other reverse engineers with skills in hardware and software analysis to ensure the questions made sense and that the interviews would produce the desired information. The questionnaire and overall study methodology were also vetted by the Wright-Site Institutional Review Board.



The interviews were two hours each, conducted between February and March 2011. The interviewer took notes and recorded the interviews with a mini-cassette recorder. The interview questions were presented with as little variation as possible. The interviewer gave clarification as needed, but provided no additional elaboration in order to keep information from one interview from affecting a subsequent interview. When the interviewer had to clarify responses from the SMEs, clarification was requested with generic prompts such as "what do you mean by ____?"

After the interviews were transcribed, they were analyzed for conceptual content and organization of concepts, and to answer the research questions of the study. The interviewer read through the printed transcriptions several times and took notes to record how each of the SMEs responded to the interview questions, to notice patterns and themes, and to relate themes across the different interviews and questions. The themes from the post-interview notes were compared with the notes taken during the interviews to ensure no themes or important concepts were missed from impressions captured during the interview. The text documents containing the interview transcripts were segmented and coded for analysis.

We analyzed the verbal data from the interviews using two approaches, employing a thematic analysis approach borrowed from methods used in educational psychology (Cohen 2007). In the first method (idea-level analysis), we segmented the verbal data so each segment represented an individual "idea" and provided labels so groups of segments could be easily classified and organized. In the second method (sentence-level analysis), we re-segmented the same interview data so that each segment represented an individual sentence or phrase, and added a set of labels to each sentence to represent themes. Using the results from these approaches, we organized the primary themes into a set of top-level goals that are described in Section 4. These goals provide a way to organize information needs and interface requirements that will contribute to the development of a reverse engineering tool called "DigR" being developed at Riverside Research, as well as the development of a reverse engineering training course customized to the needs of new cyber operators. The organization of domains and goals was re-verified with the subject matter expert reverse engineers and adjusted according to their comments.

4. Results and discussion

The SMEs described reverse engineering tasks as involving software, hardware, and firmware. Within software reverse engineering, they discussed several different arenas of software where reverse engineering is performed, including web and network applications, desktop applications, documents containing software, libraries and DLLs, embedded systems, and system-level software. The SMEs primarily differentiated software reverse engineering according to the purposes for which reverse engineering is conducted. This line of differentiation was the most prominent in the interviews, and breaks down into four major categories:

Vulnerability discovery

Malicious software analysis (including looking for rootkits and backdoors)

Software protection analysis

Reverse engineering unprotected software

Across all of the interviews, the salient feature that separated software reverse engineering from other activities was that software reverse engineering involves reading programs from assembly code rather than source code. For instance, the SMEs explicitly excluded network penetration testing (a related cyber security activity) and looking for vulnerabilities in source code from consideration as reverse engineering: while these activities involve similar types of problem-solving, they do not involve reading assembly language code.

The SMEs discussed their approaches to problem-solving in reverse engineering, their goals, and how their goals affected their understanding of the programs they reverse engineered. All the reverse engineers described a number of particular problems they remembered from experiences they had reverse engineering programs. Many descriptions involved difficult challenges in breaking software protections, deobfuscating program code, or getting access to the instructions of encrypted or packed programs to enable them to begin to understand the program.

Table 1: Top-level goals in reverse engineering

Goal
Understand the purpose of analysis
Finish the analysis quickly
Discover general properties of the program
Understand how the program uses the system interface
Understand, abstract, and label instruction-level information
Understand, abstract, and label the program's functions
Understand how the program uses data
Construct a complete "picture" of the program

The SMEs also discussed approaches used to understand the program and to make sense of “what the program does” in the context of their goals. While reverse engineering tasks contain many different goal structures that can be hierarchically composed, this paper focuses on the common top‐level goals elicited from the interviews (Table 1). These goals apply to all four domains, in that each domain requires the understanding of unprotected executable software.

4.1 Understand the purpose of analysis

Since the properties that are important depend upon the purpose for which analysis is conducted, the SMEs expressed that determining that purpose is itself an important goal. One of the SMEs commented: "If you don't start with a specific question your goals will be aimless. The question you have also drives other questions you have to answer as you go through the process." For example, if the output of the reverse engineering effort is a report describing the behaviors of a program, the goals of analysis are constrained to those which will help the reverse engineer gain information that relates to that goal. When goals are constrained in this way, the reverse engineer can focus efforts on those activities that will help provide information about the program's behaviors rather than other, less relevant information. The SMEs indicated that they commonly ignore large parts of programs that are not directly related to their analysis objectives. In order to save time, the entire program cannot be investigated and analyzed, so they focus on those parts of the program that will provide them the most benefit. In this respect, the desired output of the task drives the goals of analysis, which in turn drives the overall direction in which analysis proceeds.

4.2 Finish the analysis quickly

Though it seems like more of a constraint than a goal, all of the SMEs explicitly described the constant need to complete analysis tasks as quickly as possible. Reverse engineering a program is manpower intensive, so it can be an expensive way for an organization to find out information about a program. Reverse engineers have a strong motivation to stay focused on achieving the overall goal of finishing the reverse engineering task and to avoid distractions. In fact, finishing the task quickly was considered by the SMEs to be more important than understanding the program in extensive detail. They described making decisions about the value trade-off, where more time spent in analysis may not provide better value to the sponsor that is paying for them to reverse engineer the target program. The goal of finishing the task quickly leads to the selection of strategies that can accomplish the task rather than those that are slower but provide richer information or better understanding of the program. Finishing quickly also means that in practice reverse engineers constantly try to find faster ways of performing effective analysis, breaking protections, automating repetitive tasks, and generating value for their customers.

4.3 Discover general properties of the program

The SMEs mentioned that the main goal for each type of reverse engineering task was to discover as much as possible about the program. For a small program, this means identifying all of the program's behaviors for all possible inputs. However, the state space of a program grows (sometimes exponentially) as more decision procedures are added to the program's code. This means the goal for studying larger programs is to understand the most important aspects of a program's behavior given the most relevant inputs to the program. One of the ways to quickly gather information about a program is by looking at its general observable properties, such as its file size, the size of the sections of the program that are mapped into memory, the names of the sections, whether or not the file's header is well-formed, and any text strings in the program. This information provides "quick and dirty" approaches to quickly narrow down what needs to be investigated in the program; a minimal triage script along these lines is sketched below.
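The following is a minimal sketch of such a first-look triage, using only the Python standard library: report the file size, check for a well-formed header (here just the DOS "MZ" magic of a Windows executable), and pull printable strings. The sample file name and string-length threshold are illustrative assumptions.

    # Sketch of a "quick and dirty" first look: size, header sanity,
    # and printable strings. The target file name is hypothetical.

    import os
    import re
    import sys

    def triage(path: str, min_len: int = 5) -> None:
        with open(path, "rb") as f:
            data = f.read()
        print(f"size: {os.path.getsize(path)} bytes")
        print(f"MZ header present: {data[:2] == b'MZ'}")
        strings = re.findall(rb"[ -~]{%d,}" % min_len, data)
        for s in strings[:25]:          # show only the first few strings
            print("string:", s.decode("ascii"))

    if __name__ == "__main__":
        triage(sys.argv[1] if len(sys.argv) > 1 else "target.exe")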

4.4 Understand how the program uses the system interface

Other properties of the program are more complex, such as how a program uses the system's interface. In order for programs to perform any tasks on a system, they typically have to make programmatic requests through the operating system's system call interface. Functionality extended through the system call interface includes I/O functionality like video buffer write operations or file system read/write operations. The SMEs described looking at the library calls that a program imports and the system functions that it uses to form a mental model about what the program does, sometimes before ever stepping through the code or watching it execute. System calls provide information that allows a person to explore the behaviors of a program, or they can be used to generate a hypothetical explanation of what behaviors the program might perform, which can then be looked for in the program's code. The SMEs described getting information about how the program uses the system interface by examining the import tables, scrolling through the program looking for system calls that the debugger or disassembler identifies, and by hooking the system APIs and letting the program run in order to discover sequences of system calls used by the program. A short sketch of the import-table approach follows.
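The static import-table examination the SMEs describe can be automated with an off-the-shelf parser. The sketch below assumes the third-party pefile package (pip install pefile) and a hypothetical sample name; in practice this static view would be combined with the dynamic API-hooking approach mentioned above.

    # Sketch: enumerate a Windows executable's imported DLLs and
    # functions to form a first hypothesis about its behavior.

    import pefile   # third-party: pip install pefile

    pe = pefile.PE("target.exe")    # hypothetical sample name
    for entry in getattr(pe, "DIRECTORY_ENTRY_IMPORT", []):
        dll = entry.dll.decode("ascii", "replace")
        for imp in entry.imports:
            name = imp.name.decode() if imp.name else f"ordinal {imp.ordinal}"
            print(f"{dll}: {name}")

Even this crude listing supports the hypothesis-first workflow described above: imports from networking or cryptography libraries, for example, immediately suggest behaviors to go look for in the code.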

4.5 Understand, abstract, and label instruction-level information
The SMEs indicated that another important process in understanding programs is examining the instructions inside functions in order to assign meaning to patterns of instruction sequences. The SMEs described understanding sequences of assembly instructions by tracing data values as they moved through a program's execution, or by translating assembly instructions into a higher-level programming language syntax or into pseudocode, either mentally, on paper, or in a text editor. Analyzing instruction sequences helps reverse engineers understand how the code inside a function works, which in turn helps them better understand the function. Once the behavior of a sequence of instructions is specified and understood, the details of that sequence can be abstracted away and replaced with a meaningful symbol or label to represent the behavior of the instruction sequence in the person's memory. Symbols or labels for sequences of instructions mean that a sequence does not have to be re-processed each time it is encountered. Instead, a reverse engineer can group a sequence of instructions under the label "decryption routine," and can then refer to that sequence by name, as if it were its own entity with its own attributes and behaviors.
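A sketch of the labeling idea using the Capstone disassembler (our choice of tool, not one mentioned in the paper); the byte sequence and base address are hypothetical stand-ins for a routine of interest.

```python
from capstone import Cs, CS_ARCH_X86, CS_MODE_32  # pip install capstone

# push ebp; mov ebp, esp; xor eax, eax; pop ebp; ret
CODE = b"\x55\x89\xe5\x31\xc0\x5d\xc3"

md = Cs(CS_ARCH_X86, CS_MODE_32)
for insn in md.disasm(CODE, 0x401000):
    print(f"0x{insn.address:x}\t{insn.mnemonic}\t{insn.op_str}")

# Once the sequence is understood, it can be collapsed to a single symbol
# so it need not be re-read instruction by instruction on later encounters.
labels = {0x401000: "return_zero_stub"}
```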

4.6 Understand, abstract, and label the program's functions
Another theme from the interviews was that reverse engineers analyze a program's functions and subroutines to determine the behaviors of the program. Many programs are written using functions and subroutines, and much of the functional structure of a program is preserved when programs are compiled from source code into machine code. For instance, when a program module performs a CALL instruction to another area of the program, the program executes until it comes to a RETN instruction and returns control flow to the originating program module, typically with a return value stored in the EAX register (depending on the calling convention). Reverse engineers form mental models of how program control flow works by dividing the instructions of a program into meaningful basic blocks, often using tools that display graphical representations of the code's control flow. Many analysis tools such as IDA (Hex-Rays 2012) perform this analysis automatically to present graph-based views of code to the person using the tool. More advanced features can incorporate dynamic information from execution traces as well (Figure 1).

Figure 1: Enhanced graphical view in DigR

When a reverse engineer understands what a function does, it becomes meaningful to look at patterns of function calls in the program. Patterns in the ordering of function calls can move analysis tasks from concerns about syntax to concerns about the functional and behavioral-level aspects of programs. Additionally, reverse engineers can gain information about how functions interact with each other, such as how functions pass arguments and return values back and forth. By understanding the relationships in how functions call each other in a program, reverse engineers get a better understanding of the roles the different modules of the program perform. The SMEs indicated that it often requires top-down knowledge about the problem domain (such as malicious software analysis or vulnerability discovery) to make sense of how functions work in the context of the domain.
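A minimal sketch, assuming the raw bytes of each function are already available, of how direct call relationships might be collected into a rough call graph; this is illustrative only and is not the DigR implementation.

```python
from collections import defaultdict
from capstone import Cs, CS_ARCH_X86, CS_MODE_32  # pip install capstone

def call_graph(functions):
    """functions: dict of function start address -> raw bytes (assumed input)."""
    md = Cs(CS_ARCH_X86, CS_MODE_32)
    graph = defaultdict(set)
    for addr, code in functions.items():
        for insn in md.disasm(code, addr):
            # Record direct CALLs whose target is an immediate address.
            if insn.mnemonic == "call" and insn.op_str.startswith("0x"):
                graph[addr].add(int(insn.op_str, 16))
    return graph
```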

4.7 Understand how the program uses data
SMEs indicated that understanding how instructions interact with program data can help clarify the functionality a code segment provides. It can also provide insight into data structures that might be used in the program. For instance, if several instructions read and write memory in a small group of values in the program's heap, it could indicate that the functions belong to what in the source code was a dynamically allocated object instance of a C++ class. A reverse engineer can understand local code by following the flow of data, registers, and memory values forward from a starting point or by tracing the flow back from an ending point. Tracing data forward and backward from points in the program's instruction sequence is a filtering process that helps isolate important instructions from less important ones. Tracing the flow forward from a particular point in the code might be useful to discover how a value in memory changes or to determine which of the subsequent local instructions are relevant to that value. This can serve as a way to filter the instructions down to only those that are relevant. Tracing the flow backwards from an end point shows where a value came from and how it was constructed; it allows determining which instructions are important to a register or memory address having the value that it does. Reverse engineers can also use information about how a program uses data to determine how the program interacts with the outside world. Programs take input from the world in the form of data, which is processed by functions and instruction sequences in the program. This information can be used to determine the control flow of a program, for instance if malicious software "senses" whether it is being run in a virtual machine, or if it is a bot that looks for a certain type of input or set of commands before transferring control to the parts of the program involved in carrying out its behaviors. Also, if a reverse engineer is looking for exploitable vulnerabilities, understanding where and how safely the program handles data that comes from outside the program can help isolate bugs that can be manipulated by an adversary. Knowing that data is from outside of the program involves being able to trace the data from where it was generated to where it is used. Understanding whether or not the program handles the data safely involves understanding how the program uses data as well as how the program "should have" used the data.
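A toy sketch of forward tracing over a simplified (op, dest, src) instruction representation; the three-tuple format is our simplification of real disassembly, used only to illustrate the filtering idea described above.

```python
def trace_forward(instructions, tainted):
    """Return indices of instructions relevant to the tainted values."""
    relevant = []
    for i, (op, dest, src) in enumerate(instructions):
        if src in tainted:           # the value propagates to the destination
            tainted.add(dest)
            relevant.append(i)
        elif dest in tainted and op == "mov":
            tainted.discard(dest)    # destination overwritten: taint is killed
    return relevant

# Example: follow the externally supplied value read into eax.
insns = [("mov", "eax", "input"), ("mov", "ebx", "eax"), ("mov", "eax", "0")]
print(trace_forward(insns, {"input"}))  # -> [0, 1]; the third mov is irrelevant
```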

4.8 Construct a complete "picture" of the program
The SMEs discussed "building a complete picture of the program." This "picture" of the program represents a "mental model" or "situation model" of a program. The SMEs discussed the complete picture of the program as understanding "what the program does," "how the program works," "what the parts of the program are," and "where" the different parts of the program are located in memory. From these descriptions and the other aspects of understanding programs outlined above, the main properties of a situation model or "complete picture" of a program involve:

Program components (functions, subroutines, or sequences of instructions)

Program behaviors (things the program does)

Program functionality (the mechanism of how the parts work)

The SMEs described the activity of switching back and forth between top‐down activities (like understanding functions) and bottom‐up activities (like tracing data through the program) until they come to the complete picture of the program. The complete picture of a program might also involve questions of intent, such as “why did the developer write the program this way?” Interpreting program intent requires one to be knowledgeable about behaviors programs can perform, goals and incentives of the developer, and scenarios where the developer’s intentions could be achieved. Understanding how people interpret the intent of programs (like malicious programs) from assembly language representations is an area for future research.

5. Conclusions
The contribution of this paper is a taxonomy of top-level goals believed to be involved in most software reverse engineering tasks, along with the domains in which these top-level goals apply. The top-level goals of reverse engineering involve understanding the purpose of the analysis, finishing the analysis quickly, discovering general properties of the program, understanding the system interface, understanding instruction-level information, understanding function-level information, understanding how the program uses data, and finally using this information to construct a complete "picture" of the program. These goals are being used to design requirements and patterns of interaction intended to reduce the demands on a user's attention and working memory in reverse engineering tasks. It is anticipated that future engineering work organized around these top-level goals will make it easier for analysts to quickly make sense of programs for the purposes of malicious software analysis, vulnerability discovery, software protection analysis, and understanding unprotected executable programs. While limited in their applicability outside of reverse engineering executable programs, the findings from this study, in concert with a separate observational study of reverse engineers (Bryant et al. 2012) and a case study of conceptual knowledge in reverse engineering (Bryant 2012), are currently being used in a program interaction plug-in for a reverse engineering tool called "DigR" being developed at Riverside Research. The set of top-level goals also provides the structure and the key procedural skills for a reverse engineering course being developed at Riverside Research that focuses on quickly enabling new cyber operators to understand executable programs from assembly language representations.

Acknowledgements
The Sensors Directorate at Wright-Patterson Air Force Base supported this research through its Entrepreneurial Research Fund program. The views expressed in this article are those of the authors and do not reflect the official policy or position of the United States Air Force, Department of Defense, or the United States Government.

References
Allemang, D. (1991). Using functional models in automatic debugging. IEEE Expert. 6(6):13—18.
Belkin, N.J. (1980). Anomalous states of knowledge as a basis for information retrieval. Canadian Journal of Information and Library Science. 5.
Biggerstaff, T.J., Mitbander, B.G., and Webster, D.E. (1994). Program understanding and the concept assignment problem. Communications of the ACM. 37(5):72—82.
Blunden, B. (2009). The Rootkit Arsenal: Escape and Evasion in the Dark Corners of the System. Wordware.
Bryant, A. (2012). Understanding how reverse engineers make sense of programs from assembly language representations. Doctoral Dissertation. Department of Electrical and Computer Engineering, Air Force Institute of Technology.
Bryant, A., Mills, R., Peterson, G., and Grimaila, M. (2011). Software reverse engineering as a sensemaking task. Journal of Information Assurance and Security, 6(6) pp. 483—494.
Bryant, A., Mills, R., Peterson, G., and Grimaila, M. (2012). Eliciting a sensemaking process from verbal protocols of reverse engineers. In Proceedings of the 34th Annual Meeting of the Cognitive Science Society, Sapporo, Japan, Aug 1-4. pp. 1386—1391.
Canzanese, Jr., R.J., Oyer, M., Mancoridis, S., and Kam, M. (2005). A survey of reverse engineering tools for the 32-bit Microsoft Windows environment. Retrieved Jan 2012. https://www.cs.drexel.edu/~spiros/teaching/CS675/
Cohen, L., Manion, L., Morrison, K., and Morrison, K.R.B. (2007). Research Methods in Education. Psychology Press.
Duala-Ekoko, E. and Robillard, M.P. (2007). Tracking code clones in evolving software. Proceedings of the 29th International Conference on Software Engineering. 158—167. IEEE Computer Society.
Eilam, E. (2005). Reversing: Secrets of Reverse Engineering. Wiley.
Fu, W.T. and Pirolli, P. (2007). SNIF-ACT: A cognitive model of user navigation on the World Wide Web. Human-Computer Interaction. 22(4):355—412.
Gannod, G.C. and Cheng, B.H.C. (1999). A framework for classifying and comparing software reverse engineering and design recovery techniques. Proceedings of the Sixth Working Conference on Reverse Engineering. 77—88.
Hex-Rays (2012). The IDA Pro Disassembler and Debugger.
Hoglund, G. and McGraw, G. (2005). Exploiting Software: How to Break Code. Addison-Wesley.
Klein, G., Phillips, J.K., Rall, E.L., and Peluso, D.A. (2007). A data-frame theory of sensemaking. Expertise Out of Context: Proceedings of the Sixth International Conference on Naturalistic Decision Making. 113—155.
Newell, A. and Simon, H.A. (1972). Human Problem Solving. Englewood Cliffs, NJ: Prentice-Hall.
Pennington, N. (1987). Stimulus structures and mental representations in expert comprehension of computer programs. Cognitive Psychology, 19(3):295—341.
Pirolli, P. and Card, S. (2005). The sensemaking process and leverage points for analyst technology as identified through cognitive task analysis. Proceedings of the International Conference on Intelligence Analysis. 2—4.
Quesada, J., Kintsch, W., and Gomez, E. (2005). Complex problem-solving: a field in search of a definition? Theoretical Issues in Ergonomics Science. 6(1):5—33. Taylor & Francis.
Quilici, A. and Woods, S. (1998). Applying plan recognition algorithms to program understanding. Proceedings of the 11th Knowledge-Based Software Engineering Conference. 29—103.
Rajlich, V. (2009). Intensions are a key to program comprehension. Proceedings of the 17th International Conference on Program Comprehension. 1—9. IEEE.
Rumelhart, D.E. and Ortony, A. (1977). The representation of knowledge in memory. In Anderson, R.C., Spiro, R.J., and Montague, W.E. (eds), Schooling and the Acquisition of Knowledge.
Skoudis, E. and Zeltser, L. (2003). Malware: Fighting Malicious Code. Upper Saddle River, NJ: Prentice Hall PTR.
Soloway, E. and Ehrlich, K. (1984). Empirical studies in programming knowledge. IEEE Transactions on Software Engineering. SE-10(5).
Song, D., Brumley, D., Yin, H., Caballero, J., Jager, I., Kang, M., Liang, Z., Newsome, J., Poosankam, P., and Saxena, P. (2008). BitBlaze: A new approach to computer security via binary analysis. In Proceedings of the 4th International Conference on Information Systems Security. 1—25.
Szor, P. (2005). The Art of Computer Virus Research and Defense. Addison-Wesley Professional.
Tilley, S. (1998). A reverse engineering environment framework. Carnegie-Mellon Software Engineering Institute Technical Report.
VanLehn, K. (1989). Problem solving and cognitive skill acquisition. In Posner, M.I. (ed) Foundations of Cognitive Science. MIT Press.
von Mayrhauser, A. and Vans, A.M. (1994). Comprehension processes during large scale maintenance. In Proceedings of the 16th International Conference on Software Engineering, pages 39—48. IEEE Computer Society Press.
Zhang, P., Soergel, D., Klavans, J.L., and Oard, D.W. (2009). Extending sense-making models with ideas from cognition and learning theories. Proceedings of the American Society for Information Science and Technology. 1(45):23—33.



An Investigation of the Current State of Mobile Device Management Within South Africa
Ivan Burke and F. Mouton
Council for Scientific and Industrial Research, Pretoria, South Africa
iburke@csir.co.za
fmouton@csir.co.za
Abstract: In recent years mobile devices have become a critical part of employees' daily lives. Mobile devices have greatly increased the speed at which information can be communicated within an organisation. These devices are continuously improving and offer an increasing number of features to the user. The user is often unaware of the potential risk the device might pose to the organisation. Due to the feature creep of these devices, organisational policies designed to govern them become outdated with each new generation of mobile devices. This paper discusses some of the technological advances that have increased the risk of mobile devices to organisations. It also gives a broad overview of how organisations strive to mitigate these risks by introducing Mobile Device Management policies. For this paper, surveys were conducted to ascertain the current state of Mobile Device Management (MDM) policies within South African organisations. The results of these surveys are presented and the shortcomings of the organisational strategies are discussed. The authors also present a method to determine the prevalence of mobile devices on a network, as well as propose actionable steps that can be added to MDM policies to reduce the risk mobile devices pose to organisational security.
Keywords: mobile device management, bring your own device policies, mobile vulnerabilities

1. Introduction
In recent years mobile devices have become a critical part of employees' daily lives. Mobile devices have greatly increased the speed at which information can be communicated within an organisation. These devices are continuously improving and offer an increasing number of features to the user. The user is often unaware of the potential risk the device might pose to the organisation. Due to the feature creep of these devices, organisational policies designed to govern them become outdated with each new generation of mobile devices. For this paper the authors investigated the current state of mobile device management within South African organisations. This was achieved by means of surveys and interviews conducted with various members of the organisations.

1.1 Scope of research
This study focused on companies that predominantly operate within South Africa. The study also focused on five key industries:

Tertiary education ‐ Due to the high volume of personal information and employee turnover, especially teaching assistants;

Health care services ‐ Due to the high volume of personal information and the sensitive nature of the information;

Military institutions ‐ Due to the critical nature of information captured within their systems;

Governmental research institutions - Due to the high volume of confidential information collected by these types of companies; and

Telecommunication companies - Due to the large volume of data that traverses their systems on a daily basis.

Each of these industries handles a large amount of sensitive data which must be protected to ensure the continued survival of the company. For this paper, several companies within each industry were surveyed.


1.2 Paper layout
Section 2 provides a brief history of the advancements in mobile device technologies. Section 3 describes common mobile device management strategies available to companies. Section 4 provides statistics on the mobile device management strategies of various companies within South Africa. Section 5 provides the reader with information on how mobile device management strategy shortcomings can be addressed. Section 6 concludes with general findings and recommendations.

2. Evolution of mobile device capabilities
The use of personal mobile devices within companies is not a new phenomenon. Mobile devices have been used by employees since the early 1990s. What this paper strives to emphasize, however, is that mobile device security techniques and management policies have not been able to keep up with the rapid development of new features available on mobile devices. Despite the similarities between mobile device security and that of personal computers, there are still several notable differences. For instance, mobile malware can generate a direct income for malware developers by texting premium numbers (La Polla et al., 2012). This type of attack is not possible on traditional personal computers. Furthermore, a mobile device has considerably fewer resources than a personal computer and lacks the processing power to analyse malware in real time. Even if it had the processing power, the power consumption of such operations would deplete the battery of the device considerably. This section looks at the advancements made to mobile devices with regard to communication channels and sensor capabilities.

2.1 Advances in mobile device capabilities and the threats they pose
As the capabilities of mobile devices increase, so too does the threat they pose. When cell phones were first introduced to the market in 1973, they posed a minimal risk due to their large size, low market penetration and limited capabilities. Figure 1 shows how mobile device capabilities have increased over time.

Figure 1: Advances in mobile device capabilities

The first mobile phones introduced by Motorola, in 1973, used first generation (1G) mobile networks to establish communications (Cooper et al., 1973). 1G was unencrypted and susceptible to numerous attacks, such as eavesdropping, cloning and call dropping (DokiSoft, 2011). The advent of Global Systems for Mobile communications (GSM), in 1991, marked the start of the second generation (2G) mobile networks. This new generation of mobile communications offered the user a plethora of new digital services. GSM allowed users to transmit data, digital faxes, e-mails and Short Message Service (SMS) messages (La Polla et al., 2012). With the ability to transmit data, phones could now be used to exfiltrate corporate data via non-corporate networks. GSM is encrypted and as such more secure against eavesdropping, but since the network is not controlled by the corporation, security experts cannot stop data from exiting the organisation (DokiSoft, 2011). GSM is also plagued with several vulnerabilities (Piget, 2010), (Knight, 2011), (Jakhar, 2012). The ability to send e-mails and SMSs also made users susceptible to traditional spam.
General Packet Radio Service (GPRS) was introduced in 2000. GPRS-enabled communication was generally referred to as 2.5G. GPRS allowed for faster data transfer rates and made it possible to communicate over the Wireless Application Protocol (WAP) and send Multimedia Messaging Service (MMS) messages (La Polla et al., 2012). With this increase in connectivity speed, as well as the internet access provided by WAP, the role of Personal Digital Assistants (PDAs) could be performed by mobile phones; this led to the development of smart phones. Smart phones are mobile phones capable of performing computing tasks usually performed by PDAs. PDA operating systems were later adapted to form the first generation of smart phone operating systems (Sager, 2012).
New communication channels such as Bluetooth, Wi-Fi, Long Term Evolution (LTE) and Near Field Communication (NFC) also provide an attacker with alternative means of exfiltrating data or infecting mobile devices. Bluetooth and Wi-Fi vulnerabilities have been well documented in the literature (Rowe & Hurmann, 2004; NetSurity, 2005; National Security Agency, 2008). NFC and LTE are still new, emerging technologies, but Miller (2012) has already published the first public NFC vulnerability. Table 1 captures the data rates at which these communication protocols communicate. At these high speeds it would be trivial to exfiltrate large amounts of data in a single burst, or to have data sent slowly over a longer period without the user noticing.

Table 1: Data transfer rates of mobile protocols
Communication Channel | Transfer Rate
TACS (1G)             | < 3 kb/s
GSM                   | 9.6 kb/s
GPRS                  | 56-114 kb/s
EDGE                  | 384 kb/s
Bluetooth V1.0        | 1 Mb/s
Bluetooth V2.0        | 3 Mb/s
Wi-Fi                 | 300 Mb/s
NFC                   | 300 Mb/s
LTE                   | 75-300 Mb/s
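To put these rates in perspective, a back-of-the-envelope sketch (ours, not from the source) of how long exfiltrating 1 GB would take at two of the rates in Table 1:

```python
def transfer_time_seconds(data_mb, rate_mbps):
    """data_mb: payload size in megabytes; rate_mbps: link rate in megabits/s."""
    return (data_mb * 8) / rate_mbps

print(transfer_time_seconds(1000, 300))     # ~27 s for 1 GB over Wi-Fi
print(transfer_time_seconds(1000, 0.0096))  # ~833,000 s (~9.6 days) over GSM
```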

This section discussed some of the recent advances in mobile technology as well as their associated vulnerabilities. Mobile devices have evolved from a utility item made to aid business into a mobile threat. Several of the capabilities currently available to mobile devices were never envisioned when mobile devices were first introduced to the market. Thus it is highly unlikely that security professionals considered these features and threats when they originally made the decision to allow or disallow mobile devices within the organisation. The next section covers some of the common mobile device management strategies.

3. Mobile device management policies
This section covers two of the most important business policies to consider when allowing employees to use mobile devices within the workplace. It is important to note that these strategies have to evolve over time as the capabilities and functionality of mobile devices increase. Mobile policy management is a continuous process, not a once-off task.

3.1 Bring your own device policy
The trend towards Bring Your Own Device (BYOD) policies has increased in recent years (Willis, 2012). BYOD is a policy whereby an employee is allowed or encouraged to use their personal mobile phones, laptops, tablets and other electronic devices to access corporate services such as email, fax and telecommunications. Figure 2 depicts the results of research conducted by Osterman Research (2012). The graph depicts the percentage of organisations who provide their employees with mobile devices versus those who require employees to provide their own. Based on the graph, one can see that the majority of organisations allow employees to use their own devices for official business. Osterman Research's (2012) study also found that BYOD policies are pervasive and that the number of organisations migrating to BYOD policies is increasing. While conducting their research they also found that:

A Research and Markets study found that 65% of enterprises worldwide will adopt BYOD to some extent by the end of 2012,

An Aberdeen Group study found that 75% of companies permit BYOD,

Equanet reports that 71% of tablets used in a business setting are employee owned,

Some companies are migrating to a completely BYOD approach, such as Cisco, where 100% of mobile devices are provided by employees and not the company itself. (Osterman Research, 2012)

Figure 2: Penetration of mobile devices by ownership (As a % of Users)

LeHong & Jones (2012) state that every business requires an articulated position on BYOD policies. Companies need to make it known to employees what constitutes acceptable usage of mobile devices in the work environment. There are many misconceptions with regard to the risks and benefits of BYOD.

3.1.1 BYOD saves the organisation money
It is a common belief that BYOD reduces organisational expenses, since the employee is responsible for purchasing their own devices. What these organisations fail to take into account is the support costs associated with these policies. If the employee is expected to perform their day-to-day tasks using these devices, the organisation is expected to provide the employees with technical support, as well as potentially providing data bundles for the employees to use their devices. These costs quickly add up, not to mention the costs of potential data leakage and loss of Intellectual Property (IP) if a device gets stolen. (Willis, 2012)

3.1.2 BYOD is a security risk
BYOD in itself does not pose any security risk; on the contrary, BYOD policies help employees better understand the risks of using mobile devices. The BYOD policy should dictate the rights and the responsibilities of the mobile device user; without a policy, neither the organisation nor the employee has any frame of reference from which to judge their mobile device usage habits. (LeHong & Jones, 2012)

3.1.3 The IT department will need to be retrained for BYOD
A common fear of management is that the current IT support staff are ill-equipped to support BYOD. Often this is not the case, as most IT support companies train their employees in a wide range of technical support capabilities. On the other hand, providing training to the IT department may be far cheaper than having each employee be responsible for their own device support, especially when it comes to properly setting up and maintaining device security settings (Osterman Research, 2012).
Setting up a BYOD policy is not as straightforward as it may seem. Employers must keep in mind that, since the device is the personal property of the employee, the employee has a reasonable expectation to use the device for personal needs. This does not just constitute a potential security risk, which might lead to confidential data being leaked, but also brings about issues when an employee leaves the organisation or loses the device. The policy should address the proper mechanisms for reporting the loss or theft of the device. Steps should be taken to ensure that any organisational data is safe and secure on the device, or one should be able to remotely format the device (Willis, 2012). When a device gets replaced or the employee resigns from the organisation, organisational data contained on the device needs to be removed without damaging the personal data on the device. Distinguishing between personal and organisational data can often be a difficult task; hence it is important to address these issues in the media sanitization and disposal policy (Crisp & Terwoeds, 2012). Section 4 will provide information on the current state of South African based organisations with regard to BYOD policies.

3.2 Mobile device management styles
Each organisation will most likely have its own Mobile Device Management (MDM) policy based on its industry regulations, client base and legal constraints. A recent study conducted by Gartner Inc. identified four main MDM styles, which guide the policy development process (Pettey, 2011).

3.2.1 Control-oriented
The main objective of this MDM style is to ensure quality of service and security and to lower costs. Organisations that utilize this management style have very strict control over which devices are permitted within the organisation. The organisation also prescribes what applications may run on the devices and how data ought to be secured on them, e.g. password policies and encryption of data. Due to the complete lockdown on which devices can be used and how, the IT department is capable of fully supporting all prescribed mobile devices and can make full IT support available to employees.

3.2.2 Choice-oriented
User satisfaction is the main objective of this management style. This management style allows users greater choice in their use of mobile devices. The organisation provides less strict guidelines and policies with regard to the mobile devices permitted within the organisation. Typically an organisation cannot support all variants of mobile devices, due to the high support costs this would cause. Usually the organisation provides a list of devices, or device types, that will be supported by the IT department. Employees are welcome to use other devices not on the officially supported list, as long as they conform to the standard policies and guidelines regarding the use of mobile devices within the organisation. This also means that the employee is responsible for their own device security setup and support. IT departments can usually provide guidelines, but since the employee chose not to use the approved devices, the IT department is not obliged to provide support.

3.2.3 Innovation-oriented
The goal is to empower users who want substantial autonomy and are often in roles over which IT has little or no control. Users want to experiment with applications and services, and develop new techniques and processes. They are in charge, and no reasonable device, application or service request can be refused. The IT organization won't abandon responsibility for critical issues such as data privacy and corporate risk; however, the controls will likely be more policy-oriented than technology-oriented. Typical users are independent, often technically sophisticated, and may not want support, but may accept advice and training.

3.2.4 Hands-off
The goal is to take the minimum level of responsibility for mobile devices and services, typically by not providing them. This regime is not about avoiding responsibility, but about finding approaches that mean it is not necessary to take responsibility. It includes concepts such as employee-owned devices and requiring employees to provide their own IT support. Typically, IT has little or no support responsibility for devices, and may relinquish responsibility for many services. Any controls that are necessary will be applied in applications or by policies.
This section covered two of the most basic policies with regard to managing mobile devices within the workplace. In practice, the BYOD policy will form a small part of the overall set of policies which govern MDM within an organisation; other key policies to consider updating include:

Lost laptop policy;

Information protection policy;

Information classification policy;

Remote employee security policy;

Media sanitization and disposal policy;

Password policies;

Acceptable use policy;

Social media policy; and

Records retention policies

The next section contains survey results depicting the state of mobile device management readiness within South African organisations.

4. State of mobile device policies within South Africa
For this paper numerous South African organisations were contacted to complete a survey with regard to their mobile device policies. Table 2 shows the number of companies that responded per industry. The response rate was lower than desired, owing to the nature of the information requested and the confidentiality constraints within these industries.

Table 2: Number of companies participating in survey, per industry
Tertiary education                 | 9
Health care services               | 5
Military institutions              | 3
Governmental research institutions | 5
Telecommunications companies       | 5
Total                              | 27

Figure 3 shows a bar graph of each industry's state of readiness with regard to mobile devices. In many cases the institutions did not have a separate BYOD policy but rather an addendum to the standard ICT policy, or to the storage of private information policy, to address the issues of mobile devices within the workplace. Organisations that made such provisions were counted as having a BYOD policy even though they did not have an official BYOD policy. From the graph one can see that a large number of the participating companies, 11 out of 27, do not yet have a BYOD policy. Tertiary institutions seem the least prepared, owing to the high number of tertiary institutions without a BYOD policy. Post-survey interviews with tertiary institution staff revealed that they are of the opinion that implementing a rigid BYOD policy is impractical due to the high employee turnover, especially among temporary lecturers and teaching assistants. They also commented that it tends to be university practice to encourage lecturers to provide students with mobile contact numbers to increase lecturer availability and contact time. Hence, lecturers are required to use their mobile devices for work purposes but receive no guidance or assistance with securing or maintaining these devices. Due to the high volume of confidential information sent and received by lecturers via e-mail and other organisational services, this poses a substantial threat to the tertiary education industry. Two of the universities participating in the study did, however, mention that their organisations provide general security awareness training annually. This training provides lecturers with the knowledge to limit the risk mobile devices pose to the organisation.

Figure 3: State of BYOD policies within South African Industries

As was stated at the end of Section 3, adequately incorporating mobile devices within the workplace will require several changes to existing policies and will possibly result in new policies being developed. A major concern raised by the participants in the study is which department should take responsibility for developing a mobile device strategy. Figure 4 illustrates that, within the participant pool, the legal department currently leads the effort in developing mobile device policies. This is especially true in the case of the medical industry. Patient confidentiality is of utmost importance in this industry, and as such these organisations rely heavily on the legal department to govern their mobile policies. One issue this raises is that the legal profession does not necessarily have the technical expertise to properly govern digital policies. In recent years organisations have started incorporating a greater diversity of departments to help govern their mobile policies (Crisp & Terwoeds, 2012), (LeHong & Jones, 2012), (Osterman Research, 2012).

Figure 4: Departments responsible for maintaining mobile device policies

Figure 5 shows the mobile device management style used across the various industries. The control-oriented approach is favoured by both the medical and military institutions. This is not surprising, since the military industry tends to favour a rigid and well-controlled management structure, and the medical institutions have their MDM strategy governed by legal professionals to whom the protection of patient confidentiality is of the utmost concern. While conducting the research for this paper, however, the research team noticed a trend of military institutions moving away from the traditional control-oriented approach and rather opting for the choice-oriented approach coupled with security awareness training (Chabrow, 2012), (Dunnigan, 2010), (Trim et al., 2012). During the post-survey interviews, one of the military correspondents attributed this shift to choice-oriented management largely to the mindset of the younger, generation Y employees. According to him, they have become so immersed in the digital world that they would often break BYOD policies rather than be inconvenienced by them. The correspondent stated that it would be better to train individuals in proper security practices than to force compliance upon them with rigid policies. This corresponds with reports in the literature that generation Y employees tend to disregard BYOD policies for their own convenience (Chickowski, 2012), (Hoffman, 2012), (Trim et al., 2012).

Figure 5: Mobile device management style per industry

In the next section we discuss how one can protect an organisation and its employees from the risks posed by mobile devices.

5. Addressing security concerns in MDM policy
As was shown in the previous section, a large portion of the organisations surveyed (70.3%) did not yet have an operational BYOD policy, or simply do not have one planned at all. This is rather disconcerting given the threat mobile devices pose to an organisation, as discussed in Section 2. In this section we discuss the security concerns that need to be taken into account when developing some of the MDM policies described in Section 3. Firstly, before an MDM policy can be constructed, the organisation needs to determine whether there are pre-existing industry regulations, client limitations or legal constraints that already address some of the governance issues of mobile devices within the organisation. Secondly, one first needs to ascertain what types of mobile devices are being used within the organisation and how frequently they connect to the corporate network.

5.1 Parsing web access log files for mobile device user agent strings During our post survey interviews we learned that, very few of the traditional perimeter security software can distinguish between data originating from a mobile device versus that originating from a traditional corporate PC. Several of our correspondents did mention commercially available tools to monitor mobile devices, but seeing as this is an academic paper the authors thought it best to illustrate the use of simple log parsing, for user agent strings, to gain the same information. A user agent string is a piece of text placed in the Hypertext Transfer Protocol (HTTP) header by user agent software. This text contains information about the user agent device from which a HTTP request was generated. This information is generally used to help web server optimize HTTP request responses based on device capabilities (Fielding, et al., 1999). This information can also be used to identify mobile devices on a network as well as their capabilities. Figure 6 illustrates an example user agent string. From this piece of text

31


Ivan Burke and F. Mouton one can discern that the web page was accessed via a Mozilla browser with an Android device running Android version 4.1.1. The benefit of using user agent strings to determine prevalence of mobile devices on the corporate network, is that it can be used on log archives and it is not dependant on capturing live network traffic. This can also help track the increase of mobile device activity over time on the corporate network. Figure 6 : Example user agent string Using user agents strings one can determine what type of mobile devices accesses one's corporate network. But one of the largest concerns is protecting the devices when not connected or protected by the organisational safe guards. Due to the whole host of communication channels available to mobile devices, the organisation needs to insure data on the device remains safe when not connected to the corporate network.
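A minimal sketch of the log-parsing approach described above, assuming the Apache/nginx combined log format in which the user agent is the last quoted field; the keyword list of mobile platforms is illustrative, not exhaustive.

```python
import re

# Substrings that commonly identify mobile platforms in user agent strings.
MOBILE_HINTS = re.compile(r"Android|iPhone|iPad|BlackBerry|Windows Phone|Symbian", re.I)

def mobile_agents(logfile):
    """Count mobile user agent strings in a combined-format web access log."""
    counts = {}
    with open(logfile) as f:
        for line in f:
            # In combined log format the user agent is the last quoted field.
            fields = re.findall(r'"([^"]*)"', line)
            if fields and MOBILE_HINTS.search(fields[-1]):
                counts[fields[-1]] = counts.get(fields[-1], 0) + 1
    return counts

print(mobile_agents("access.log"))  # log path is a placeholder
```

Run over archived logs, the counts give a rough longitudinal view of mobile device activity on the network without any live traffic capture.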

5.2 Securing data contained on a mobile device
Due to their mobile nature, mobile devices are far more likely to be lost or stolen than traditional corporate equipment. As such, the physical protection of mobile devices is of utmost importance. Passwords act as the first line of defence and usually require the least amount of effort to set up or enforce (Fiberlink Communications Corporation, 2012). The Fiberlink Communications Corporation (2012) recommends a complex, alphanumeric password with special characters which is changed on a regular basis. This is common practice on corporate infrastructures but, as was learned from our surveys, not on the mobile devices that access these infrastructures. The next step in securing the data would be to encrypt all data stored on the device. Apple's iOS provides block-level encryption on all devices from the 3GS onwards. RIM's BlackBerry devices provide a content protection service that encrypts all user data using AES-256 encryption. Android OS, on the other hand, only supports encryption from OS version 3.0 and up. This means that if encryption is enforced in the MDM policy, certain devices may not be allowed on the corporate network. Setting up passwords or encryption on user-owned devices might not cause too much employee discomfort, but certain industry regulations might force organisations to disallow certain device features, such as Bluetooth, Wi-Fi or GSM communications. They may even restrict acceptable application usage, such as cameras, voice recording or tracking software. These restrictions, and the reasons for them, need to be relayed to the workforce.
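A small sketch of how such a complexity recommendation could be checked programmatically; the minimum length and required character classes are assumed values, as the source prescribes none.

```python
import re

def meets_policy(password, min_length=8):
    """Return True if the password satisfies the assumed complexity policy."""
    # Require one lower-case letter, one upper-case letter, one digit and
    # one special character (illustrative interpretation of "complex").
    classes = (r"[a-z]", r"[A-Z]", r"\d", r"[^A-Za-z0-9]")
    return (len(password) >= min_length
            and all(re.search(p, password) for p in classes))

print(meets_policy("Summ3r!23"))  # True
print(meets_policy("password"))   # False: no upper case, digit or special char
```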

6. Conclusion
In this paper the authors discussed the growing threat that mobile devices pose to an organisation. The paper also discussed some of the policies used to mitigate these risks. By use of surveys and post-survey interviews it was ascertained that very few of the organisations surveyed actually had a strategy to manage mobile devices within their organisations. In most cases the policies were constructed within a single department. The surveys also revealed that the MDM management styles of these organisations were largely governed by their industry regulations and did not take security into consideration. The authors recommended some basic steps that can be taken to ensure security is taken into account when constructing MDM policies.

References
Bucki, J. (2004). Definition of Mobile Device. Retrieved October 8, 2012, from http://operationstech.about.com/od/glossary/g/Definition-Of-Mobile-Device.htm
Chabrow, E. (2012). DoD Outlines Mobile Device Strategy. Retrieved October 15, 2012, from http://www.govinfosecurity.com/dod-outlines-mobile-device-strategy-a-4870
Chickowski, E. (2012). Survey: Gen Y Workers Want Mobile Devices; Prep Your BYOD Policies. Retrieved October 18, 2012, from Network Computing: http://www.networkcomputing.com/security/survey-gen-y-workers-want-mobile-devices/240002634
Cooper, M., Dronsuth, R.W., Mikulski, A.J., Lynk Jr., C.N., Mikulski, J.J., Mitchell, J.F., et al. (1973). Patent No. 3,906,166.
Crisp, D., & Terwoeds, L. (2012). Security in an era of BYOD. In ITWeb Security Summit. Pretoria.
DokiSoft. (2011, November 12). The Mobile Networks Evolution – From 1G to 4G to 4G LTE. Retrieved October 5, 2012, from DokiSoft: http://www.dokisoft.com/the-mobile-networks-evolution-from-1g-to-4g-to-4g-lte/
Dunnigan, J. (2010). Smart Phones Go To War. Retrieved October 20, 2012, from Strategy Page: http://www.strategypage.com/htmw/htiw/articles/20100713.aspx
Fiberlink Communications Corporation. (2012, May 21). Mobile Device Management (MDM) Policies. Retrieved October 8, 2012, from Government Information Security: http://docs.govinfosecurity.com/files/whitepapers/pdf/613_BestPracticesforPolicies.pdf
Fielding, R., Gettys, J., Mogul, J., Frystyk, H., Masinter, L., et al. (1999, June). Hypertext Transfer Protocol -- HTTP/1.1. Section 14.43. Retrieved August 3, 2012, from RFC – Request for Comments: http://tools.ietf.org/html/rfc2616#section-14.43
Hoffman, S. (2012). Study: Gen-Y Would Break Rules For BYOD. Retrieved October 18, 2012, from FortiNet: http://blog.fortinet.com/study-gen-y-would-break-rules-for-byod/
Jakhar, A. (2012). Cyber experts show vulnerability of GSM networks. Retrieved September 9, 2012, from http://www.matrixshell.com/products.html
Knight, S. (2011). GSM security vulnerability affects 80 percent of mobile phones worldwide. Retrieved September 14, 2012, from http://www.techspot.com/news/46810-gsm-security-vulnerability-affects-80-percent-of-mobile-phones-worldwide.html
La Polla, M., Martinelli, F., & Sgandurra, D. (2012, December). A Survey on Security for Mobile Devices. IEEE Communications Surveys & Tutorials.
LeHong, H., & Jones, N. (2012). CIOs' Next-Generation Mobile Strategy Checklist. Gartner.
Miller, C. (2012). Exploring the NFC Attack Surface. Retrieved August 30, 2012, from DefCon: https://media.defcon.org/dc-20/presentations/Miller/DEFCON-20-Miller-NFC-Attack-Surface.pdf
National Security Agency. (2008). Bluetooth Security. Retrieved October 20, 2012, from National Security Agency: http://www.nsa.gov/ia/_files/factsheets/I732-016R-07.pdf
NetSurity. (2005). Wi-Fi Networks in Jeopardy. Security Survey, RSA, London.
Osterman Research. (2012). Putting IT Back in Control of BYOD. Retrieved October 8, 2012, from Osterman Research: http://www.govinfosecurity.com/whitepapers/putting-back-in-control-byod-w-622
Pettey, C. (2011, November 8). Gartner Says Consumerization Will Drive At Least Four Mobile Management Styles. Retrieved October 12, 2012, from Gartner Newsroom: http://www.gartner.com/it/page.jsp?id=1842615
Piget, K. (2010). Practical Cellphone Spying. Retrieved September 14, 2012, from http://www.tombom.co.uk/blog/?p=262
Rowe, M., & Hurmann, T. (2004). Bluetooth Security: Issues, Threats and Consequences. Retrieved August 30, 2012, from Pentest: http://www.pentest.co.uk/documents/wbf_slides.pdf
Sager, I. (2012). Before IPhone and Android Came Simon, the First Smartphone (ISSN 2162-657X ed.). Bloomberg L.P.
Swart, D. (2011). Security measures with regards to portable devices fitted with audio, visual and electronic data transfer/storage capabilities practice. Armscor. Pretoria: Armscor Ltd.
Trim, P., Hadfield, R., Garlati, C., Smith, M., Austin, J., & Lee, Y.-i. (2012). Understanding, explaining and counteracting inappropriate user behaviour: Insights and recommendations. Information Assurance Advisory Council, 37—45.
Willis, D.A. (2012). Bring Your Own Device: New Opportunities, New Challenges. Orlando, USA: Gartner.



A Taxonomy of Web Service Attacks
Ka Fai Peter Chan¹, Martin Olivier² and Renier Pelser van Heerden¹
¹Council for Scientific and Industrial Research, Pretoria, South Africa
²University of Pretoria, Pretoria, South Africa
kchan@csir.co.za
molivier@cs.up.ac.za
rvheerden@csir.co.za
Abstract: Web Services (WS) have become a popular application of Service Oriented Architecture (SOA) in many organisations for financial, governmental and military purposes. This is due to WS's ability to integrate seamlessly with other existing services and legacy systems in real time. This level of composition can create a chain of interdependencies between systems to address a complex transaction in real time. Such composition is possible using choreographies, orchestrations, dynamic invocations, and brokers. Messages are based on open standard web technologies, such as the Simple Object Access Protocol (SOAP) and the Extensible Markup Language (XML). As a result, WS can be deployed on any existing internet protocol. Unfortunately, such capability does not come without disadvantages. In addition to being exposed to internet protocol attacks, WSs are exposed to attacks that specifically target WS technologies. In the event of an attack, multiple organisations in the chain can be affected, resulting in services not being available and possible financial loss. In order to build more effective defence systems, one needs to understand the attacks and their effects. A taxonomy provides a way to understand attacks through their classification. However, there is a lack of a standard classification of Web Service attacks. As such, a taxonomy of WS attacks is proposed. This paper begins by discussing possible WS attacks, supported by practical examples. The attacks are then grouped and classified based on three parameters: WS layer, attack methodology and effect. The resulting taxonomy helps to understand WS attacks. Furthermore, the proposed taxonomy provides the flexibility to classify new WS attacks in a SOA environment.
Keywords: web services, service oriented architecture, web service attacks, taxonomy

1. Introduction
Building a defence system in a distributed environment is a complex task. Complexity arises from communication between heterogeneous systems. Since each system follows a vendor-specific implementation, there is a lack of common security terms. This makes mitigating threats in a distributed environment difficult. Such is the case with Web Services. A Web Service (WS), as defined by the World Wide Web Consortium (W3C) (Austin et al 2004), is “a software system identified by a URI, whose public interfaces and bindings are defined and described using XML. Its definition can be discovered by other software systems. These systems may then interact with the WS in a manner prescribed by its definition, using XML-based messages conveyed by Internet Protocols”. Furthermore, WS builds on a layer of open standards and is deployed on the Internet protocol (IP), namely HTTP and TCP/IP (Holler et al 2006). By building upon existing IP, WSs provide an ideal solution for organisations to implement new systems and integrate with existing ones. Not only can WSs interact with existing systems, but also with other services dynamically. Dynamic composition is often employed in a Service Oriented Architecture (SOA) – where services are requested in real time or on an ad hoc basis in order to address a complex transaction. Composition is performed without the need for human intervention. This technique provides high flexibility to extend a system's capabilities. As such, many organisations have adopted WSs for the application of SOA for financial, governmental and military purposes. However, building upon multiple technologies exposes WSs to multiple attacks on different levels. WSs are not only exposed to common IP attacks, but also to attacks that are WS specific. With the addition of dynamic composition, attacks can easily propagate to multiple services and systems. In order to build better defences for WS-enabled systems, an understanding of the attacks in relation to the WS technology layers is needed. This paper proposes a taxonomy of possible WS attacks and classifies them in relation to the WS layer. The attacks covered in the proposed taxonomy range from the composition layer to the data layer, but IP attacks are not covered.

The remainder of this paper is organised as follows: Section 2 provides an overview of existing attack taxonomies. Section 3 discusses the WS technology layers and presents the proposed taxonomy. Section 4 tests the proposed taxonomy, with Section 5 providing the conclusion and future work.

2. Related work
Taxonomy is described as "the study of the common principle of scientific classification" (Ahmad 2012). It can be used to indicate the actual categorisation of objects. As such, an attack taxonomy provides a way to describe attacks consistently through their classification. By offering a consistent description, it allows security teams, for example a Computer Security Incident Response Team (CSIRT), to respond to an attack effectively. This is essential in building better defences and improving existing security (Hansman 2005). With the growing number of web threats, many researchers have worked in this area and proposed a number of taxonomies to address the possible attacks. This section provides an overview of the properties of a taxonomy and of existing works.

2.1 Properties of a taxonomy
Before examining existing taxonomies, it is important to define the properties of a good taxonomy. These properties will form the requirements for building the taxonomy in this paper. Hansman (Hansman 2003) listed the following properties as requirements for a good taxonomy:

Acceptable

Comprehensible

Completeness

Determinism

Mutually Exclusive

Repeatable

Constant and defined terminology

Unambiguous

Useful

However, Hansman also states that not all taxonomies can meet all the requirements. This depends on the scope of the taxonomy. The proposed taxonomy, in Section 3, will be tested against the above requirements.

2.2 Existing attack taxonomies
A number of taxonomies have been developed over the years. Many of them focus on computer systems and network attacks; there has been very little research focusing on WSs. In the earlier works, taxonomies focused on system vulnerabilities rather than attacks. Bisbey and Hollingworth (Bisbey et al 1978), Abbott, Chin et al (Abbott 1976), and Aslam (Aslam et al 1995) focused on categorizing flaws in different classes. Bishop (Bishop et al 1996) performed an analysis of these taxonomies and pointed out that such categorization is not mutually exclusive. However, the concepts of these taxonomies provide the basis for current research. Howard's and Alvarez's taxonomies focused on the attack process rather than vulnerabilities. While Howard's computer and network taxonomy focused on the process of how an attacker aims to gain unauthorized access (Howard 1997), Alvarez's Web attack taxonomy (Alvarez 2003) focused on the life cycle of a web attack. Both these taxonomies view the attack from an attacker's perspective. Lindqvist (Lindqvist 1997) first introduced the notion of classifying attacks in dimensions. The use of dimensions became widely adopted as a way to classify attacks. Hansman (Hansman 2003) proposed a taxonomy with five dimensions: attack, attack target, vulnerabilities, payload or effect, and other.

Up to this point, the existing taxonomies attempted to present a broad scope of attacks, not focusing on any specific domain. Although this provides a solid foundation, the attacks are not mutually exclusive of each other – especially during a blended attack – and this causes ambiguity. More recent works began to focus on specific domains. Lai, Wu, Chen, Wu and Yang (Lai 2008) proposed a taxonomy of web attacks focusing on HTTP methods. HTTP methods, such as CONNECT and GET, each pose their own flaws. Attackers rely on these flaws to employ a specific attack type. Jensen, Gruschka and Herkenhoner (Jensen et al 2009) performed a survey of WS attacks. They further categorized the WS attacks using a list of parameters: category (confidentiality, integrity and availability), level, spreading, size, deviation, dependencies, countermeasures, and amplification. The only drawback is the attack category in their classification. According to Ahmad (Ahmad 2012), classification tied to an exact technology does not help classify dissimilar systems. This goes back to the problem of vendor-specific implementation. Although their paper focused on selected attacks, it provided a good base for this research.

3. Towards a taxonomy of WS attacks
While there have been many works on attack taxonomies, there is little research on WS attacks. Current WS attack categorization is based on the fundamentals of security, namely confidentiality, integrity, and availability. This type of classification causes ambiguity, as most of the attack types fall under the availability category. As a result, additional categories need to be introduced in order to create a more accurate classification. The taxonomy is modelled on the notion of using dimensions. Each dimension focuses on an aspect of the overall attack. The proposed taxonomy categorizes WS attacks into three dimensions: the target, the attack methodology and the effects.

3.1 The target

The first dimension of the taxonomy is the target. It is important to note the specifics of the target in order to respond properly or mitigate the attack. Instead of saying that a WS was attacked, it is more meaningful to specify which component(s) were involved. Two main categories are identified: the WS stack and the network protocol. As mentioned in Section 1, a WS is built using a stack of open standards. Each of these layers has its own unique set of vulnerabilities that attackers can exploit, and it is important to identify at which layer an attack is targeted in order to prevent it from spreading further. The four layers of a WS stack are:

1. Composition: The uppermost layer in the WS stack. Business Process Execution Language (BPEL) is the dominant composition standard. One of the features that makes BPEL so dominant is its ability to use asynchronous messaging, allowing parties to resume communication after a disconnection. Composition of services occurs through BPEL processes, which execute in the BPEL engine. BPEL processes consist of activities in charge of communicating incoming and outgoing service invocations, structured activities for the execution of tasks, and basic tasks for any additional task (Andrews et al 2003).

2. Service Discovery/Interface: WSs are black-box systems – they do not expose their implementations and processes. Users interact with a WS through an interface. Web Service Description Language (WSDL) is the common standard for specifying the interface that a WS uses to interact. A Universal Description, Discovery and Integration (UDDI) registry is required for discovering WSs; the UDDI entry points to the WSDL document. The WSDL document describes the types of variables the service can receive and send, as well as the address of the service.

3. WS‐*: A family of standards addressing security concerns. WS‐Security is the common standard in the family; it defines digital signatures and tokens, and provides message encryption.

4. Messaging: This layer can be broken down further into the protocol and the data format:




Protocol: Simple Object Access Protocol (SOAP) is the protocol responsible for sending and receiving information. A SOAP message consists of an envelope, a header and a body, and is formatted as XML (a minimal request example is sketched after the data format item below).

Data format: XML forms the fundamental part of a WS. It defines the structure of all the information used in a WS. XML provides a way to structure information so that it is both human and computer readable.
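The following minimal sketch, assuming a hypothetical stock-quote service at example.org (the endpoint, operation name and namespace are illustrative placeholders, not drawn from the standards above), shows the envelope/header/body structure of a SOAP 1.1 request posted over HTTP using only the Python standard library:

import http.client

# A minimal SOAP 1.1 envelope: the header is optional (WS-Security
# tokens would go there); the body carries the service invocation.
SOAP_ENVELOPE = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header/>
  <soap:Body>
    <GetQuote xmlns="http://example.org/stock">
      <Symbol>ABC</Symbol>
    </GetQuote>
  </soap:Body>
</soap:Envelope>"""

conn = http.client.HTTPConnection("example.org")
conn.request("POST", "/stockservice", body=SOAP_ENVELOPE,
             headers={"Content-Type": "text/xml; charset=utf-8",
                      "SOAPAction": "http://example.org/stock/GetQuote"})
response = conn.getresponse()
print(response.status, response.reason)

Because the entire invocation travels as XML text over an ordinary HTTP POST, every layer that touches the message (parser, signature verifier, BPEL engine) is a potential target, which is why the taxonomy records the component as well as the layer.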

Figure 1 illustrates the WS communication stack. The arrows indicate the direction of information flow from service requester to service provider. The communication shown is between two services; it is worth noting that there may be an arbitrary number of WSs between the two end points of a transaction. A client can also invoke a transaction, in which case the client discovers the WS through its interface and communicates with it via SOAP messages.

Figure 1: Web service communication stack

Table 1 shows the categories of the first dimension. Note that the Network Protocol category highlights only the protocols used by WSs. The WS stack contains two sub‐categories: layer and component. The layers are described above, while the components are the actual implementations that house the vulnerabilities that attackers exploit. The "..." denotes that additional components should be added to facilitate the identification of targets. A component should be layer dependent, but not standards dependent; for example, message parsers should not fall under the "interface". The specifics of the component can be used to identify known vulnerabilities in the Common Vulnerabilities and Exposures (CVE) database (MITRE Corporation 2012).

Table 1: First dimension's categories

Main Category       Layer (sub category)   Component
WS stack            Composition            BPEL Engine, ...
                    Interface              WSDL, ...
                    WS‐*                   Encryption/Decryption Engine, ...
                    Message                XML Parser, ...
Network Protocol    Transport              TCP
                    Network                IP

3.2 The attacks

The second dimension covers the WS attacks themselves. A WS stack is made up of a number of standards and, unsurprisingly, there have been a number of attacks targeting each standard. Classifying the attacks by standard allows users to focus on the attacks separately; this is not to say that each attack only occurs on its own layer. Table 2 provides a survey of WS attacks based on standards.



Table 2: Survey of WS attacks

Main Categories        Attack Type                Sub categories (implementation)
BPEL attacks           BPEL Flooding              BPEL Instantiation flooding; BPEL Indirect flooding
                       BPEL State Deviation       BPEL Correlation Invalidation; BPEL State Invalidation
                       Address Spoofing
                       BPEL Rollback
WSDL attacks           WSDL Disclosure            WSDL Scanning
                       WSDL Spoofing              WSDL Parameter Tampering; WSDL Policy spoofing
WS‐Security attacks    Reference Redirection      Signature Redirection; Encryption Redirection
                       Renegotiation Attack       Chained Cryptographic Key; Nested Encryption Blocks
                       Signature DOS              Signature Transformation; Deny Signature Retrieval
                       Oversized Encryption
SOAP attacks           Oversized Document         Oversized SOAP Header; Oversized SOAP Body; Oversized SOAP Envelope
                       SOAP Parameter tampering
                       SOAPAction Spoofing
XML attacks            XML Injection              XML Long Names; XML Namespace Prefixing; XML Oversized Attribute Content; CDATA
                       XML flooding               Distributed XML flooding; Single XML flooding
                       Recursive payload          Recursive Empty tags
                       Oversized XML Document
Other                  Service Misuse

More information concerning the attacks can be found in (Stamos and Stender 2005) and (Jensen et al 2009). Many of these attacks can be carried out with minimal effort. To provide a practical example, consider the recursive payload attack. This attack involves inserting repeated unclosed tags into an XML document in order to use up parser resources:

<xml>
  <x>
    <x>
      <x>
      ....
</xml>

A similar attack is the oversized payload attack. Instead of using a large number of unclosed tags, it pollutes the document with legitimate tags that contain random values – for example, a large series of <data>xy</data> elements.
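As an illustrative sketch (our own, not from the surveyed works), both payload shapes can be generated in a few lines of Python, for example by a tester exercising a service in a controlled lab; the sizes used are arbitrary:

def recursive_payload(depth):
    # Deeply nested, deliberately unclosed tags: the parser must keep
    # state for every open element, exhausting memory or CPU.
    return "<xml>" + "<x>" * depth

def oversized_payload(count):
    # Well-formed but meaningless elements that inflate document size.
    return "<xml>" + "<data>xy</data>" * count + "</xml>"

print(len(recursive_payload(100000)))   # ~300 KB of open tags
print(len(oversized_payload(100000)))   # ~1.5 MB of legitimate-looking tags

The asymmetry is what makes these attacks cheap: a few lines of generation code on the attacker's side can force disproportionate parsing work on the targeted WS.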



An advantage of viewing the attacks in separate layers is that the defender can allocate resources to the most critical layer. However, this becomes obsolete during a blended attack – an attack that employs multiple exploit vectors, for example one that embeds SOAP parameters with a recursive payload. Another limit of this form of classification is the lack of correlation to an attack process: considering a typical attack process (identify target -> gather information -> deploy attack -> exit), any WSDL attack will form only a part of the process, not the entire attack. This also affects the usability of the taxonomy if the standards were to change. As such, Table 3 proposes a more general way to categorise attacks that can be expanded later on.

Table 3: Second dimension's categories

Main Categories              Sub categories
Probing Attacks              Parameter Tampering; Scanning
Coercive Parsing             Flooding attacks; Injection attacks; Oversize payload
External Reference Attacks   Flooding; Spoofing; Reference Redirection; State Deviation
Other                        Service misuse

Probing attacks deal with service discovery and identifying the weaknesses of a target; characteristic of this category are scanning and basic parameter tampering. Coercive Parsing characterises attacks that aim to extract or inject data, or to cause resource-related impacts; coercive parsing often leads to exhausting the resources of the targeted WS. External Reference Attacks exploit the WS processing and communication capabilities; this involves redirecting communications to a malicious third party (recall the number of arbitrary services between consumer and provider). Other refers to attacks that are not related to vulnerabilities in the WS stack, but rather to violations of a WS's terms of use. This way of classifying focuses on the characteristics of the attack rather than on the technology involved, while the sub categories in Table 3 allow technology-specific attacks to be categorised.

3.3 Effects

The third dimension focuses on the effect(s) of an attack. The biggest impact of a WS attack is to disrupt the availability of the service, also known as Denial of Service (DoS). The impact expands upon the triad of Confidentiality, Integrity and Availability (CIA). Managing service availability is the highest priority. For example, imagine a customer arriving at a car wash who is forced to wait behind hundreds of cars; the customer would go somewhere else to have the car washed. The same principle applies to WSs: if the service has low availability, the requester will simply invoke a different service with similar functions. At the extreme, if the service provider can no longer provide the specified service due to maintenance or downtime, the provider will lose revenue. Other categories of attack effect include authentication, authorization, confidentiality, integrity and propagation. Authentication deals with the ability to tell whether someone really is who they say they are. Attacks that affect authentication often lead to the disclosure of sensitive information and, in a WS context, to the exposure of a function.



Authorization deals with the ability to tell whether a party has the authority to access certain data. State deviation can lead to an attacker gaining the authority to access backend functions such as databases and APIs. Confidentiality deals with the ability to prevent information disclosure; successful scanning affects confidentiality. Integrity is the ability to maintain the correctness of information, which is important during transactions between multiple clients. Injection affects integrity when new or malicious information is inserted into XML documents; the impact of this escalates if propagated. Propagation is the ability to control what is communicated. It concerns the spreading of malicious code or the inability to prevent incorrect information from being sent. With dynamic composition of services, propagation of malicious data could cause a chain of services to become unavailable.

This section presented the taxonomy using three dimensions, namely target, attack methodology and effect. The first two dimensions have sub categories to specify attacks. The next section evaluates the proposed taxonomy and provides an attack scenario.

4. Evaluating the proposed taxonomy

This section briefly evaluates the proposed taxonomy. The evaluation criteria are the requirements specified in Section 2. Section 4.2 illustrates the use of the proposed taxonomy to formulate an attack scenario, and the section ends with future work for this project.

4.1 Meeting the requirements

The proposed taxonomy in Section 3 is evaluated below against the requirements specified in Section 2.

Acceptable: In order for the proposed taxonomy to be acceptable, it has to be generally approved. The proposed taxonomy is built on the notion of using dimensions, which has been used in previous works.

Comprehensible: In order for a taxonomy to be comprehensible, it has to be understood by a general audience. The proposed taxonomy is designed to promote understanding: the first dimension (targets) begins with the WS layers and how the communication scheme works. However, this assumes that the audience is familiar with the standards used, and different versions of the standards also make understanding difficult. This work is therefore better suited to audiences who have some general background in WSs.

Completeness: For a taxonomy to be complete, it has to cater for all possible attacks. This requirement is hard to fulfil in the field of WSs. Many standards are already on their second version, such as WSDL 2.0, and XML is already on its fifth edition. Each new revision mitigates some vulnerabilities but may also introduce new ones. However, the proposed taxonomy takes the change of attacks into account: the first dimension specifies the WS layers rather than specific standards and versions, which allows the flexibility to extend the taxonomy to meet new standards.

Determinism: The procedure of classification must be clearly defined. In the proposed taxonomy, the procedure of classification is based on identifying the layer under attack.

Mutually Exclusive: Each attack should appear in at most one category. Using the WS layers to classify the attacks ensures that each attack is mutually exclusive.

Repeatable: In most cases, classifications are repeatable. It is very seldom that a classification is misunderstood; in such a case, the classification of the attack would be wrong.

Constant and defined terminology: The attacks in the proposed taxonomy are based on existing works by Vorobiev and Han (2006) and Jensen et al. (2009).

Unambiguous: For a taxonomy to be unambiguous, it needs to show that each attack belongs to a unique class. The proposed taxonomy clearly defines the attacks according to three classes, and to avoid ambiguity the effects or impacts of the attacks have been separated out.

Usefulness: For a taxonomy to be useful, it needs to be usable in some way. The intended purpose of this taxonomy is to create an understanding of the attacks so that better defence decisions can be made. This is yet to be proven, but the taxonomy is intended to be incorporated into other work by the authors. The following section demonstrates the ability to create an attack tree based on the proposed taxonomy.




4.2 Using the taxonomy

This section demonstrates the use of the proposed taxonomy. Similar to Lai's work (Lai et al 2008), Figure 2 demonstrates the classification of an attack process in a linear way.

Figure 2: Linear view of taxonomy

The linear view illustrates an attack process viewed from a response team's perspective. It begins by identifying the attacker as either a consumer or a malicious WS. The following step identifies the system composition of the target; each layer in the composition contains components that may have vulnerabilities. Next, the team needs to identify the attack that exploited the vulnerability; to cater for blended attacks, the attack method classifies attacks with similar characteristics. Lastly, the impact or effect of the attack can be identified. An alternative view would read from right to left, starting with what type of service was affected and ending with the origin of the attack. This section presented an example use of the proposed taxonomy to classify an attack on multiple levels. Viewed in this linear way, the taxonomy can be used for decision making and for systematically classifying an attack at multiple levels.
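As a minimal sketch of how such a linear classification could be recorded in practice (the record structure and field names are our own illustration; the category values follow Tables 1 and 3), an incident might be captured as follows:

from dataclasses import dataclass, field
from typing import List

@dataclass
class WSAttackRecord:
    origin: str            # consumer or malicious WS
    target_layer: str      # first dimension: WS stack layer
    target_component: str  # first dimension: component
    attack_category: str   # second dimension: main category
    attack_type: str       # second dimension: sub category
    effects: List[str] = field(default_factory=list)  # third dimension

# Example: a WSDL scanning incident, classified left to right
incident = WSAttackRecord(
    origin="consumer",
    target_layer="Interface",
    target_component="WSDL",
    attack_category="Probing Attacks",
    attack_type="Scanning",
    effects=["confidentiality"],
)
print(incident)

Reading the record left to right mirrors the linear view in Figure 2; reading it right to left gives the alternative, effect-first view described above.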

4.3 Future work

The proposed taxonomy is not the “end point” for classifying WS attacks. Rather, it is a starting point for developers and users to understand the attacks associated with each WS layer. This work aims to integrate with recent work by one of the authors, van Heerden (van Heerden et al 2012). Based on the proposed taxonomy, a toolkit may also be implemented for penetration testing purposes.

5. Conclusion

WSs allow organisations to easily build and extend their service capabilities. However, this flexibility also leaves them vulnerable to a number of attacks. With more organisations moving towards WS‐enabled systems, it is vital to classify and understand the possible attacks. This paper proposed a taxonomy of possible WS attacks based on three dimensions, namely the targets, the attack methodology and the attack effects. The focus of the taxonomy is to classify attacks in relation to the Web Service stack of technologies; identifying at which level the attacks occur can aid the allocation of resources to their defence. Furthermore, the proposed taxonomy was evaluated against a set of requirements. Although the requirement for completeness was not met, the taxonomy has the flexibility to be extended. With each standard being exposed to various attacks, it is important for WS developers and users to be aware of the attacks. Rather than adding to the list of possible attacks, mitigation techniques should be improved with each new version of the standards.

References

Abbott, R. P., Chin, J. S., Donnelley, J. E., Konigsford, W. L., Tokubo, S. and Webb, D. A., (1976). Security Analysis and Enhancements of Computer Operating Systems. Technical Report NBSIR 76-1041, Institute for Computer Sciences and Technology, National Bureau of Standards.
Ahmad, W., Hayat, Z., Zafar, B., Khan, F., Din, F. ud, and Shah, I., (2012). A Survey on Taxonomies of Attacks and Vulnerabilities in Computer Systems. ijcst.org, 3(5). Retrieved from http://www.ijcst.org/Volume3/Issue5/p16_3_5.pdf
Alvarez, G., Petrovic, S., (2003). Encoding a taxonomy of web attacks with different-length vectors. Computers and Security, vol. 22, pp. 435–449.



Andrews, T., Curbera, F., Dholakia, H., Goland, Y., Klein, J., Leymann, F., Liu, K., Roller, D., Smith, D., Thatte, S., Trickovic, I., Weerawarana, S., (2003). Business Process Execution Language for Web Services Version 1.1. OASIS Standard.
Aslam, T., (1995). A Taxonomy of Security Faults in the Unix Operating System. Master's thesis, Purdue University.
Austin, D., Barbir, A., Ferris, C., Garg, S., (2004). Web Services Architecture Requirements. W3C Working Group Note 11 February 2004. Available at: http://www.w3.org/TR/2004/NOTE-wsa-reqs-20040211 [Accessed 16 April 2012]
Bisbey II, R., Hollingworth, D., (1978). Protection Analysis: Final Report. Technical report, University of Southern California.
Bishop, M. and Bailey, D., (1996). A critical analysis of vulnerability taxonomies. Retrieved from http://oai.dtic.mil/oai/oai?verb=getRecord&metadataPrefix=html&identifier=ADA453251
Faust, S., (2003). SOAP Web Services Attacks: Are your web applications vulnerable? SPI Dynamics.
Hansman, S., Hunt, R., (2005). A taxonomy of network and computer attacks. Computers & Security, Amsterdam: Elsevier, vol. 24, iss. 1, pp. 31-43.
Hansman, S., (2003). A Taxonomy of Network and Computer Attacks. Diplom thesis, University of Canterbury, New Zealand.
Hollar, R., Murphy, R., (2006). Enterprise Web Services Security. USA, Charles River Media, Inc.
Howard, J. D., (1997). An Analysis of Security Incidents on the Internet 1989-1995. PhD thesis, Carnegie Mellon University.
Jensen, M., Gruschka, N. and Herkenhöner, R., (2009). A survey of attacks on web services. Computer Science - Research and Development, 24(4), pp. 185-197.
Lai, J., Wu, J., Chen, S., Wu, C., (2008). Designing a taxonomy of web attacks. International Conference on Convergence and Hybrid Information Technology, Daejeon, Korea, 28-29 August.
Lindqvist, U., Jonsson, E., (1997). How to Systematically Classify Computer Security Intrusions. IEEE Security and Privacy, Washington: IEEE Computer Society, pp. 154-163.
Stamos, A. and Stender, S., (2005). "Attacking Web Services: The Next Generation of Vulnerable Enterprise Apps", BlackHat 2005, USA.
The MITRE Corporation, (2012). Common Vulnerabilities and Exposures. [Online] Available at: http://www.cve.mitre.org/cve/index.html [Accessed 08 November 2012].
van Heerden, R., Burke, I., Irwin, B., (2012). Classifying Network Attack Scenarios Using an Ontology. 7th International Conference on Information Warfare, Seattle, 22-23 March, USA.
Vorobiev, A. and Han, J., (2006). Security Attack Ontology for Web Services. 2006 Second International Conference on Semantics, Knowledge, and Grid (pp. 42-42), 30-02 November, China.



DUQU’S DILEMMA: The Ambiguity Assertion and the Futility of Sanitized Cyber War

Matthew Crosston
Bellevue University, USA
matt.crosston@bellevue.edu

Abstract: There is an intense debate about the applicability of international law to cyber war and the need for a cyber‐specific international treaty. The problem, however, might be that this debate is irrelevant. Both camps misread how the structure of the cyber domain likely precludes strategically ‘piggy‐backing’ on conventional norms of war. There is a civilian/military ambiguity in the cyber domain that makes target differentiation unlikely if not impossible. Thus, Duqu’s dilemma: with the focus on establishing legitimate targets and setting limitations on allowable action, the United States and its allies are engaged in a futile endeavor that cannot lead to improved cyber governance and likely only exposes them to vulnerabilities. Greater effort should be spent on accepting this structural ambiguity by developing strategy that aims to instill preemptive fear and produce reluctance to action.

Keywords: cyber war, cyber deterrence, cyber theory, law of armed conflict, attribution

1. Introduction

The debate over the applicability or non‐applicability of international law to cyber war and the need for a cyber‐specific international treaty might be irrelevant. Both camps, pro and con, argue about the need for cyber war to have the Law of Armed Conflict or some new international legal project properly cover the cyber domain. Both camps, however, misread how the structure of the cyber domain precludes strategically ‘piggy‐backing’ on conventional norms of war. International laws on conventional war are effective because of the ability to differentiate between civilian and military sectors. There is a civilian/military ambiguity in the cyber domain that makes such differentiation unlikely if not impossible well into the future. Thus Duqu’s Dilemma: with the focus on establishing legitimate targets and setting limitations on allowable action, the United States and its allies expose themselves to vulnerabilities while engaging in a futile endeavor that does not lead to improved cyber control.

Just like the Duqu virus that dominated global discussion in 2011, cyber‐attacks and initiatives can serve information gathering or physical attack potentiality; they can originate from a government effort but be executed through major commercial assets; they can be aimed at political/military objectives yet facilitated by piggy‐backing on civilian systems. The effort to establish cyber rules akin to conventional norms is therefore fruitless, as these rules are neither enforceable nor logical in truly dealing with this military/civilian ambiguity. Current efforts simply handcuff lawful states. This means greater effort should be spent on creating preemptive strategy that accepts the military/civilian ambiguity problem. The tendency of scholars and policymakers to strive for ‘sanitized’ cyber war by constraining damage through explicit target classification means cyber strategy remains devoid of true deterring power. In short, cyber defense specialists are exacerbating the dilemma.

Whether one believes the Law of Armed Conflict can or cannot apply to the cyber domain, whether one pushes for an international cyber treaty or thinks such treaties will be meaningless, one aspect is constant: the desire for rules governing cyber war behavior. The problem lies in attempting to create a code of cyber conduct that demands a distinct separation between civilian and military sectors. The cyber domain is not amenable to this separation; the aforementioned fusion, where participants, facilities and targets are hopelessly entangled between civilian and military institutions, has basically been a missing explanation as to why the global effort to enhance and clarify norms has remained uneven and inadequate.

2. The ineffectiveness of international law

As the East‐West Institute said in a 2011 report, “There is an urgent need for international cooperation on this most strategic of issues. If we fail on this task, global stability could be as threatened as it would be by a nuclear exchange.” (Leithauser 2011) International norms established with the Geneva and Hague conventions were meant to be explicit lines of protection for civilian populations when states engaged in war. That respect and preservation of civilian life is now held to be sacrosanct, regardless of what form or delivery method war may take. As such, there is an expectation that cyberspace can be brought under the discipline of conventional rules of war.



Others argue that establishing these customary understandings in the cyber domain is one of the most important geopolitical battles today, going so far as to say it is Ground Zero for global diplomacy, national security work, and intelligence (Gjelten 2010). The goal is to bring the principles of arms control into the cyber domain. Indeed, the most optimistic want voluntary agreements that create constraints on the development of cyber capabilities and ostensibly ameliorate behavior in cyberspace. Some, however, have acknowledged that there are potential dangers in trying to reach this achievement. Stewart Baker, a former general counsel at the National Security Agency and assistant secretary for policy at the Department of Homeland Security under President George W. Bush, declared the obvious fear: the United States and its allies would obey whatever was written down and agreed to while no adversaries would (Gjelten 2010).

There may be a larger problem, however, than non‐compliance: conventional war has the distinct historical advantage of being fairly explicit about target classification. Most military networks that would initiate and enact a cyber‐attack depend upon and work within countless civilian networks. In addition, many of the actors that are part of the planning, initiation, and deployment of cyber‐attacks are not necessarily formal military but civilian employees of government agencies. In other words, the world of cyber conflict and cyber war is not a world that can achieve such explicit classification. In fact, future trends only show this fusion growing deeper and tighter over time. As such, any attempt to introduce norms and rules predicated upon knowledgeable differentiation will likely end up confused and ineffective.

This ‘ambiguity assertion,’ for lack of a better term, has so far been relatively ignored in the various cyber debates. They tend to revolve around how loose or rigid, how informal or formal, how international or local such codes of constraint should be. Many of these proposed codes aim to constrain cyber behavior so as to protect banking, power, and other critical infrastructure networks ‘except when nations are engaged in war’ (Sternstein 2011). Without addressing the ambiguity problem, however, states find themselves facing a quandary: where are the lines of distinction between civilian and military drawn? Perhaps the biggest dilemma, therefore, is not the problem of figuring out attribution (who was the true trigger man?) but rather this futile attempt to clear up the inherent and purposeful ambiguity that characterizes the critical infrastructure used to house, develop, and utilize a state’s cyber capabilities.

Many of the current cyber discussions are flawed by the manner in which they implicitly want to analogize conventional conflict with cyber conflict, to make cyber‐attacks equivalent to armed attacks. To do this, however, the conversation must turn to legal definitions and parameters: when does cyber conflict constitute the use of armed force or a formal act of war? What actions would constitute a war crime? How much damage triggers a necessary retaliatory response? (Liaropoulos 2010) These questions are much more difficult to answer in the cyber realm because of the logistical nightmare created by the ambiguity assertion. To date this fact has not been appropriately emphasized and is not strategically addressed at all.
Up to now, questions have focused more on comparable lethality, damage estimates, and the aforementioned attribution problem. To a certain extent, however, all of these legitimate problems are enveloped by the civilian/military ambiguity issue. The inability to establish that separation means lethality could be greater, involving more than just military casualties; damage could be more devastating, affecting more than just military facilities; and attribution might not even be relevant: defining the WHO of an attack does not resolve the problem if the HOW behind the WHO is inextricably fused between government, military, and civilian properties and people. In other words, many assume that figuring out the WHO in cyber war will solve most problems. The ambiguity assertion reminds everyone to be careful what is wished for: in cyber war the WHO will never be conveniently distinct because of the HOW.

International law clearly does not alleviate the problem of civilian/military ambiguity in cyber conflict. Whether the discussion extends to codes of conduct, treaties, or international laws writ large, none of these potential documents attempts to address the inherent structural problem of modern societies and how they currently organize, conduct, and develop their cyber capabilities. Further confirming this is the equal amount of time, effort, and frustration spent on the sister projects of establishing terms and defining parameters. Examining that frustration will illustrate how impactful the ambiguity assertion is when contemplating how the world should deal with the rules for cyber war.




3. The frustration of setting terms

Part of the problem in getting international law to efficiently cover the realm of cyberspace involves a long‐standing failure to translate essential terms and parameters into something that would truly impact the cyber domain. Progress in moving beyond this problem has been extremely limited. Indeed, even a cursory glance across the literature of the past decade brings testimony to the fact that cyber war does not fit perfectly into the legal frameworks that already exist on war and the use of force (Anatolin‐Jenkins 2005). Despite this reality, these terminological and doctrinal difficulties have been continually investigated with the aim of forcefully coordinating existing terms and doctrines into the cyber arena. This article argues that the lack of success is attributable to an unwillingness to engage the civilian/military fusion.

The desire for explicit terms, parameters, definitions, laws, and treaties is based on the worry that failure to produce such explicitness will leave cyber war outside the boundaries of the rules that currently govern conventional war. The consequences are considered stark: critical civilian infrastructure could be targeted, as well as basic necessities such as agriculture, food, water, public health, emergency services, telecommunications, energy, banking and finance, etc. The ambiguity assertion, however, articulates the difficulty in establishing such explicitness: most if not all of a state’s cyber capability utilizes and depends upon critical civilian infrastructure that also provides many important civilian functions. No state to date has created a cyber operations capability that is wholly distinct and separate from civilian networks and civilian infrastructure. In other words, go after the ‘military’ targets and you will also de facto be going after ‘civilian’ targets. The literature to date seems to bypass this fact.

Consequently, much of the literature engages in a false riddle, trying to force a theoretically precise answer onto an empirically ambiguous reality. This is further confirmed by the number of respected scholars, diplomats, and policymakers that miss the relevance of the ambiguity assertion by demanding that the laws of cyber war should actually forbid the targeting of purely civilian infrastructure, indicating that cyber actors should try to respect the Geneva Conventions as much as conventional actors do (Tennant 2009). The problem, of course, is that in cyber war purely civilian infrastructure is a category of diminishing returns. Indeed, given the obvious trend of intensification and deepening of the civilian/military fusion, purely civilian infrastructure will end up more myth than reality.

The failure to address this structural riddle has been matched by an over‐emphasis on agency. This manifests itself namely in the focus on limiting and controlling potential cyber actions from adversarial states. James Lewis of CSIS emphasizes how a state can reduce risks for everyone by imposing common standards, like moving from the Wild West to the rule of law (Fallows 2010). Eugene Spafford concurs, citing how cyber security is a process, not a patch, requiring continual investment for the long term as well as the quick fix, without which states will always be applying solutions to problems too late (Fallows 2010). These are some of the brightest and most respected names in the cyber discipline.
Their warnings are not irrelevant, but the emphasis on state actor agency, while failing to recognize the impact and importance of inherent cyber structure, leaves a vulnerability gap in cyber strategic thinking. Indeed, the contemporary failure to create explicit norm coordination should be seen as a demand to consider new strategy that accepts this structural incompatibility as inherent and not something to ‘overcome.’ For structural ambiguity is not only intrinsic: states are purposely deepening the ambiguity for its strategic advantage and economic efficiency. States, therefore, should not focus on how to force a distinct civilian/military separation but should rather develop new strategic thinking that accepts the ambiguity problem as a logistical reality that must be accounted for.

For empirical confirmation of the futility of trying to address these problems with conventional norms and explicit parameters, look no further than the United States military over the past half‐dozen years. It is easy to produce a laundry list of frustration and unfulfilled hopes: Gen. Alexander of US Cyber Command mentioned that progress was being made but that the risks were at present still growing faster than the progress (Curran 2012); Vice Adm. Michael Rogers, commander of the US Navy's fleet cyber command, admitted to Congress that no agreement had been reached amongst the various commands on ironing out the rules of cyber conflict but hoped there would be positive developments ‘at some point in the near term’ (Baldor 2012); and even the Pentagon produced a cyber document that ultimately said that the laws of armed conflict do apply in cyberspace as in traditional warfare, while admitting that the basic terms ‘act of war’ and ‘use of force’ were still somewhat ill‐defined in the cyber domain (Gorman and Barnes 2011). This shows the real‐term effects that the lack of new strategic thinking has when states do not address the ambiguity of civilian/military fusion.




4. Turf wars and tightropes: Military discussion on cyber parameters

Just as with scholars, policymakers, and diplomats, the military has been steadfastly committed to establishing strict rules of cyber engagement akin to the conventional rules of war (Anonymous 2008). For several years there has been a pending revision of the military's standing rules of engagement in the cyber realm (Nakashima 2012). It seems that while the military hoped the scholarly and diplomatic communities would be able to help define much of the needed clarification, the two latter communities were themselves hoping to see the military lead the way with its revision. This responsibility obfuscation, however, is not as relevant as many observers and analysts might think: the failure to address these issues is not so much a case of one community trying to pass the buck to another but rather testimony to the confusion created when the ambiguity assertion about civilian/military fusion is not addressed.

Gen. Alexander stated that in debating the rules of conflict in cyber operations the United States was trying to do the job right (Nakashima 2012). Those debates, however, constantly oscillate back and forth between positions that do not address the primary innate structural concerns of the cyber domain. Consequently, the military has spent half a dozen years promising imminent progress that does not materialize. The Pentagon's official report was itself described as ‘ducking’ a series of important basic fundamental questions, including defining such basic terms as ‘war,’ ‘force,’ and ‘appropriate response’ (Nakashima 2011).

This is pointed out not to poke fun at the military. Quite the contrary: this article argues that, given the reluctance of all parties concerned to engage the ambiguity assertion with an eye to developing new strategy that embraces it rather than hopelessly using old strategy to overcome it, the military has had no real chance of making substantive progress in its effort to concisely define the parameters of cyber action. It is no coincidence that the American military has sincerely worked on issues such as administrative network control, cyber organization, force composition, and cyber intelligence/operation differentiation, in addition to basic terminology parameters, without any major questions being considered definitively and comprehensively closed (Andrues 2010). How, for example, can USCYBERCOM be expected to connect all the dots and be the competent arbiter in determining a case for action when it readily admits difficulty in even articulating who exactly makes up the fraternity of cyber warriors operating and defending home networks? (Andrues 2010) If the issues at hand were not so serious and far‐reaching for the future of cyber conflict, it would be almost comical.

Only recently has it seemed possible that relevant military bodies have started to reach the epiphany discussed here: “Although there are some noteworthy first steps toward establishing an international set of cyber norms ‐ evident in bodies such as the Convention on Cybercrime ‐ any global framework governing military response actions in cyberspace will surely materialize at an onerous pace.
After all, how can the rules of war, built upon the tactile presence of combatants and weapons and sovereign territory, be retooled for a world where ‘troops’ can be dispatched in milliseconds from a multitude of states?” (Andrues 2010)

At least the above quote begins to frame the discussion around the innate incompatibility between how war in cyberspace would likely be conducted and all wars previous. It still, however, emphasizes agency over structure: establishing an international set of cyber norms mainly to hallmark the division between civilian and military assets and to mitigate action already undertaken. This might help explain why formal strategic documents about cyberspace end up being nothing but simple platitudes about how the United States intends to protect itself. Take for example the Department of Defense's Strategy for Operating in Cyberspace, released in mid‐2011 and comprised of five ‘strategic initiatives’:

Strategic Initiative 1: treat cyberspace as an operational domain to organize, train, and equip so that the DoD can take full advantage of cyberspace’s potential.

Strategic Initiative 2: employ new defense operating concepts to protect domestic networks and systems.

Strategic Initiative 3: partner with other US government departments and agencies and the private sector to enable a whole‐of‐government cyber security strategy.

Strategic Initiative 4: build robust relationships with US allies and international partners to strengthen collective cyber security.

Strategic Initiative 5: leverage the nation’s ingenuity through an exceptional cyber workforce and rapid technological innovation.



Take full advantage; employ new concepts; partner with others; build robust relationships; leverage ingenuity. All of these phrases are wonderful slogans, but they are not accompanied by any explicit new strategic thinking that could have the hope of actually establishing said initiatives. Trying to adapt conventional strategy slightly and then force the cyber domain into it is likely to remain a project bearing little to no fruit. Examining that conventional strategy and proposing new strategy that engages the structural dilemma is the subject of the final section of this paper.

5. Engaging ambiguity: Strategic thinking for the civilian/military cyber fusion

The need for a new strategic approach is best illustrated when the arguments of two highly respected strategic thinkers, one military and one legal, who happen to fall on opposite sides of the law of armed conflict (LOAC) cyber debate, both ignore the problem of civilian/military structural cyber fusion. Dunlap, while accepting the need for improvement, believes the tenets of the law of armed conflict are sufficient to address the most important issues of cyber war (Dunlap 2011). The concern for distinguishing between legitimate military and civilian targets does not seem to bother Dunlap in its impact on the applicability of LOAC: “LOAC tolerates ‘incidental losses’ of civilians and civilian objects so long as they are ‘not excessive in relation to the concrete and direct military advantage anticipated.’ In determining the incidental losses, cyber strategists are required to consider those that may be reasonably foreseeable to be directly caused by the attack. Assessing second‐ and third‐order ‘reverberating’ effects may be a wise policy consideration, but it does not appear LOAC currently requires such further analysis.” (Dunlap 2011)

This distinction made by Dunlap is actually quite important given the current intellectual climate: he has introduced some much‐needed realism into the debates by reminding people that LOAC has never been a flawless strategy that perfectly protects civilians and civilian objects. The problem being highlighted here, however, is that his concerns over military/civilian differentiation are misplaced. These pro‐LOAC arguments are effectively built around the fact that cyber war does not have to have a perfect record in delineating and then protecting civilians because LOAC does not either. But these arguments take as given that such delineation is generally possible. The future of cyber war is unlikely to allow such a possibility, because it has long been established how many of the military's critical functions, assets, service providers, and supply chains rely heavily on civilian traffic and networks (Mudrinich 2012). As such, new strategy needs to be positioned so as to prevent the use of cyber weapons in general, because once they are used, civilian risk, damage, and casualties will be incurred de facto. ‘Sanitizing’ the impact of cyber weapons once they are used, by trying to constrain targeting choices, will not work.

The anti‐LOAC camp makes the same mistake when discussing why the law of armed conflict does not bring clarity to cyber war: “The laws of war are in place to ensure that parties to a conflict target combatants rather than civilians, and, if civilians are targeted, to ensure that such individuals have forfeited their protected status. To determine whether cyber‐attacks properly distinguish between civilian and military targets, one must understand [the] distinction.” (Gervais 2012) The opposition camp fails in the belief that such a distinction can in fact be created in cyber. This camp does not see the strategic influence of the ambiguity assertion, focusing rather on the deficiencies within the law of armed conflict and other contemporary norms and treaties: in short, make better laws and the cyber world will come to heel. As such, this camp is even further from reality, ignoring a problem that is only going to deepen and intensify over time.
The opposition camp, in essence, takes a more liberal approach to conflict, because the end goal is to create an atmosphere of trust that can minimize higher levels of violence and treachery (Gervais 2012). This flies even more in the face of the current and future structure of cyber war. Both of these camps believe in being able to monitor, regulate and circumscribe cyber war after it has begun, as done successfully with conventional war. This is a false hope. The ability to monitor, regulate, and circumscribe cyber action is best achieved through strategy that can inculcate preemptive fear and thereby induce caution and hesitation. Current conventional strategies that aim for trust, target distinction, and minimizing noncombatant impact simply and inexplicably ignore how cyber war is organized, structured, and operationalized.



Liberal thinking also dominates the legal community that is heavily leaned upon for law projects and the strategic thinking meant to infuse said projects for the cyber domain: “[An effective solution to the global challenge of cyber‐attacks] cannot be achieved by individual states acting alone. It will require global cooperation. We therefore outlined the key elements of the cyber treaty ‐ namely, codifying clear definitions of cyber warfare and cyber‐attack and providing guidelines for international cooperation on evidence collection and criminal prosecution ‐ that would provide a more comprehensive and long‐term solution to the emerging threat of cyber‐attacks.” (Hathaway 2012)

The above review shows yet another camp focusing on mitigating risk and limiting damage in the cyber domain ex post facto. Regardless of philosophical standing, political agendas, or theoretical acumen, every camp that examines the problem of parameters and definitions in the cyber domain seems to exclude consideration of preemptive strategies built upon fear and inducing reluctance to action. Gen. Alexander of US Cyber Command cited the need to establish the lanes of the road for what governments can and cannot pursue, and said that establishing those lanes was the necessary first step to addressing the challenge of cyber‐attacks (Hathaway 2012). What all of the camps examined here have in common is a tendency to give lip service to strategy but then focus exclusively on ex post facto operations to establish progress. If the focus continues to be on agency action and not on structural deficiency, then progress will not simply remain slow: it will become non‐existent.

There has been a small beginning in the literature attempting to define this mindset change and its strategic importance, focusing on how the goal for major powers should not be the futile hope of developing a perfect defensive system of cyber deterrence, but rather the ability to instill deterrence based on a mutually shared fear of an offensive threat. The United States is better positioned by expanding to an open, transparent policy that seeks to compel deterrence from the efficacy of its offensive cyber capabilities (Crosston 2011). There has been an even smaller start at defining how deterring, pre‐emptive cyber power works or what it strategically looks like. Ideally, this overt cyber strategy would create credibility in virtual weapons which employ disruptive cascading effects so powerful as to negate their use. The key would be in establishing plausible fear in the adversary. Given the recent revelations about Stuxnet and the effectiveness of the Duqu and Flame viruses (which quite possibly moved beyond Stuxnet capabilities), cyber weapons are rapidly obtaining that fearful reputation, and thus deterrence via overt cyber strategy can no longer be considered pure fantasy. It is an important balancing argument for developing a fully encompassing strategy that allows for both covert and overt US cyber power (Crosston 2012). In essence, adversaries need to be made to believe, out of rational self‐interest, that good behavior will avoid massive debilitation and that bad behavior carries severe consequences. Ironic as it may seem, perhaps the key to developing this overt cyber strategy of preemptive deterrence is to rely on old‐school realist strategy while simultaneously moving away from old‐school realist norms of conventional warfare.
This new literature bears on the ambiguity assertion because this mindset change and strategic shift is arguably the best method of fighting Duqu's Dilemma: the only way to overcome the ambiguity is to avoid being put in a situation where the ambiguity has to be addressed. In other words, the current cyber reality and its foreseeable future make ex post facto strategies inherently inferior to preemptive ones.

6. Duqu’s dilemma: Why it matters

This analysis has pinpointed flaws in the current thinking and efforts to establish clear definitions and parameters governing the rules and operations of cyber war. The emphasis placed here on inherent structural difficulties, namely the innate cyber civilian/military fusion, has shown the likely damaging and deadly consequences for societies when strategies do not focus on the effort to preemptively stop cyber action but instead focus on operational considerations after conflict has begun. Only now are isolated legal analyses beginning to highlight these problems: “It is unlikely that a state such as the United States could take precautions against the effect of attacks on military objectives by separating military objectives from civilians and civilian objects in cyberspace. This is because of the interconnectedness of US government and civilian systems and the near complete government reliance on civilian companies for the supply, support, and maintenance of its cyber capabilities… Proportionality assessments likely will prove particularly precarious in cyberspace, where outcomes are more difficult to predict than in the physical world:



physical attacks at least have the advantage of physics and chemistry to work with. Because, say, the blast radius of a thousand‐pound bomb is fairly well understood, one can predict what definitely lies outside the blast radius and what definitely lies inside. Error bands in cyber‐attacks are much wider and less well‐known… [Most reports do not explain how] these public‐private partnerships could be constituted in a manner that adequately considers laws of war issues, nor do [they] address the likely use of active defenses by the private sector.” (Lobel 2012)

As illustrated above, this structural issue is more than just semantics. It literally covers who engages in cyber war, what can be destroyed in cyber war, who can be a victim during cyber war, even the philosophical and ethical questions meant to be asked about cyber war itself. Duqu's Dilemma is an entreaty to move away from unobtainable goals and idealistic dreams in the futile hope of creating sanitized cyber war. Cyber war will never be sanitized. Consequently, contemporary strategic thinking about the cyber domain must start treating the ambiguity assertion with the same gravity that the more famous attribution problem receives.

References

Anatolin‐Jenkins, Vida CDR, “Defining the Parameters of Cyberwar Operations: Looking for Law in All the Wrong Places?” Naval Law Review 51:132, 2005.
Andrues, Wesley, “What US Cyber Command Must Do,” Joint Forces Quarterly, Issue 59, 4th quarter, 2010.
Anonymous, “Syria’s Secret War Against the Cyber Dissidents,” The Daily Star (Beirut, Lebanon), Jul 12 2011.
Anonymous, “Cyber War Warning,” Derby Evening Telegraph (Derby, UK), Feb 5 2011.
Anonymous, “Military Ponders Cyber War Rules,” Los Angeles Times (Los Angeles, USA), Apr 7 2008.
August, Ray, “International Cyber‐jurisdiction: A Comparative Analysis,” American Business Law Journal, 39:4, Summer 2002.
Baldor, Lolita, “Cyber Warriors,” Army Times, Aug 6 2012.
Clarke, Richard, “The Coming Cyber Wars,” Boston Globe (Boston, USA), Jul 31 2011.

Crosston, Matthew, “Virtual Patriots and a New American Cyber Strategy: Breaking the Zero‐sum Game,” Strategic Studies Quarterly, Vol. 6, No. 4, Winter 2012.
Crosston, Matthew, “World Gone Cyber M.A.D: How Mutually Assured Debilitation is the Best Hope for Cyber‐deterrence,” Strategic Studies Quarterly, Vol. 5, No. 1, Spring 2011.
Curran, John, “Updated Rules for Cyber Conflict Coming Soon, Defense Officials Say,” Cybersecurity Policy Report, Mar 26 2012.
Department of Defense, “Strategy for Operating in Cyberspace,” (Washington DC, USA), Jul 2012.
Dunlap, Charles, “Perspectives for Cyber Strategists on Law for Cyberwar,” Strategic Studies Quarterly, Spring 2011.
Fallows, James, “Cyber Warriors,” The Atlantic Monthly, 305:2, Mar 2010.
Fryer‐Biggs, Zachary, “Turf War Slows New US Cyber Rules,” C4ISR, 12, Jun 1 2012.
Gervais, Michael, “Cyber Attacks and the Laws of War,” Journal of Law and Cyber Warfare, 30:2, 2012.
Gjelten, Tom, “Shadow Wars: Debating Cyber Disarmament,” World Affairs, 173:4, Nov/Dec 2010.
Gorman, Siobhan and Julian Barnes, “Rules for Laws of War: US Decides Cyber Strike Can Trigger Attack,” The Australian, Jun 1 2011.
Gross, Michael Joseph, “A Declaration of Cyber‐war,” Vanity Fair, 53:4, Apr 2011.
Gutmann, Ethan, “Hacker Nation: China’s Cyber Assault,” World Affairs, 173:1, May/Jun 2010.
Hathaway, Oona, et al, “The Law of Cyber‐Attack,” California Law Review, 2012.
Jarrett, Stephen, “Offensive Cyber Warfare,” Proceedings 137 (United States Naval Institute), Dec 2011.
Jensen, Eric Talbot, “Sovereignty and Neutrality in Cyber Conflict,” Fordham International Law Journal, 35:815, March 2012.
Leithauser, Tom, “Cyber War Rules Won’t Cover All Situations, DoD Official Says,” Cybersecurity Policy Report, May 17 2010.
Leithauser, Tom, “Rules of War Should Apply to Cyber Conflict,” Cybersecurity Policy Report, Feb 14 2011.
Liaropoulos, Andrew, “War and Ethics in Cyberspace: Cyber‐conflict and Just War Theory,” European Conference on Information Warfare and Security, 177‐XI (Reading, UK), Jul 2010.
Lin, Patrick, “War 2.0: Cyberweapons and Ethics,” Communications of the ACM, 55:3, March 2012.
Lobel, Hannah, “Cyber War Inc: The Law of War Implications of the Private Sector’s Role in Cyber Conflict,” Texas International Journal of Law, 47:3, Summer 2012.
Mavhunga, Clapperton, “The Glass Fortress: Zimbabwe’s Cyber‐Guerilla Warfare,” Journal of International Affairs, 62:2, Spring 2009.
Mudrinich, Erik, “Cyber 3.0: The Department of Defense Strategy for Operating in Cyberspace and the Attribution Problem,” Air Force Law Review, 68, 2012.
Nakashima, Ellen, “Pentagon: Cyber Offense Part of Strategy,” The Washington Post (Washington DC, USA), Nov 16 2011.
Nakashima, Ellen, “Pentagon Seeks to Engage Rules of Engagement in Cyber War,” The Herald (Everett, Washington, USA), Aug 10 2012.
Nye, Joseph, “Nuclear Lessons for Cyber Security?” Strategic Studies Quarterly, Winter 2011.



Schaap, Arie, “Cyber Warfare Operations: Development and Use Under International Law,” Air Force Law Review, 64, 2009.
Schwartz, Matthew, “The Case for a Cyber Arms Treaty,” InformationWeek, Aug 24 2012.
Stanton, John, “Rules of Cyber War Baffle US Government Agencies,” National Defense, 84:555, Feb 2000.
Stavridis, James and Elton Parker, “Sailing the Cyber Sea,” Joint Forces Quarterly, Issue 65, 2nd quarter, 2012.
Sternstein, Aliya, “Experts Recommend an International Code of Conduct for Cyberwar,” National Journal, Jun 10 2011.
Temple, James, “In Cyber War, Be Careful How the Worm Turns,” San Francisco Chronicle, Jun 10 2012.
Tennant, Don, “The Fog of (CYBER) War,” Computerworld, 43:16, Apr 27 2009.
Tsirigotis, Anthimos Alexander, “Cyber Warfare: Virtual War among Virtual Societies,” European Conference on Information Warfare and Security, 389‐XII (Reading, UK), Jul 2010.
Zekos, Giorgios, “Cyber‐Territory and Jurisdiction of Nations,” Journal of Internet Law, 15:12, Jun 2012.



Hacking for the Homeland: Patriotic Hackers Versus Hacktivists

Michael Dahan
Departments of Public Policy and Public Administration and Communication, Sapir College, Israel
dahanm@gmail.com

Abstract: This paper discusses the phenomenon of "patriotic" hacking, i.e. cyber attacks mounted by hackers against states with which there is a prolonged national conflict, such as India‐Pakistan, China‐Taiwan, Russia‐Chechnya and, of course, Israel‐Muslim countries. The paper does not look at hacking perpetrated by countries themselves (or their proxies) in the form of cyber warfare, but rather at individual hackers and hacker groups. These hackers are then compared to cosmopolitan hackers or hacktivists, active in global and national arenas. The political motivations and ideology of both groups are explored. Case studies for comparison are drawn primarily from the Israeli‐Muslim cyber conflict, with an emphasis on the November 2012 ("Operation Pillar of Cloud") conflict in Gaza and its parallel arena in cyberspace. The Gaza case is unique in that patriotic hackers are joined by hacktivist groups such as Anonymous and LulzSec in mounting cyber attacks against Israeli institutions and individuals.

Keywords: patriotic hacking, hacktivism, cyberwar, Israel, Palestine, Middle East

1. Introduction
The recent and ongoing conflict in Gaza (November 2012) provides a portentous backdrop for the discussion of patriotic hacking as well as hacktivism. A number of days into the conflict, the Israeli Minister of Finance 1, Yuval Steinitz, reports that over 44 million cyber attacks have been mounted against government institutions, the financial sector and the public sector 2. In effect, cyberspace has become an additional front in armed conflict, a front where hackers, self-styled or otherwise, replace or supplement combatants. Over the last decade there have been numerous salvos of hacker attacks between patriotically motivated hackers in Israel and the Muslim world, particularly at times of armed conflict (most notably the second Intifada, the Second Lebanon war, and tensions between Israel and Iran). For example, this past year a Saudi-based hacker broke into a number of Israeli websites and released the credit card information of almost half a million Israelis. Israeli hackers retaliated in kind by hacking Saudi websites and releasing credit card and personal information. This minor hacking war 3 ended with the suspicious death of the Saudi hacker. While both groups are politically motivated, cosmopolitan hackers and hacktivists seek to advance political/social agendas that do not necessarily touch upon their country of citizenship or residence; rather, they conduct attacks in order to promote, among other issues, freedom, free speech, human rights and information ethics. Hacktivism as such is the political extension of the original hacker credo. It is the new "new politics" 4 of the digital age, similar to traditional political activism (demonstrations, sit-ins, civil disobedience, etc.). Indeed, hacktivism is one of the tools used by some civil society organizations in order to advance their cause(s), and is often seen as synonymous with "direct action". Patriotic or nationalist hackers have a different agenda and a different politics. They see themselves as irregular soldiers, or conscripts fighting a war for their country, a form of cyber militia. Rather than cosmopolitan in nature, their world view tends to be narrow, nationalistic and parochial. Patriotic hackers always identify themselves in nationalistic terms – Israeli, Palestinian, Iranian, etc. Attacks are motivated by strong feelings of patriotism and nationalism, reflected in the language and rhetoric used. Targets very often differ from those of political activism, and the actions of the patriotic hacker may result in serious damage to targeted systems. Many of these hackers note that they are representing and salvaging national pride in mounting these attacks or reprisals. In some cases patriotic hackers portray themselves as extensions of the state, acting where the state will not or cannot.

1 In Israel, the Ministry of Finance is responsible for the implementation of electronic government, and for overseeing government websites.
2 http://www.haaretz.co.il/captain/net/1.1867635 (Hebrew). Retrieved 18/11/12.
3 "Saudi Hacker Dies of Asthma Attack", http://english.alarabiya.net/articles/2012/04/22/209470.html Retrieved 18/11/12.
4 "New politics" refers here to the strategies and tactics adopted by public interest groups and civil society organizations in the US and Europe during the Vietnam war. These groups pioneered the use of sit-ins, direct action and political mobilization of groups outside the political parties.


2. Hackers and the hacker ethic
In 1974, in his book Computer Lib/Dream Machines, Theodor (Ted) Nelson expressed many of the ideas and ideals that were to become part of the hacker ethic and would later feed ideas of hacktivism. Nelson believed that people should make use of computers in order to gain greater access and control over society. The term "hacker ethic" is attributed to author Steven Levy (1984) and his seminal book, Hackers: Heroes of the Computer Revolution. Levy provides insight into the development of the hacker ethic in an almost ethnographic fashion. The ethic itself places an emphasis on access (broadly defined and not limited to data and computer networks), freedom of information (a derivative of access), freedom/liberty, as well as improvement to quality of life (also broadly defined). Levy sums up the key points of the hacker ethic thus:

Access to computers—and anything which might teach you something about the way the world works—should be unlimited and total.

Always yield to the Hands‐on Imperative!

All information should be free.

Mistrust authority — promote decentralization.

Hackers should be judged by their hacking, not bogus criteria such as degrees, age, race or position.

You can create art and beauty on a computer.

Computers can change your life for the better.

While Levy focuses primarily on North American hackers, similar expressions are found among their European counterparts. The second edition of The New Hacker's Dictionary (1993: 218-219), compiled by Eric Raymond, defines the hacker ethic as "1. the belief that information sharing is a powerful positive good and that it is the ethical duty of the hacker to share their expertise by writing free software and facilitating access to information and to computing resources wherever possible and 2. The belief that system cracking for fun and exploration are ethically OK as long as no theft, vandalism, or breaches of confidentiality are committed". In 1986 the hacker known as "The Mentor" (Lloyd Blankenship) published a text following his arrest for hacking entitled The Conscience of a Hacker, popularly known as the Manifesto of a Hacker. The text eventually came to serve as a guideline and moral compass for hackers. He writes that: …This is our world now... the world of the electron and the switch, the beauty of the baud. We make use of a service already existing without paying for what could be dirt-cheap if it wasn't run by profiteering gluttons, and you call us criminals. We explore... and you call us criminals. We seek after knowledge... and you call us criminals. We exist without skin color, without nationality, without religious bias... and you call us criminals. You build atomic bombs, you wage wars, you murder, cheat, and lie to us and try to make us believe it's for our own good, yet we're the criminals. Yes, I am a criminal. My crime is that of curiosity. My crime is that of judging people by what they say and think, not what they look like. My crime is that of outsmarting you, something that you will never forgive me for. I am a hacker, and this is my manifesto. You may stop this individual, but you can't stop us all... after all, we're all alike. 5 Finnish philosopher Pekka Himanen (2001) emphasizes the communal aspect of the hacker ethic, juxtaposing his ideas with those of Weber and the Protestant work ethic. It is worth noting here that within hacker communities status is accorded via a form of meritocracy, where status is conferred based on hacking prowess. Himanen notes four components of the hacker ethic (2001: 140-142): first, the hacker work ethic is defined as melding passion with freedom; second, the motivation of money is replaced by the motivation of creating with and for the community; third, the hacker network ethic, what Himanen calls "nethic", is defined in terms of community and the desire for all to participate; finally, there is an emphasis on creativity in the hacker's work. Closely related and linked to these ethics are those of the open source software movement, perhaps best personified by Richard Stallman, often called the last true hacker, head of the Free Software Foundation (FSF).

5 http://www.phrack.org/issues.html?issue=7&id=3&mode=txt Retrieved 18/11/12.



Stallman suggests four "freedoms" that are essential for software to fall under the free (as in liberty) software definition. These are: the freedom to run the program for any purpose; the freedom to study how the program works and change it so it does your computing as you wish (access to the source code is a precondition for this); the freedom to redistribute copies so you can help your neighbor; and the freedom to distribute copies of your modified versions to others, so that the whole community has a chance to benefit from your changes (access to the source code is again a precondition for this) 6. One may note the emphasis on three components also expressed in the hacker ethic: freedom/liberty, access, and community. Stallman's political writings 7 are also revealing. One finds a strong commitment to liberalism, human rights, and support for the alter-globalization movements, matched with a sense of communitarianism. Stallman opposes anything that blocks the free flow of information – physical or virtual. For example, following a meeting with members of the Palestinian IT Association (PITA) in Ramallah in 2002, Stallman seemed most shocked by the wall that was being constructed by the Israelis to separate Israel from the Palestinian-controlled territories. Stallman remarked to me then that he was offended by the wall and what it represented. Stallman's hacker ethic extends into the real world politics of brick and mortar. When analyzing similar texts written by North American hackers we find complementary threads of liberalism and communitarianism, and at times libertarian approaches to politics. Setting aside a drawn-out discussion of these political ideologies, the common denominator is their emphasis on liberty (both negative and positive) and derivatives thereof. Individual liberty is balanced with community commitment, as noted earlier. European hackers, on the other hand, particularly those from northern Europe, seem to be guided primarily by socialist and Marxist ideologies, yet share a similar commitment to liberty, access and community, and have had some success in translating these into government policy regarding software and the Internet. Studies have shown that social democratic northern European countries are the largest contributors (relative to population) to open source projects (Söderberg 2002). That study shows a strong correlation between general welfare systems, such as those in social democratic countries, and non-commercial projects. This is not surprising, in that the gift economy inherent in the hacker ethic closely parallels the idea of "social surplus" prevalent in Marxist thought (ibid.). Berry and Moss (2006) go so far as to suggest realizing a radical democracy founded on the basis of hacker ethics and philosophy, or what they call "libre culture" (ibid.).

3. Hacktivism
Growing out of the hacker ethic is its active political expression – hacktivism. The term was first coined by Cult of the Dead Cow (cDc) members Oxblood Ruffin, Omega and Reid Fleming in 1998. The term was meant to refer to the use of technology in order to advance human rights and foster the open exchange of information (Delio 2004). Hacktivism is to the Internet what the "new" politics of the 1960s and 1970s was to the political system of the time. Hacktivism brings direct action to cyberspace and enables grassroots resistance through technology. Ronald Deibert, head of the University of Toronto's Citizen Lab, notes that "the combination of hacking in the traditional sense of the term – not accepting technologies at face value, opening them up, understanding how they work beneath the surface, and exploring the limits and constraints they impose on human communications – and social and political activism is a potent combination" (Delio 2004). Simply put, hacktivism is hacking for a political purpose. Hacktivist groups like cDc, Anonymous and LulzSec often claim that in their actions they are advancing Article 19 of the United Nations Universal Declaration of Human Rights, which states that: "Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers." 8 The "Hacktivismo Declaration" 9 authored by the cDc incorporates Article 19 into its basic tenets. According to Jordan (2009), hacktivists are political activists, most often associated with the alter-globalization movement, who utilize hacking techniques to create grassroots activist political campaigns. Others make tactical use of the media (Lovink 2008). Hacktivists produce both ephemeral electronic civil disobedience actions, such as blocking online sites with mass electronic action, and infrastructures of secure anonymous communication, often built to support human rights workers (Jordan and Taylor, 2004).

6 http://www.gnu.org/philosophy/free-sw.html Retrieved 18/11/12.
7 http://www.stallman.org/archives/2012-sep-dec.html Retrieved 18/11/12.
8 http://www.un.org/en/documents/udhr/index.shtml#a19 Retrieved 18/11/12.
9 http://hacktivismo.com/public/declarations/en.php Retrieved 18/11/12.



For example, the open source Tor Project 10 is dedicated to providing a defense against network surveillance via Internet traffic analysis conducted by states. The software was used by activists during the protests against the election results in Iran in 2009-2010, and throughout the Middle East during the so-called Arab Spring, in an attempt to minimize government surveillance during the protests. The tactics used by hacktivists include, but are not limited to: website defacement; DNS hijacking; redirects; denial of service (DoS and DDoS) attacks; information theft and dissemination; website spoofing; virtual sit-ins; and virtual sabotage. Often in the Israeli context, non-destructive Trojans are employed against government websites and email systems to direct users to sites sympathetic to the Palestinian cause. Government sites, intelligence agencies, the military, politicians, financial institutions, and large corporations are the primary targets of these groups, though at times individuals and political parties are targeted. Traditional media are engaged in order to amplify hacks. Often, prior to an attack, these groups will disseminate a press release, sometimes via YouTube, detailing the motivations for the particular attack. While individual hacktivists crave anonymity, hacktivist groups have a well-planned media strategy.
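As a minimal illustration of the anonymity infrastructure mentioned above, the following sketch routes an HTTP request through a locally running Tor client. It is a hedged example rather than Tor Project tooling: it assumes Tor is listening on its default SOCKS port (9050) and that the third-party Python packages requests and PySocks are installed (pip install requests[socks]); the check URL is a public Tor Project service that reports whether a request arrived through a Tor exit node.

    # Route an HTTP request through a local Tor client (illustrative sketch).
    # Assumes: Tor running locally on its default SOCKS port 9050, and the
    # packages requests + PySocks installed (pip install requests[socks]).
    import requests

    # socks5h (rather than socks5) makes DNS resolution happen inside Tor,
    # so the local resolver never sees the destination hostname.
    proxies = {
        "http": "socks5h://127.0.0.1:9050",
        "https": "socks5h://127.0.0.1:9050",
    }

    # check.torproject.org reports whether the request came from a Tor exit.
    resp = requests.get("https://check.torproject.org/api/ip",
                        proxies=proxies, timeout=30)
    print(resp.json())  # e.g. {"IsTor": true, "IP": "..."}

The socks5h scheme is the important design detail: it pushes DNS lookups into the Tor circuit as well, so the local network observes neither the traffic content nor the destination hostname.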

4. The patriotic hacker
As opposed to the hacker groups and hacktivists mentioned earlier, the patriotic hacker has a different set of motivations and tends to be closer in nature to the cyber criminal or cyber terrorist. Patriotic hacking refers to actions by private citizens of a country acting on their own initiative against a perceived threat by an enemy of the state, or attacking countries involved in a conflict with their own. Examples range from China (surrounding issues related to Tibet and Tibetan independence) to India and Pakistan, and Russia and Chechnya. This paper focuses on one of the hotbeds of patriotic hacking – Israel and the Muslim world. Israel has both suffered attacks – for example, at the time of this writing, following a week of conflict between Israel and Hamas in the Gaza Strip, there have been over 44 million attacks (primarily denial of service attacks) mounted against Israeli government sites and the private sector – and Israeli citizens have mounted attacks against Arab and Muslim countries. Most notably, Israeli hackers, in retaliation against a Saudi hacker 11 who in January 2012 had hacked almost 500,000 Israeli credit cards and published them on the web, hacked Saudi and Arab websites and released credit card information of citizens in Saudi Arabia 12. One of the earliest instances of patriotic hacking conducted against Israel occurred during the second Intifada in 2001. At that time, individuals and groups from throughout the Middle East conducted what I have termed in the past an Interfada 13 against Israeli websites in support of the Palestinian people. This primarily took the form of website defacing and spoofing, DoS and DDoS attacks, redirection, and attempts to steal personal information from websites and online databases. Primary targets were government and military institutions, though all Israeli websites were considered fair targets. Israeli hackers and script kiddies organized in an attempt to counterattack Arab and Muslim websites. A favorite hack was redirecting Islamic websites to porn sites. Patriotic hackers often see themselves as a digital extension of the state, correcting a perceived wrong or attack committed against their country. As opposed to the hacker and the hacktivist, their political ideology tends toward conservatism and nationalism. Patriotic hackers tend to be on the "right" of the political spectrum. In the Israeli case, for example, patriotic hackers tend to be secular, nationalist and right wing politically. When not involved in patriotic hacking, some are involved in cybercrime. An example is the Israeli self-styled patriotic hacker known as the "Analyzer" (Ehud Tannenbaum), who came to national prominence after hacking the FBI website as a teen. Tannenbaum went on to be a well-known figure in the news and claimed to mount attacks against Arab and Muslim websites. He was eventually indicted for credit card and ATM fraud in the US in 2009. Motivations among patriotic hackers, as such, tend to be retribution for an attack or a perceived attack, defense of national pride, or proactive attacks against civilian and military targets. Motivation for the Israeli patriotic hacker is primarily political. The Arab or Muslim patriotic hacker also tends to be nationalistic and/or committed to forms of pan-Arabism, with a strong religious element in some cases.

10 https://www.torproject.org/about/overview.html.en Retrieved 18/11/12.
11 http://www.ynetnews.com/articles/0,7340,L-4170465,00.html Retrieved 18/11/12.
12 http://www.ynetnews.com/articles/0,7340,L-4173264,00.html Retrieved 18/11/12.
13 http://www.wired.com/politics/law/news/2001/01/41154 Retrieved 18/11/12.



Motivation in this case is at times both political and religious, with an overarching degree of support for the Palestinian cause. Attacks spike significantly during times of conflict. Mutual attacks peaked during the second Intifada, operations "Defensive Shield" and "Cast Lead", and the second Lebanon war. The current operation in Gaza, "Pillar of Cloud", has led to an unprecedented number of attacks by both sides. Cyber attacks by the patriotic hacker very often mirror real world political tensions and conflict. In Israel and in the Arab world the patriotic hacker is often seen as a hero by the public, defending the national honor. A successful hack of an Israeli or an Arab website will inevitably lead to interviews and press reports and generate public interest and support. The hacker becomes an instant celebrity and is often called on in news reports to provide commentary on hacking-related stories (see, for example, Denning 1999, and a recent interview on Israeli TV with Israeli patriotic hacker Mickey Bouzaglou 14). This stands in stark contrast to the hacktivist, who seeks anonymity. Patriotic hackers tend to see themselves as acting "behind enemy lines" in the interest of the state. For the Arab or Muslim hacker, a successful hack against an Israeli site is particularly appealing in light of Israel's reported high-tech prowess and capability. It allows the Arab or Muslim patriotic hacker to portray themselves as an Arab "David" to Israel's perceived technological Goliath. Aouragh (2012) sees cyber attacks mounted against Israeli targets as a response to what she terms the "cybercide" being committed against the Palestinians by Israel.
Table 1: Key differences between hacker types

Ideology
Hacktivists: Generally cosmopolitan in nature; liberal (at times libertarian) ideology in the US; socialist roots in Europe; aspects of communitarianism; generally left of the political spectrum; individual liberties balanced by community commitment.
"Patriotic hackers": Parochial; nationalism and patriotism; generally right of the political spectrum; little to no cohesive ideology. Self-identify by nationality.

Motivation
Hacktivists: Advancement of political causes; advocacy; human rights; open access to information.
"Patriotic hackers": Defense of homeland; national pride/patriotism; occasional religious motivation.

Structure
Hacktivists: Fluid but stable group or network structure; sense of community; membership. Meritocracy.
"Patriotic hackers": Generally individuals; may form ad hoc groups during times of conflict. No permanent structure, membership or community.

Type and goals of attacks
Hacktivists: DoS/DDoS; web defacing; web redirects; DNS hijacking; information theft and dissemination; web spoofs; virtual sit-ins; non-lethal trojans. Attacks often humorous. Goals are political/social change, a form of "direct action". Minimal damage.
"Patriotic hackers": DoS/DDoS; trojans; worms; cyber theft; identity theft; attempts to damage/compromise infrastructures. May serve as state proxy in mounting attacks. Goals are to cause maximum damage. Attacks reflect actual political tensions.

Area of operations
Hacktivists: Primarily international; not limited by the nation-state.
"Patriotic hackers": Primarily national or regional (in the framework of the Arab-Israeli conflict).

Media strategy
Hacktivists: Anonymity of actual hacker members; groups seek traditional media attention to echo hacking for maximum effect and to maximize exposure. Attacks often planned as "media events" with press releases and YouTube videos prior to attacks.
"Patriotic hackers": No coherent media strategy; individual hackers seek media attention; often appear on TV; self-aggrandizing.

Operation "Pillar of Cloud" (or "Pillar of Defense") is the name given by the Israeli army to the military operation in the Gaza Strip launched following the targeted assassination by Israel of Hamas military leader Ahmed Jabari in November 2012. Almost immediately, Israeli websites were attacked by Arab and Muslim hackers, with a strong representation of North African hacker groups involved in the attacks 15. Hackers managed to steal lists of soldiers and reservists, including contact information.

14 http://www.youtube.com/watch?v=HqlsJ_eQwN8&feature=youtu.be Retrieved 19/11/12.
15 An excellent resource for monitoring these attacks, maintained by Dr. Tal Pavel, can be found at http://www.operationpillarofclouds.com/.



Text messages and emails were then sent to these soldiers and reservists warning them against participating in the conflict. These tactics mimic those of Israel during operation Cast Lead and the second Lebanon war, where Israel sent out text messages and cell broadcasts to civilians. News websites were hacked, stolen email addresses were used to send messages to Israeli citizens warning them against supporting the conflict, and poorly conceived propaganda videos were posted on YouTube 16. Israeli patriotic hackers responded in kind, hacking the primary Palestinian ISP, Palnet (hacked by Yuricanne), and publicly releasing user accounts, passwords and credit card information. Another Israeli hacker went so far as to form the Israel Internet Force (IIF), composed of "volunteer hackers" and dedicated to retaliating against Arab and Muslim hackers for daring to attack Israel 17. The hacker is praised publicly during an interview on state TV for activity that is patently illegal in Israel. He repeats again and again that he is doing this for Israel and to protect the state and its citizens. The rhetoric used is very nationalistic, right wing and patriotic. Patriotic hackers often describe themselves as being capable of doing things that the government cannot – either for lack of technical expertise or lack of political will. A rather unique aspect of the current Gaza conflict is the "joining together" of hacktivist groups like Anonymous and LulzSec with patriotic hackers in defense of Palestine and in a clear position against Israel. Anonymous has threatened to take down Israeli government sites if hostilities do not cease immediately 18. The campaign, coined #opIsrael, has led to an unprecedented number of attacks against Israeli websites, resulting in little real damage beyond defaced websites, downed servers and compromised personal information. Much more significant and structural damage has been inflicted by a worm apparently created by Iran in the framework of the cyber war between the two countries.

5. Conclusions
The phenomenon of patriotic hacking requires further research. There is a dearth of research on the topic, and what is available is largely anecdotal. A partial cause, of course, is the somewhat fluid identity of the patriotic hacker: "white hat"? "black hat"? "grey hat"? Lacking any cohesive or identifiable ideology beyond nationalistic rhetoric, the patriotic hacker is difficult to pin down beyond basic motivation and drive. The phenomenon is more prevalent in developing countries or those with strong degrees of nationalism, and rarer in post-industrial societies. The patriotic hacker tends to be more willing to "cross the line" into criminal activity, the purpose being to inflict as much damage as possible on the enemy. The patriotic hacker seems to drift between "defender of the homeland" and cyber criminal and back. There is no "ethic" for the patriotic hacker beyond that of nationalism, no manifesto beyond that of patriotism, no code of conduct beyond maximum damage. Patriotic hacker groups tend to form ad hoc during a given conflict and go dormant during periods of quiet, lacking a permanent structure, membership or community, and coalescing again with the next conflict. In theory, patriotic hackers could conceivably ignite a hacking war between rival countries, with the potential of drawing these states into actual armed conflict. Such attacks habitually mirror real world political tensions and violence, adding fuel to an already volatile situation. It is also plausible that these groups can be used as "proxies" by nation states (particularly pariah states, e.g. North Korea or Iran) to mount cyber attacks, providing these countries with a degree of "plausible deniability". The inherent danger of these groups is thus clear. In comparison, the hacker and the hacktivist are guided by any number of manifestos, ethics and declarations. The hacktivist marries hacking and political activism in order to influence issues beyond themselves, quite often not in their own countries. Attacks are restricted in terms of damage, and there is almost always a humorous aspect to these attacks, a "wink and nod" on the part of the hacktivist to the public, together with a wry smile. The purpose of the attack is often explained in a detailed communiqué prior to or immediately following the attack. The goal is always political or social change, political protest. The groups are noted for a somewhat loose yet semi-permanent structure, fairly stable membership and community, with commitment to a common ethos. While groups may form ad hoc during campaigns, larger groups like cDc, the Electronic Disturbance Theater (EDT), Anonymous, LulzSec and others are constantly active. Hacktivism and patriotic hacking are rapidly becoming a standard component of nongovernmental and civil society based political action and, in some cases, of governmental action. They have already become part and parcel of political communication, where political messages and meanings are transmitted through hacking, seeking to influence large publics.

16 http://www.youtube.com/watch?v=SoSbXrp2ZFA&feature=youtu.be Retrieved 18/11/12.
17 http://www.youtube.com/watch?v=HqlsJ_eQwN8&feature=youtu.be Retrieved 19/11/12.
18 http://bits.blogs.nytimes.com/2012/11/20/cyber-attacks-from-iran-and-gaza-on-israel-more-threatening-than-anonymouss-efforts/ Retrieved 20/11/12.



In some cases, patriotic hackers and hacktivists will attack similar targets (as was the case with the Anonymous group and Arab/Muslim patriotic hackers in Israel). States will continue to take advantage of patriotic hackers (as China, Russia, North Korea and Iran have) in mounting cyber wars, at the very least in order to allow a degree of plausible deniability and to supplement government cyber capabilities. This underscores the need for, and importance of, further research.

References
Aouragh, Miriyam. (In press). "Revolutionary Manoeuvrings: Palestinian Activism between Cybercide and Cyber Intifada", in Jayyausi L. (ed.) Media and Politics in the Contemporary Arab World. Muwatin Press.
Berry, David M. and Moss, Giles. (2006). "The Politics of the Libre Commons", First Monday, Volume 11, Number 9 (September 2006), http://firstmonday.org/issues/issue11_9/berry/index.html Retrieved 18/11/12.
Delio, Michelle. (2004). "Hacktivism and How It Got Here", Wired Magazine, July 7th 2004. http://www.wired.com/techbiz/it/news/2004/07/64193?currentPage=1 Retrieved 18/11/12.
Denning, Dorothy. (1999). "Activism, Hacktivism, and Cyberterrorism: The Internet as a Tool for Influencing Foreign Policy", Washington D.C.: Nautilus, http://www.iwar.org.uk/cyberterror/resources/denning.htm Retrieved 18/11/12.
Himanen, Pekka. (2001). The Hacker Ethic and the Spirit of the Information Age. New York: Random House.
Jordan, Tim and Taylor, P. (2004). Hacktivism and Cyberwars: Rebels with a Cause? London: Routledge.
Jordan, Tim. (2009). "Hacking and Power: Social and Technological Determinism in the Digital Age", First Monday, Volume 14, Number 7 (6 July 2009), http://www.firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/viewArticle/2417/2240 Retrieved 18/11/12.
Levy, Steven. (1984). Hackers: Heroes of the Computer Revolution. New York: Penguin Books.
Lovink, Geert. (2008). Zero Comments: Blogging and Critical Internet Culture. London: Routledge.
Nelson, Theodor. (1974). Computer Lib/Dream Machines. Self published.
Raymond, Eric S. (1993). The New Hacker's Dictionary, 2nd Edition. Cambridge, Mass.: MIT Press.
Söderberg, Johan. (2002). "Copyleft vs. Copyright: A Marxist Critique", First Monday, Volume 7, Number 3 (4 March 2002), http://www.firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/viewArticle/938/860 Retrieved 18/11/12.



Consequences of Diminishing Trust in Cyberspace
Dipankar Dasgupta and Denise Ferebee
Center for Information Assurance (CAE-R), Department of Computer Science, The University of Memphis, USA
dasgupta@memphis.edu
Abstract: Cyberspace has become an integral part of modern-day life—social, economic, political, religious, medical and otherwise. Without the availability of the Internet, today's businesses, government and society cannot function properly. Moreover, online social media and the blogosphere are bringing people together, providing platforms to share ideas and allowing voices to be heard. Ideally, cyberspace has no political, geographical or social boundaries; as a result it promotes globalization and unites people from all over the world. While the potential benefits of this interconnectivity are unlimited, this virtual world is also becoming a hackers' playground, the underworld's marketplace, a battleground for nation-states, and a vehicle for propaganda and misinformation. In this paper, we argue that with the growing threat of coordinated attacks, the release of complex malware and gradually diminishing trust in freely available information, the openness of the web and its global connectivity will no longer exist. Specifically, if this trend continues, the Internet will be partitioned, users will rely on information and news only through membership-based services, information flow will be limited to geographical and political jurisdictions and highly regulated by governments, and online business and critical knowledge will be shared only among alliances of friendly nations.
Keywords: cyberspace, cyber trust, misinformation, hacking, targeted malware, cyberwar

1. Cyberspace: Security, privacy and trust issues
The use of cyberspace has increased significantly in the last two decades, providing unimaginable benefits to humanity—bringing people together, making the world seem smaller, opening up new opportunities, accelerating the exchange of ideas and innovation, etc. At the same time, cyberspace is also opening the possibility of greater danger of harm on a larger scale. Cyber technology has made fraud and identity theft easier for hackers and criminals, and such activities are increasing with the rising popularity of online businesses and other uses in society. This paper focuses only on the consequences of diminishing trust in the openness of cyberspace. Figure 1 shows how different cyber entities continuously try to damage, diminish, misguide, misuse, and abuse various components and subcomponents (both software and hardware) of cyber-systems, resulting in growing mistrust in different segments of Internet users (Schneider, 1999). Online social networks (OSNs) have become part of our daily life; almost everybody has a social presence in the blogosphere. Different blog sites attract different people to participate in discussions, share information and form groups and online communities. While blogging, tweeting and other social and business networking usage is growing, studies show that most OSN users are vulnerable to identity theft and are targets of third-party information tracking (via cookies used by data aggregators). This allows the aggregator to track a user's movements across multiple websites, their navigation pattern and frequently visited sites. For example, Twitter has archived every tweet (250 million a day) and has agreed a deal allowing the UK-based company 'Datasift' to mine data posted since January 2010. The company will use the information (users' history, GPS information) to help firms with marketing campaigns and target influential users (Gladdis, 2012). OSNs are also being used to spread rumors, misinformation and hatred with various intents. Advertisers use different forms of spam for marketing via emails and blogs to promote products or services. Spam is also used to entice users familiar with a service, to exploit the search-engine reputation of the hosted service, to attract traffic from "neighboring" blogs, etc. (Tucker, 2012). Some companies are using third parties to mine the usage/access data of employees and customers to learn their online behavior and loyalties. For example, a Firefox add-on (Collusion) allows users to see all the third-party entities tracking their browsing pattern (Greengard, 2012). To identify market niches and customer types, some companies are seeking market data analysis not only of their own products and services but also of their competitors', in order to gain a competitive edge; however, such a broad data-digging venture may cross legal and ethical trade boundaries, leading to industrial espionage.
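As a rough, hedged illustration of the third-party tracking exposure described above (the kind of relationships a tool like Collusion visualizes), the sketch below fetches a page and lists the external hosts referenced by its scripts, images, frames and links; each such host is a potential tracking point. The target URL is a placeholder, and since only static HTML is inspected, the result understates what a full browser executing JavaScript would actually contact.

    # List third-party hosts referenced by a page -- a crude proxy for
    # tracking exposure. Placeholder URL; static HTML only (no JavaScript).
    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlparse
    from urllib.request import urlopen

    class ResourceCollector(HTMLParser):
        """Collect hosts found in src/href attributes of embedded resources."""
        def __init__(self, base_url):
            super().__init__()
            self.base_url = base_url
            self.base_host = urlparse(base_url).hostname
            self.third_party = set()

        def handle_starttag(self, tag, attrs):
            if tag not in ("script", "img", "iframe", "link"):
                return
            for name, value in attrs:
                if name in ("src", "href") and value:
                    host = urlparse(urljoin(self.base_url, value)).hostname
                    if host and host != self.base_host:
                        self.third_party.add(host)

    url = "http://example.com"  # placeholder target page
    collector = ResourceCollector(url)
    html = urlopen(url).read().decode("utf-8", errors="replace")
    collector.feed(html)
    for host in sorted(collector.third_party):
        print(host)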



Figure 1: An illustration of how the cyberspace is under continuous assault from various entities
Cyber criminals now use sophisticated keyloggers and ransomware (e.g. Citadel) to steal Personally Identifiable Information (PII) and financial data (Kitten, 2013). They can easily engage in identity theft, as there are many web businesses providing personal information that has been cross-referenced with publicly available information. These businesses are known as "people search engines". One such search engine, peoplefinders, sells personal data profiles that include names, addresses, dates of birth, family member names, marriage records, bankruptcies and liens, etc. Other web services, like Spokeo, even merge relationships from social network sites with personal data (e.g. email address, marital status); while these are valid businesses and useful resources for verifying someone's background, possibilities exist for misuse or exploitation (Jones, 2012). Cyber criminals are also taking advantage of the trust relationships formed among online social network users to steal personal data (Sherry, 2012). Figure 2 highlights the evolution of cyber threats and cybercrimes over the years—the use of a variety of sophisticated malware and social engineering techniques to launch attacks, exfiltrate data, and steal intellectual property. Hacktivists, loosely-connected hacker groups such as Anonymous, while opposing censorship of the Internet, are continuously challenging big and small businesses, causing disruption to their operations. In 2012, these groups claimed DDoS attacks against some leading banks to draw international attention (Kitten, 2013). Websites like WikiLeaks (Vijavan, 2010a) release sensitive information from time to time to expose government activities to the public. While such whistle-blowing may try to expose corruption, injustice, etc., such leakages in cyberspace appear to have far-reaching consequences. Cyber hackers are developing techniques for updating their malware more quickly with sophisticated scripting tools; for example, some are now using automated morphing strategies that allow the malicious code to evade standard cyber security tools. One such tactic, says security firm FireEye, is the use of "throwaway" domains for spear phishing e-mails, in order to keep technologies that rely on domain reputation analysis from sniffing out the sender's intentions.




Figure 2: Trend Micro's 2012 Global Threat Report (http://blog.trendmicro.com) illustrates the increased sophistication in attack techniques used to compromise and infiltrate cyberspace
Malware is now injected into hardware (logic circuits are manipulated to include redundant malicious functionalities, triggered via remotely controlled benign operations), so it is hard to remove even after the reinstallation of software and is beyond users' and system administrators' reach (BBC News, 2012). A new breed of customized, tightly targeted, and byte-level obfuscated malicious code disguises its appearance to evade reactive security measures of all kinds, specifically signature-based antivirus solutions. However, security analysts note that though each new instance of such a malware "family" may appear different, its behavior can be determined through long-term observation (Durkota and Dormann, 2008). A small illustration of this signature-versus-behavior distinction follows Table 1 below.
Table 1: Sample cyber threat statistics for the third quarter of 2012 (McAfee Labs, 2012)

Mobile Malware: more than 20,000 samples
Android Device Malware: more than 8,000 new samples
Ransomware: more than 200,000 samples
Messaging Threats: more than 0.8 trillion spam messages globally, continuing to decrease in volume
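The signature-versus-behavior point above can be illustrated with a deliberately benign sketch: two payloads whose bytes (and therefore hashes, the basis of simple signatures) differ entirely, while their observable behavior is identical. The payloads are harmless stand-ins, not malware; only the mechanics carry over to the morphing malware discussed above.

    # Benign illustration: byte-level variation defeats hash signatures,
    # but runtime behavior stays observable and identical.
    import contextlib, hashlib, io

    variant_a = b"print(sum(range(10)))"
    variant_b = b"print(sum([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]))"  # same effect, new bytes

    for name, code in (("variant_a", variant_a), ("variant_b", variant_b)):
        # A signature-based scanner sees two unrelated byte strings.
        print(name, "sha256 =", hashlib.sha256(code).hexdigest()[:16])

    for name, code in (("variant_a", variant_a), ("variant_b", variant_b)):
        # A behavioral monitor sees the same action from both variants.
        captured = io.StringIO()
        with contextlib.redirect_stdout(captured):
            exec(code.decode())
        print(name, "behavior ->", captured.getvalue().strip())  # "45" both times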

The first waves of recent malware designed to steal financial assets, intellectual property, or sensitive information broadly distributed a few threats to compromise the most vulnerable among millions of targets. But a second generation of customized, precisely targeted threats is a far greater challenge to today's security technologies and the businesses that depend on them (Symantec WP, 2012). In late October 2012, two new mobile device Trojans (Loozfon and FinFisher) were found, designed to remotely control, monitor and compromise Android devices (Kitten, 2013). Table 1 provides some statistics on growing malware threats to mobile devices (McAfee Labs, 2012). DNS servers are operated by the Internet Service Provider (ISP) and are included in our computer's network configuration. The DNS and DNS servers are a critical component of our computer's operating environment—without them, we would not be able to access websites, send e-mail, or use any other Internet services. Hackers have learned that if they can control a user's DNS servers, they can easily control what sites the user connects to on the Internet. By controlling the DNS queries, a criminal can get an unsuspecting user to connect to a fraudulent website or interfere with that user's online web browsing. One way criminals do this is by infecting computers with a class of malware called DNSChanger. In this scenario, the criminal uses the malware to change the user's DNS server settings, replacing the ISP's good DNS server with a rogue DNS server operated (via botnet) by the criminal (Paul, 2012; Wawro, 2012); a minimal sketch of a configuration check against this scenario appears after the list below. All these targeted attacks can be divided into two major classes (Abrams, 2010):




Targeting a specific company or organization - this type of attack is directed at a specific organization, and the intruder's aim is unauthorized access to confidential information such as commercial secrets (as with the Aurora attack).

Targeting specific software or IT infrastructure - this type of attack is not directed at a specific company; its target is the data associated with a certain kind of software. As this class presupposes a long-term attack, it is designed to circumvent protection systems (as with the Stuxnet attack).
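To make the DNSChanger scenario above concrete, the sketch below shows the kind of configuration check that clean-up tools performed: comparing the system's configured resolvers against address ranges attributed to rogue DNS servers. The ranges shown are illustrative rather than an authoritative list, and the sketch reads /etc/resolv.conf, so it applies only to Unix-like systems.

    # Flag configured nameservers that fall inside suspected rogue ranges.
    # Illustrative ranges only; real checks used the published DNSChanger list.
    import ipaddress

    ROGUE_RANGES = [ipaddress.ip_network(net) for net in (
        "85.255.112.0/20",  # range widely attributed to DNSChanger operators
        "67.210.0.0/20",    # illustrative second range
    )]

    def configured_nameservers(path="/etc/resolv.conf"):
        """Yield nameserver addresses from a resolv.conf-style file."""
        with open(path) as fh:
            for line in fh:
                parts = line.split()
                if len(parts) >= 2 and parts[0] == "nameserver":
                    yield parts[1]

    for ns in configured_nameservers():
        addr = ipaddress.ip_address(ns)
        suspect = any(addr in net for net in ROGUE_RANGES)
        print(ns, "-> SUSPECT" if suspect else "-> ok")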

Cyberwar, cyber terrorism and cyber espionage: Richard A. Clarke (Clarke and Knake, 2010), in his book Cyber War, defined cyberwar as actions by a nation-state to break into another nation's computers or networks for the purposes of causing damage or disruption and accessing intellectual property and state secrets. These nation-states may use the information gained to seek an economic, political, and military edge (Kitten, 2013). According to Joe St. Sauver (Sauver, 2008), the following are illustrative actions of cyberwar:

Low-intensity advanced persistent threats (APTs) and asymmetric economic cyber attacks, such as spam and phishing.

Cyber attacks on fundamental Internet protocols such as DNS (the domain name system) or BGP (the Internet's wide-area routing protocol).

Kinetic ("physical") attacks on high value Internet “choke points” such as cable landing sites or Internet exchange points.

Operations conducted against critical civilian infrastructure such as industrial control systems (so‐called “SCADA” systems).

Strategic high-altitude strikes aimed at destroying or disrupting national infrastructure on a wide scale through electromagnetic pulse (EMP) effects.

On April 7, 2010, Computerworld magazine reported state-sponsored cyber espionage in which more than 30 companies, including Google, were hacked using custom malware to get around people's antivirus protection (Vijavan, 2010). Many security experts believe that cyberwar originated with the Russia-Georgia conflict in 2008, when Georgia's Internet infrastructure was severely disrupted by cyber attacks carried out by Russia through an array of botnets, distributed denial-of-service attacks, logic bombs and other online offensives (Swaine, 2008). Stuxnet, discovered in June 2010, is believed to be the first malware targeted specifically at critical infrastructure systems (Hurwitz, 2012). Since then a series of targeted malware has emerged, including Duqu (September 2011) and a new espionage or surveillance toolkit, "Gauss", which Kaspersky says seems to come from the same group that developed Stuxnet, Duqu, and Flame. The data-stealing Mahdi Trojan, discovered in February 2012 and publicly disclosed in July, is believed to have been used for espionage since December 2011. Mahdi records keystrokes, screenshots, and audio, and steals text and image files. Flame was discovered in May 2012 during Kaspersky Lab's investigation into a virus that had hit Iranian Oil Ministry computers in April. Kaspersky believes that the malware, which is designed for intelligence gathering, had been in the wild since February 2010, but CrySyS Lab in Budapest says it could have been around as far back as December 2007. The malware known as "Wiper" wipes data from hard drives, placing high priority on files with a .pnf extension (the type of file Stuxnet and Duqu used), has other behavioral similarities, and deletes all traces of itself. The Shamoon virus, discovered in August 2012, attacks Windows computers and is designed for espionage (Mills, 2012). These cyber-attacks are taking the form of economic warfare; however, it remains to be seen how they will disrupt business confidence and the world economy. Cyberwar vs. physical war: these two wars are fundamentally different: one is pursued secretly and the other usually starts with an official declaration (unless fought by proxy). In many cases, however, one type follows the other, or both may be pursued in parallel in serious conflicts. There are distinct differences in cyberwar because:

In cyberspace there is no physical boundary (territorial control is hard to manage), in particular with public cloud computing environments and global data centers (Levy, 2012), which opens the door for cyber espionage.

Cyberwar may be happening all the time: there is no notion of wartime and peacetime, nor of a defined battlefield; the rules of engagement of cyber warfare are also yet to evolve (Heichler, 2012).




Identifying friend and foe: determining who is a friend and who is an enemy is hard; a cyber-attack and its source are very difficult to trace, since cyber evidence is hard to collect and verify, even for experts.

Physical weapons can be kept under control (under lock and key) and their existence can be tracked, but cyber hacking tools, once released, remain in the wild. Developers therefore need to devise solutions to contain such tools so that they do not backfire.

Cyber superiority will be very difficult to achieve as the technology is rapidly changing; collective multi-national intelligence will be essential to quickly detect Advanced Persistent Threats (APTs) (FireEye WP, 2012) and win a cyberwar.

According to a Computerworld report (Vijavan, 2012), "If a full-fledged cyberwar were to break out, the nation's economy would be hit hard. Banks might not be able to function, electricity, water and other utilities could be shut off, air travel would almost certainly be disrupted, and communications would be spotty at best—in a word, chaos".

2. Consequences of diminishing trust
The free flow of unbiased information and knowledge is essential to rapid innovation and to the well-being of mankind. Highly interdisciplinary interaction works accordingly: experts (specialized in their own domains) hold pieces of technological puzzles, and when they put their collective knowledge together in the right order, an innovation emerges. Future creation may no longer be an individual effort; rather, collective knowledge will move technological advances in different directions in a short period of time. Cyberspace is serving as a vehicle for long-distance collaboration and interaction. People worldwide are also increasingly showing signs of their virtual presence, which has gradually become the social norm. Cyberspace has transformed the notion of reference points, accuracy, truth and trust. In the physical world, we observe the characteristics of a tangible object from its proximity, and we perceive the consequences of our actions, our performance, and our bodies as reference points to check perceptions. These reference points become attenuated in cyberspace, sometimes disappearing (Tompkins 2003). However, all the benefits of the openness of the Internet and its global connectivity are continually being exploited by hackers, criminals and nation-states. If the current trend of cyber exploitation continues, it will be challenging to distinguish:

Information and misinformation

News and propaganda

Software and malware

If cyber espionage continues, the global IT supply chain of components, products and services will be at high risk. Already some hardware components have been found to be infected with malware during production, and multi-national companies are starting to carefully scrutinize these electronic products, growing suspicious of the place (country) where they are manufactured (BBC News, 2012). As trust continues to diminish, once the triggering point is crossed the Internet will be hard-partitioned and highly controlled/regulated by states, affecting the global hardware/software market for businesses. It is not clear how multi-national companies will do business in the global marketplace in the future. It is to be noted that soft partitioning of the web has already started, through webs of trust, filtering of rogue IPs and MAC addresses, content-based filtering, domain-based site blocking, etc.; a small sketch of how such blocking can be observed follows this paragraph. Hard partitioning is also visible in many countries, as certain websites are not accessible from within their geographical boundaries. In particular, cyber policies for Internet blackouts, control of web content and communication channels, and blocking of social networking sites are increasingly being adopted by some countries (Segal, et al., 2011; Hurwitz, 2012). For these governments, cyber security primarily means control of content and of communication or social networking tools that may threaten their regimes' stability (Segal et al., 2011). However, for now people are finding ways to subvert such blocking, communicating anonymously via Tor networks and maintaining their cyber presence.
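One simple way such domain-based blocking is observed in practice is to compare answers from the locally configured resolver with those from a resolver outside the jurisdiction; persistent mismatches suggest filtering or redirection. The sketch below is a hedged illustration using the third-party dnspython package (pip install dnspython); the outside resolver address and the test domain are placeholders.

    # Compare DNS answers from the local resolver and an outside resolver.
    # Requires dnspython >= 2.0 (pip install dnspython); placeholders below.
    import dns.resolver

    def lookup(domain, nameserver=None):
        """Return sorted A-record addresses for domain, or an error tag."""
        res = dns.resolver.Resolver(configure=nameserver is None)
        if nameserver:
            res.nameservers = [nameserver]
        try:
            return sorted(rr.address for rr in res.resolve(domain, "A"))
        except Exception as exc:  # NXDOMAIN, timeout, refusal, ...
            return ["<%s>" % exc.__class__.__name__]

    domain = "example.com"               # placeholder test domain
    local = lookup(domain)               # system-configured resolver
    outside = lookup(domain, "8.8.8.8")  # a public resolver outside the network

    print("local  :", local)
    print("outside:", outside)
    if local != outside:
        print("Answers differ -- possible filtering or redirection.")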

3. Summary
Cyberspace has no political, geographical or social boundaries; it can unite people from all over the world and allow their voices to be heard, and it provides a platform for international collaboration that helps to solve complex problems.



However, with ever-growing cyber threats, Internet users are becoming suspicious about storing and transferring their sensitive information; they worry about who is tracking them behind the scenes (at the server end), where their personal information is going, etc. In order to maintain a secure and resilient computing environment, trustworthy software and hardware need to be installed. Vendors should take more responsibility for assuring integrity, and bear liability if a product fails to meet security standards or gets compromised. To remove product weaknesses and vulnerabilities, software and hardware should go through a more rigorous testing and evaluation process before being released. The software patching practice that is prevalent today neither solves the security problem nor restores trust among customers. Moreover, the patching process and auto-update mechanisms are being abused to inject spyware. To alleviate the situation, developers should strive to be highly qualified in secure design and coding, and all products should be security-certified before release. It is time now to engage the world in keeping cyberspace safe and secure, and to build mutual trust so that the benefits of the openness of the Internet can be preserved and cultivated. However, if countries and organizations waste their time, efforts and resources on protecting and defending their cyber assets, then technological innovation will definitely be hampered or slowed down. Importantly, even if individual user trust diminishes, there may not be any major change in the Internet's openness; but if cyber-attacks, industrial espionage, and so-called cyberwar continue, they will diminish the trust relationships among countries and organizations, and will have devastating consequences for science, commerce and the free flow of information and knowledge. As companies and governments are failing to defend against sophisticated cyber-attacks and infiltration, experts believe that any durable cyber security solution must be transnational. Recently, there have been several calls for a treaty on an International Code of Conduct for Information Security, which not only addresses cyber security but also calls on states to curb the dissemination of information (Segal, et al., 2011). Several countries are pushing for more United Nations control of the Internet at an International Telecommunications Union meeting in December 2012; however, the USA is advocating for keeping the Internet free from any government control (Gross, 2012). This paper highlights the major causes of diminishing trust in the cyber ecosystem and encourages all involved to engage in discussion and dialog to build cyber trust and to avoid cyber conflict.

References
(Abrams, 2010) Randy Abrams (interview). Should I Rely on My ISP to Keep Me Safe From Malware Attacks? August 28, 2010. http://go.eset.com/us/press-center/radio/interviews/august-28-2010/
(BBC News, 2012) Malware inserted on PC production lines, says study. BBC News Technology, September 13, 2012. http://www.bbc.com/news/technology-19585433
(Clarke and Knake, 2010) Richard A. Clarke and Robert Knake. Cyber War: The Next Threat to National Security and What to Do About It. Harper-Collins Publishers, 2010.
(Durkota and Dormann, 2008) Michael D. Durkota and Will Dormann. Recovering from a Trojan Horse or Virus. United States Computer Emergency Readiness Team, 2008. https://www.us-cert.gov/reading_room/trojan-recovery.pdf
(FireEye WP, 2012) Cyber Attacks on Government: How APT Attacks are Compromising Federal Agencies and How to Stop Them. A White Paper by FireEye, 2012 (WP.FED.052012). http://www.meritalk.com/uploads_resources/000088_5630.pdf
(Gladdis, 2012) Keith Gladdis. Twitter secrets for sale. http://www.dailymail.co.uk/sciencetech/article-2107693/Twitter-sells-years-everyones-old-vanished-Tweets-online-marketing-companies.html
(Greengard, 2012) Samuel Greengard. Advertising Gets Personal. Communications of the ACM, Vol. 55, No. 8, pages 18-20, August 2012.
(Govt. report, 2011) National Strategy for Trusted Identities in Cyberspace (NSTIC), Whitehouse, April 2011. http://www.whitehouse.gov/sites/default/files/rss_viewer/NSTICstrategy_041511.pdf
(Gross, 2012) Grant Gross. US House to ITU: Hands off the Internet. PCWorld, August 3, 2012. http://www.pcworld.com/article/260299/us_house_to_itu_hands_off_the_internet.html
(Heichler, 2012) Elizabeth Heichler. Cyberwarfare evolves faster than rules of engagement. Computerworld Magazine, November 12, 2012. http://www.computerworld.com/s/article/9233524/Cyberwarfare_evolves_faster_than_rules_of_engagement
(Hurwitz, 2012) Roger Hurwitz. Depleted Trust in the Cyber Commons, 2012. http://www.au.af.mil/au/ssq/2012/fall/hurwitz.pdf
(Jones, 2012) Willie Jones. This Week in Cybercrime: Hackers Say "If You Can't Beat 'Em, Evade 'Em". http://spectrum.ieee.org/riskfactor/telecom/security/this-week-in-cybercrime-hackers-say-if-you-cant-beat-em-evade-em/
(Kitten, 2013) Tracy Kitten. Top Threats: The 2013 Outlook. http://www.careersinfosecurity.com/top-threats-2013-outlook-a-5388
(Langner, 2011) R. Langner. Stuxnet: Dissecting a Cyberwarfare Weapon. IEEE Security and Privacy, Volume 9, Issue 3, pages 49-51, May-June 2011. http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5772960
(Levy, 2012) Steven Levy. Google Throws Open Doors to Its Top-Secret Data Center. WIRED Magazine, October 17, 2012. http://www.wired.com/wiredenterprise/2012/10/ff-inside-google-data-center/all/
(McAfee Labs, 2012) McAfee Labs. McAfee Threats Report: Third Quarter 2012. http://www.mcafee.com/us/resources/reports/rp-quarterly-threat-q3-2012.pdf
(Mills, 2012) Elinor Mills. A Who's Who of Mideast-Targeted Malware: What do Stuxnet, Duqu, Gauss, Mahdi, Flame, Wiper, and Shamoon have in common? Cnet News; also in ACM News, August 31, 2012. http://news.cnet.com/8301-1009_3-57503949-83/a-whos-who-of-mideast-targeted-malware/
(Paul, 2012) Ian Paul. DNSChanger Malware Set to Knock Thousands Off Internet on Monday. PCWorld Magazine, July 5, 2012. http://www.pcworld.com/article/258796/dnschanger_malware_set_to_knock_thousands_off_internet_on_monday.html
(Sauver, 2008) Joe St Sauver. Cyber War, Cyber Terrorism and Cyber Espionage. Presented at the IT Security Conference, Fargo, ND, October 21-22, 2008. http://pages.uoregon.edu/joe/cyberwar/cyberwar.pdf
(Schneider, 1999) Fred B. Schneider (ed.), Committee on Information Systems Trustworthiness. Trust in Cyberspace. Computer Science and Telecommunications Board (CSTB), The National Academies Press, 1999. http://www.nap.edu/openbook.php?record_id=6161&page=R19
(Segal, et al., 2011) Adam Segal, Maurice R. Greenberg and Matthew C. Waxman. Why a Cybersecurity Treaty Is a Pipe Dream. Council on Foreign Relations, October 27, 2011. http://www.cfr.org/cybersecurity/why-cybersecurity-treaty-pipe-dream/p26325
(Sherry, 2012) J.D. Sherry and T. Kellermann. Continuous Monitoring in a Virtual Environment. A White Paper by Trend Micro Incorporated, August 2012. http://blog.trendmicro.com
(Swaine, 2008) Jon Swaine. Georgia: Russia 'conducting cyber war'. The Telegraph, August 11, 2008. http://www.telegraph.co.uk/news/worldnews/europe/georgia/2539157/Georgia-Russia-conducting-cyber-war.html
(Symantec WP, 2012) Adaptive Behavior-Based Malware Protection: Real-time protection against targeted attacks. Accessed December 4, 2012. http://www.findwhitepapers.com/force-download.php?id=22807
(Tompkins, 2003) Paula Tompkins. Truth and trust in cyberspace. Based on a paper presented at the conference on Communication Ethics and Virtual Reality, 31 October-3 November 2003, co-sponsored by Brigham Young University, University of Illinois and WACC. http://www.waccglobal.org/en/20032-science-it-and-society/643-Truth-and-trust-in-cyberspace.html
(Tucker, 2012) C. E. Tucker. The economics of advertising and privacy. International Journal of Industrial Organization, 30(3), May 2012.
(Vijavan, 2010) Jaikumar Vijayan. After Google-China Dust-Up, Cyberwar Emerges As a Threat. IT solutionjournal, April 7, 2010. http://www.itsj.com/launchpage.aspx?CID=348103&NUOSID=100356855
(Vijavan, 2010a) Jaikumar Vijayan. White House orders security review in wake of WikiLeaks disclosure. Computerworld Magazine, November 29, 2010.
(Wawro, 2012) Alex Wawro. Protect Yourself From DNSChanger. PCWorld Magazine, May 8, 2012. http://www.pcworld.com/article/255137/protect_yourself_from_dnschanger.html

64


Towards a Theory of Just Cyberwar

Klaus‐Gerd Giesen
Université d'Auvergne, Clermont‐Ferrand, France
klaus@giesen.fr

Abstract: The text applies just war theory to cyberwar from a philosophical perspective. After defining the concept of cyberwar it discusses the ethical criteria of the traditional jus ad bellum and jus in bello, before emphasizing the need for a Kantian jus post bellum. The aim is to arrive at several ethical norms which may ultimately lead to new international legal norms (an international treaty inspired by jus post bellum) or allow one to assess the adaptation of existing legal norms.

Keywords: cyberwar, just war, international ethics, jus post bellum

1. Introduction
Some parts of the paper will be published in French in a chapter of a French book. The author keeps the copyright of the French text.
This paper addresses the relationship between ethics and military action in cyberspace. All societies on earth are more and more interconnected in large computer networks. Therefore, any attack on these networks, or on material objects connected to them (the Internet of Things), can cause serious military, economic, political and social problems. For instance, on October 21, 2002 an attack occurred against several root servers on which the Internet domain name system is based and without which it cannot function. Obviously, after land, sea, air and outer space, cyberspace is becoming the fifth military dimension. Therefore, it seems important to regulate cyber warfare. It cannot be excluded that in the not too distant future cyberwars will cause a great deal of damage and human casualties. While it is exaggerated to write, as Jeffrey Carr does, that "cyberattacks represent a conundrum for legal scholars" (Carr 2010: 57), it is true that "there are no common, codified, legal standards regarding cyberaggression" (Beidleman 2009: 2). On the other hand, over the last two decades much effort has been devoted to applying the law of armed conflict (LOAC), as well as international criminal law and international communication law, to cyberwar (Tikk 2008: 18‐20). Much of it has been developed within the so‐called "Schmitt analysis" (see infra). Many legal scholars, especially in the USA, think that the existing international law suffices, i.e. that "existing LOAC provisions provide ready analogies" (Dunlap 2011: 85), while others plead for new multilateral treaties (Arimatsu 2012; Schneier 2010). The reason why the United States is reluctant is, among other motivations, that America has the most advanced cyberwar capability and that any new agreement or norm would likely oblige it "to accept deep constraints on its use of cyber weapons and techniques" (Gjelten 2010). In what follows I will try to outline an ethics of cyberwar which leads to a middle ground: it will be argued that as far as jus ad bellum and jus in bello provisions are concerned, the existing law of armed conflict suffices. However, a new Kantian jus post bellum will be introduced, and it morally requires new codified legal norms (a multilateral agreement). The argumentation is philosophical, not legal. While many legal scholars and social scientists do not make any substantial difference between ethical and legal norms, most philosophers avoid the "naturalistic fallacy", or deduction of an "ought" from an "is" (Frankena 1939). In other words, I will try to outline ethical norms which could ultimately allow existing legal standards (or their absence) to be assessed. I hereby rely on a double epistemological stance: 1. Reasoning, as far as possible, by analogy with other spheres of war; 2. Using an ethical approach which is flexible enough to easily deal with new technologies.

2. Conceptualization
2.1 What is cyberwar?
From the ethical viewpoint it is important to differentiate between an act of cyberwar and an act which may be wrong, but does not fall under the category of war. Unlike many other authors (Einzinger 2011; Micewski 2011) I would like to plead for a rather restrictive definition in order not to overload the concept. One of the problems lies in the fact that intrusions on the national territory are not done by soldiers or objects (tanks, aircraft, etc.). In this respect, some misconceptions should be put into perspective:

Cyberwar as such can only take place directly between two or more states. However, contrary to what Sean Watts (2012) believes, strict state affiliation should not be the sole criterion for combatant status, i.e. the otherwise restrictive definition should also include non‐state actors which are subordinated to the will of a state, such as the non‐governmental groups of so‐called 'patriotic hackers' in Russia, China, Israel and elsewhere, which work closely together with the national armies and which are actually controlled by them (Ventre 2011). As Michael Schmitt emphasizes, the existing international law provides some interesting analogies to be applied (the Tadić case of the International Criminal Tribunal for the Former Yugoslavia, the Iranian hostage crisis in 1979, the Hezbollah case in 2006, etc.) (Schmitt 2011: 579). Such a definition also excludes non‐state territorial units, such as the Turkish Republic of Northern Cyprus, Palestine, Transnistria, etc.

Unlike Marie Stella (2003), I consider that the principle of territoriality, as an essential attribute of sovereignty, should be an integral part of the definition, despite the fact that, due to the decentralized nature of the Internet, any malware can actually cross many borders within a fraction of a second before finding its target (Hare 2009). What matters here are the effects of any cyberattack on a national territory.

The principle of armed aggression required to justify any entry into war (art. 51 of the UN Charter) should be maintained, except that the meaning of what can legitimately be considered as a weapon must evolve. A targeted, powerful and destructive computer worm can perfectly match the definition of a weapon (Delbasis 2009: 97). Here again, it all depends on the effect. After all, a plane can also be used to transport food or to bomb cities. Cyberwar requires information technologies to be used for destructive purposes.

The specialized literature celebrates the resurgence of asymmetric warfare in cyberspace (Schröfl 2011): facing a state with a powerful cyberarmy, such as the United States, Israel, China or Russia, all other countries may have, to different degrees, some offensive or defensive cybercapacities and may be tempted to harass them. However, the balance of power leaves for the moment no doubt about the outcome of such an asymmetric conflict. It must nevertheless be admitted that neither total victory nor total defeat is likely in cyberspace.

One of the peculiarities of cyberwarfare is the possibility of a sub rosa conflict. In this case neither the attacker nor even the defender wishes to make public, including in the eyes of their own people, the existence of a cyberclash – either in order not to lose face in the event of defeat (for the attacked state), or out of fear of international public opinion (for the aggressor state), or (for both) to avoid an escalating conflict by a spillover effect on other military spheres (conventional or nuclear warfare), or to avoid the panic of populations (Libicki 2009: 128‐129). The sub rosa conflict poses the dilemma of the democratic legitimacy of any major military decision versus technocratic efficiency by experts. It is clear that from the standpoint of international ethics the greatest possible transparency must be required. Therefore, waging a sub rosa cyberwar should at least be discussed and authorized behind closed doors by the relevant parliamentary defense committees.

Following these prerequisites one can quickly dismiss:

Cybercrime, even by non‐state groups, such as the Russian mafia. The Council of Europe is the only international organization to have regulated cybercrime activities.

Cyberpropaganda and hacktivism, even if they may include DDoS attacks against government websites.

A one‐time act of cybersabotage by a state: the Stuxnet virus thus remains significantly below the threshold which reasonably defines cyberwar.

Cyberespionage: As a matter of fact, espionage through new technologies is as old as relations between states. The hacking of government computers, implants such as the Flame worm, and the theft of data are no exception.

Cyberterrorism and cyberguerrilla are actions of non‐state groups against one or several states (some scholars believe that the attack of October 21, 2002 against the internet domain name root servers was perpetrated by Al‐Qaeda), and therefore do not fall within the category of interstate conflict.

Thus, the dividing lines between different malicious activities taking place on the Internet are actually not so blurred.


2.2 Just war theory
I now turn quickly to the question of the proper basis for an ethical (not legal) approach which could deal with the issue of cyberwar. My preference is for just war theory, which historically stems from natural law, precisely because it is an old theory (from Cicero to Walzer). Gradually, over the centuries, just war theory was able to adapt to all technological revolutions. For instance, in the 16th century Vitoria introduced the important distinction between combatants and civilians, with the concomitant notion of collateral damage, as a result of the emergence of artillery technology on the battlefields. In the 1940s and 1950s, John Ford, Paul Ramsey and James Turner Johnson, among others, discussed the highly relevant question of whether a defensive nuclear war can be just. Just war theory is thus very flexible ‐ almost a casuistry ‐ and adaptable to new technologies of warfare (Giesen 1992: 123‐150, 267‐277). However, the classical just war theory will be amended here by reference to Immanuel Kant, in the sense that it seems logical to add to the traditional jus ad bellum and jus in bello a Kantian jus post bellum (Kant 1797: §§58‐60). As I have tried to demonstrate elsewhere (Giesen 1997), Immanuel Kant was himself a just war theorist, except that his ultimate philosophical foundation is provided by the subject and not by a metaphysical natural order (as in natural law).

3. Applications
Let's now turn to the application of the three dimensions of just war theory (jus ad bellum, jus in bello, and jus post bellum) to cyberwar, as defined above. Each dimension has a catalog of various formal criteria. And each catalog is cumulative, which means that all criteria must be met if a given cyberwar is to be considered a just war.

3.1 Jus ad bellum
3.1.1 The ultimate aim of war: a more perfect peace (than before the war)
This first criterion is difficult to fulfill, simply because cyberwars tend not to stop, i.e. to continue almost endlessly, interspersed with more or less long intermissions, possibly at the sub rosa level. However, a war can be just only if there is an end to it and if the plans for the post‐war order correct some deficiencies properly identified prior to the conflict. This means that such a cyberwar can only be a response to a kinetic aggression or a cyberassault from another state, and only in the case when it is designed to eradicate the harmful potential of the opponent.
3.1.2 The authority of the prince: the declaration of war
Here we are faced with two challenges: time and attribution. Due to the high speed of cyberwar flows, the formal diplomatic declaration of war must be reduced to the minimum, i.e. to a computer signal sent a few moments before replying to the aggression, by analogy with the warning shot fired by an individual in an emergency situation. On the other hand, the problem of attribution lies in the fact that in cyberspace it is highly problematic to identify the attacker with certainty, particularly because of the possible presence of other actors in the virtual battlefield (Wheeler/Larsen 2007), and also because of the likely use of botnets (third‐party servers), as was the case during the attack against Estonia with the diversion of at least one million computers. While absolute certainty is never possible in cyberspace, we can, however, morally require a very high probability of 99%. In other words: I plead for a probabilistic approach. This criterion automatically excludes hackers and private contractors who are not subordinated to state authority (for instance through sub‐contracting), wannabe states such as Puntland and Abkhazia, cyberguerrilleros, as well as terrorist groups, unless they are protected by a state which has knowledge of their actions and does not intervene. Here the analogy with the invasion of Afghanistan by the United States and its allies in November 2001 comes into the picture: the Taliban were not aware of the preparation of the September 11 attacks, but subsequently refused to expel Al Qaeda from Afghanistan. Thus, a state which refuses to take action against aggressive non‐state actors on its territory may itself become the legitimate target of a cyberresponse by the assaulted state, because it bears indirect responsibility (Tikk 2008: 22).
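To make the probabilistic threshold concrete, the sketch below shows one way a defender might combine several independent attribution indicators into a single posterior probability and test it against the 99% bar proposed above. It is purely illustrative: the prior, the indicator names and their likelihood ratios are assumptions invented for this example, and nothing in the argument depends on this particular Bayesian formalization.

# Illustrative sketch only: Bayesian combination of attribution indicators.
# The prior, the indicators and the likelihood ratios are invented for this
# example; the paper itself only posits the 99% threshold.

def posterior_probability(prior, likelihood_ratios):
    """Convert a prior probability to odds, update with each likelihood
    ratio, and convert back to a probability."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Hypothetical indicators pointing to state X: malware code reuse (LR 20),
# language artifacts in the binary (LR 5), overlap with known
# command-and-control infrastructure (LR 8).
p = posterior_probability(0.10, [20.0, 5.0, 8.0])

print(f"posterior attribution probability: {p:.4f}")  # ~0.9889
print("99% threshold met:", p >= 0.99)                # False on these numbers

On these assumed numbers even three strong indicators leave the posterior just below 99%, which illustrates how demanding the proposed threshold would be in practice.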



3.1.3 The proportionality of fault and punishment
Here comes the question of the threshold at which the response may start. Obviously, a simple DDoS is not enough. It is necessary that the cyberaggression causes human victims (through the Internet of Things) ‐ for example from nuclear radiation or harmful emissions of chemical plants, or through malfunctions in hospitals – or targets vital key interests of the state (distribution of electricity and water, stock markets and financial systems, conventional or nuclear defense, social security, the aviation system, etc.). In order to reach higher precision – which is not within the scope of this paper – it is very helpful to use the so‐called "Schmitt analysis" in law, in which a qualitative one‐to‐ten scale is applied to seven criteria (Schmitt 1999; Michael 2003: 2; Wingfield 2004: 11‐12); a schematic illustration is sketched at the end of section 3.1. The great advantage of cyberweapons lies in the precision with which the counterattack can be designed at different levels and in various fields. Furthermore, since a pure cyberwar ‐ without the involvement of other national armed forces ‐ is rather unlikely above a certain level of aggression, the counterattack can also be made by using the multiplier effect of a close coordination between the cyberarmy and land, air and naval forces. In other words, a gradual build‐up of war intensity is quite feasible through the phasing of the cyberattack with more traditional means of war (Sharma 2010: 63‐67).
3.1.4 A just cause
Beyond self‐defense against an armed attack (an ethical principle which is legally enshrined in Art. 51 of the UN Charter), which applies a fortiori in case of an attack by real‐world objects (assuming a first response by cyberweapons against, for example, the occupation of part of the national territory), two other ethically acceptable scenarios seem possible: a humanitarian intervention (to be duly authorized by the UN Security Council), and a preemptive strike in case of a very serious threat from abroad which potentially endangers the survival of a country. Such a scenario may arise in the not too distant future; the analogy is with Michael Walzer's concept of supreme emergency, applied to the Israeli‐Arab war which started on 5 June 1967 with a preemptive strike (Walzer 1977: chapter 16).
3.1.5 A right intention
One has to admit that this problem cannot be addressed correctly from a philosophical perspective, because especially in cyberspace any given actor can easily disguise his evil intentions, partly because some actions are not immediately visible to everyone. As a result, we must insist on the greatest possible transparency, and remain attentive to the testimony of outside observers (NGO watchdogs, neutral states, etc.).
3.1.6 War as last resort
After a cyberattack there is insufficient time for real diplomatic negotiations in due form. The moral minimum is to ensure that the aggression did not happen by accident, for example by inadvertently spreading a virus that the attacker himself did not notice. It is therefore necessary to carry out double checks. A first step in this direction was taken in 2011 with the installation, as in the good old days of the Cold War, of a hotline between Washington and Moscow to rule out any "cyber‐misunderstanding".
3.1.7 A reasonable hope of success
The temptation to conduct an asymmetric cyberwar ‐ that is to say, low‐level and low‐frequency harassment ‐ remains strong for weak states vis‐à‐vis one of the few cyberpowers.
However, even if all six other criteria of the jus ad bellum are met, this criterion requires the abandonment of any response if there is a high risk of failure, or of an even stronger counter‐response with negative effects for the civilian population, or if it may contribute to an escalation involving the superior kinetic forces of the enemy. Especially in cyberspace a minimum symmetry of forces is required. Thus, even if cyberattacked by, say, China, Vietnam has no interest whatsoever in replying. The same applies, for the time being, to Saudi Arabia against Israel. This is the precautionary principle: in these cases it rather seems morally required to bring the case before international organizations, such as the UN Security Council, and/or to ask for assistance and/or protection by a cyberpower.
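As an illustration of the quantitative "Schmitt analysis" invoked in 3.1.3 (and again in 3.2.2 below), the sketch that follows scores a hypothetical incident on a one‐to‐ten scale across the seven criteria named in the Schmitt literature. The individual scores, the unweighted mean and the cut‐off value of 5 are assumptions made for this sketch; real applications of the framework weight the criteria and treat some of them, such as presumptive legitimacy, as pointing the other way.

# Schematic sketch of a "Schmitt analysis" score. The ratings, the
# unweighted mean and the threshold below are illustrative assumptions;
# neither the paper nor Schmitt's framework prescribes this computation.

SCHMITT_CRITERIA = (
    "severity", "immediacy", "directness", "invasiveness",
    "measurability", "presumptive_legitimacy", "responsibility",
)

def schmitt_score(ratings):
    """Average one-to-ten ratings over the seven Schmitt criteria."""
    missing = set(SCHMITT_CRITERIA) - set(ratings)
    if missing:
        raise ValueError(f"missing ratings for: {sorted(missing)}")
    return sum(ratings[c] for c in SCHMITT_CRITERIA) / len(SCHMITT_CRITERIA)

# Hypothetical incident: a worm disrupting a civil aviation system.
incident = {
    "severity": 9, "immediacy": 8, "directness": 7, "invasiveness": 6,
    "measurability": 7, "presumptive_legitimacy": 2, "responsibility": 8,
}

score = schmitt_score(incident)
print(f"Schmitt score: {score:.1f} / 10")      # 6.7 / 10
print("treat as armed attack:", score >= 5.0)  # assumed cut-off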

3.2 Jus in bello
Just war theory states that even when all seven jus ad bellum conditions are met there is still more to be taken into moral consideration once the war action has started. Three criteria apply:



3.2.1 The authorization of ruses
This is about deceiving the enemy by false appearances. It is already mentioned by Aquinas in his Summa Theologica. One could imagine that, in order to deter its enemy, a state somehow makes it believe during a counter‐attack that it has far‐reaching cybernetic abilities which it does not in fact possess. Such behaviour seems as morally permissible as cyberpropaganda in times of cyberwarfare, for example diverting the aggressor's media websites to spread false information, or even cyberespionage.
3.2.2 The proportionality of means
In this context an approach by successive levels is needed. It is important to first define them in a coherent doctrine. For instance, a cyberattack that causes hundreds of deaths by disabling civil aviation systems should, of course, call for a less severe response than one causing several nuclear explosions with important radiation effects on a large scale, requiring the evacuation of part of the territory for many years. This criterion is therefore almost utilitarian in its structure: a true calculation of consequences is essential. A quantitative "Schmitt analysis" in law could do an excellent job of formalizing the field, which remains beyond the scope of this paper.
3.2.3 The discrimination between combatants and non‐combatants
It is even more difficult to operate this distinction in cyberspace than on the conventional battlefield. Fortunately, Vitoria gave us a casuistic concept par excellence: collateral damage, which is allowed if it is not directly intended. This means that the cyberforce general who supervises a response and knows perfectly well that it will also affect civilian populations is morally "clean" if his action is first and foremost aimed at a military target, such as the adversary's computer servers or conventional military facilities (e.g. the communication systems between enemy army units). This means that "only weaponry (cyber or kinetic) capable of discrimination (i.e., directed against legitimate targets) can be used: However, cyberstrategists should know that legitimate targets can include civilian objects – especially those having cyber aspects – that have dual military and civilian use" (Dunlap 2011: 89). The ethics of just war as well as the law of armed conflict both require that targeteers "do everything possible" to ensure the target is a proper military objective.

3.3 Jus post bellum
Just war ethics does not need to determine whether the ethical norms should be implemented by codified legal norms or by the development of existing provisions of the law of armed conflict, as long as they can be implemented correctly. Therefore, new legal agreements are, ethically speaking, not compulsory. The vast legal literature of the last years has shown that jus ad bellum and jus in bello norms can be applied to the law of armed cyberconflict by drawing legal analogies from the UN Charter and from existing customary law. However, it seems necessary to amend the traditional just war theory, which is limited to jus in bello and jus ad bellum, by adding the Kantian jus post bellum. And it will be demonstrated that the ethical jus post bellum norms must be implemented through a new international treaty. As far as I know, nobody has yet attempted to adapt the Kantian jus post bellum to cyberwar. Most authors using just war theory either do so in law (Denning 2007; Roscini 2010; Dipert 2010) and/or entirely ignore the Kantian jus post bellum. The very few authors who deal with it (DiMeglio 2005; Orend 2000; Orend 2005) actually mix it up with two jus ad bellum provisions (supra, the two criteria of 3.1.1 and 3.1.3) which they mistakenly take for jus post bellum norms. They are exclusively concerned with the way war is terminated and how the transition from war to peace is to be organized. Some even write mistakenly that "although he recognized the need to identify and discuss jus post bellum, Kant did not specify criteria for the category" (DiMeglio 2005: 133). Kant was not concerned with war termination or the transition from war to peace, except as prospective jus ad bellum provisions. Otherwise it was not his problem as a philosopher. His concern was rather, on a more abstract level, with the consequences of a particular war act for all or most countries of the international system of his time. In addition, the Prussian philosopher did not state his jus post bellum in his Perpetual Peace (1795), but two years later in the Metaphysics of Morals (specifically §§58‐60 of the Doctrine of Law). We can draw two criteria from it:



Firstly, Kant is very much concerned with the "violation of [international] public agreements, which presumably are of interest to all peoples, since their freedom is threatened" (Kant 1797: §60). Applied to cyberspace this disposition can be interpreted in the following way: the "bombing" and decommissioning of all thirteen root servers, meaning the implosion of the entire internet for at least some time, constitutes a breach of the agreement that connects all nations of the world to ICANN. Although the latter is formally a private corporation in California, its role is to ensure the free movement of data through the constant, real‐time update of the single global registry of domain names. The implosion of the internet (including the web and email), even for only a few days, would cause such economic and social damage that it seems justified to morally ban it. Kant provides us with a second jus post bellum norm: an unjust enemy is "one whose publicly expressed will [...] reflects a maxim according to which, if it were a universal rule, no peace is possible between peoples, while on the contrary the state of nature becomes eternal" (Kant 1797: §60). Here we easily recognize one form of the categorical imperative. Such a return to the (political) state of nature seems possible in one scenario: a malware which destroys, in a very short time and permanently, all or most artefacts connected to cyberspace: computers, mobile phones, tablets, servers, satellite systems, GPS, TV, digital radio, etc., with unimaginable consequences for the global economy, the relations between states, and the internal cohesion of societies. For sure, in Malawi or Kiribati the consequences would be relatively minor, but most developed states would experience shocks on an unprecedented scale, so that at least for a while no stable peace would be possible, and a return to a sort of state of nature would appear inevitable. Our societies have simply become too dependent on cyberspace. The two Kantian jus post bellum criteria of §60 of the Metaphysics of Morals may raise concern about a sort of virtual Armageddon in which the electromagnetic spectrum is used to destroy many parts of cyberspace as such and many objects linked to the Internet of Things. Despite the fact that both are artefacts, they can nowadays be labeled global commons. At least the most developed and emerging countries of the world rely heavily on them every single minute. Cyberspace and the Internet of Things have actually become the center of gravity of the globalized world (Schreier 2012: 13). By analogy with the biosphere one may call this the infosphere, and its almost total informational entropy can morally be considered the ultimate evil in cyberconflict (Taddeo 2011). It is the common duty of all nations to prevent and to outlaw any actor who may try to interrupt the peaceful flow of data in the international system and to bring the world back to a pre‐cyber age. The especially vulnerable developed countries should fear such a debilitation most of all. Unfortunately, it cannot be totally ruled out that a rogue state – such as North Korea – one day launches an attack against the entire cyberspace and/or the Internet of Things. In addition, transnational actors – such as jihadist groups – may acquire sufficient technical competence to destroy at least part of the Internet. We do not know what will be technically possible in, say, ten years.
Therefore, it seems of utmost ethical importance to demonstrate a common, universal (or almost universal) consensus on these issues. Experts in international law should be mandated, if possible by the UN Security Council, to draft legal provisions which clearly outlaw any attempt to destroy cyberspace and the Internet of Things. Possibly they could even qualify such an attempt as a crime against humanity, because it targets one of the global commons as such.

4. Conclusion
In the foregoing I have attempted to clear the ground in a preliminary way. All the different just war criteria deserve considerably deeper discussion. I have briefly summarized the jus ad bellum and jus in bello norms, which have already been much discussed in the specialized literature. It was important to clarify several provisions, especially of the jus ad bellum, as some of them are frequently mixed up with the Kantian jus post bellum. In 3.3 I then introduced the latter. My main conclusions are: 1. The Kantian jus post bellum has by far not attracted enough attention as far as cyberwar is concerned; 2. While the jus ad bellum and jus in bello can be implemented by adopting and developing the existing UN Charter and customary law, this seems not to be possible for the jus post bellum. Here an international treaty is needed, for the simple reason that any other legal solution may only arrive when it is already much too late. The United States of America, which until now has been rather reluctant to adopt any treaty on cyberconflict, should take the lead in reaching a universal treaty banning once and for all any attempt to destroy cyberspace and the Internet of Things, because as the most vulnerable cybernation this is in its own national interest. The other NATO member states, as well as the major cyberpowers (Russia, China, Israel, etc.), should follow.

Acknowledgements The author would like to thank the anonymous reviewer for helpful suggestions on an earlier draft of this paper.

References
Arimatsu, L. (2012) "A Treaty for Governing Cyber‐Weapons: Potential Benefits and Practical Limitations", in: Czossek (2012).
Beidleman, S. (2009) Defining and Deterring CyberWar, Carlisle Barracks, US Army War College.
Carr, J. (2010) Inside Cyber Warfare, Sebastopol, O'Reilly.
Czossek, C. et al. (eds.) (2012) 4th International Conference on Cyber Conflict, Tallinn, NATO CCD COE.
Delbasis, D. (2009) "Information Warfare Concept of Operations Within the Individual Self‐Defense", in: Karatzogianni, A. (ed.), Cyber Conflict and Global Politics, Abingdon, Routledge.
Denning, D. (2007) The Ethics of Cyber Conflict, Draft of March 27, 2007.
DiMeglio, R. (2005) "The Evolution of the Just War Tradition: Defining Jus Post Bellum", Military Law Review, Vol. 186, pp. 116‐163.
Dipert, R. (2010) "The Ethics of Cyberwarfare", Journal of Military Ethics, Vol. 9, No. 4, pp. 384‐410.
Dunlap, C. (2011) "Perspectives for Cyber Strategists on Law of Cyberwar", Strategic Studies Quarterly, Vol. 5, No. 1.
Einzinger, K. (2011) "Cyber Warfare 2.0 – The Undertow of the Internet", in: Schröfl (2011).
Frankena, W. (1939) "The Naturalistic Fallacy", Mind, Vol. 48.
Giesen, K.‐G. (1992) L'éthique des relations internationales, Brussels, Bruylant.
Giesen, K.‐G. (1997) "Kant et la guerre de masse", in: Union scientifique franco‐hellénique (ed.), Droit et vertu chez Kant, Athens, Société hellénique d'études philosophiques, pp. 331‐341.
Gjelten, T. (2010) "Shadow Wars: Debating Cyber 'Disarmament'", World Affairs, November/December (www.worldaffairsjournal.org/article/shadow‐wars‐debating‐cyber‐disarmament).
Hare, F. (2009) "Borders in Cyberspace: Can Sovereignty Adapt to the Challenges of Cybersecurity?", in: Czossek, C. and Geers, K. (eds.), The Virtual Battlefield: Perspectives on Cyber Warfare, Amsterdam, IOS Press, pp. 88‐105.
Kant, I. (1797) Metaphysik der Sitten, Berlin, Akademie‐Ausgabe.
Libicki, M. (2009) Cyberdeterrence and Cyberwar, Santa Monica, RAND.
Micewski, E. (2011) "Cyber Warfare and Strategic Cultures – Information Technology and the Human Factor", in: Schröfl (2011).
Michael, J. et al. (2003) "Measured Responses to Cyber Attacks Using Schmitt Analysis: A Case Study of Attack Scenarios for a Software‐Intensive System", in: Proceedings of the Twenty‐Seventh Annual International Computer Software and Applications Conference, Dallas.
Orend, B. (2000) War and International Justice: A Kantian Perspective, Waterloo, Wilfrid Laurier University Press.
Orend, B. (2005) "Justice After War", Ethics & International Affairs, Vol. 16, Issue 1, pp. 43‐56.
Roscini, M. (2010) "World Wide Warfare – Jus ad Bellum and the Use of Cyberforce", in: von Bogdandy, A. and Wolfrum, R. (eds.), Max Planck Yearbook of United Nations Law, Vol. 14, pp. 85‐130.
Schmitt, M. (1999) Computer Network Attack and the Use of Force in International Law: Thoughts on a Normative Framework, USAF Academy, Institute of Information Technology.
Schmitt, M. (2011) "Cyber Operations and the Jus ad Bellum Revisited", Villanova Law Review, Vol. 56, pp. 568‐605.
Schneier, B. (2010) "Time for a Treaty", Defense News, 18 October.
Schreier, F. (2012) On Cyberwarfare, Geneva, DCAF Horizon 2015 Working Paper No. 7.
Schröfl, J. et al. (eds.) (2011) Hybrid and Cyber War as Consequences of the Asymmetry, Frankfurt, Peter Lang.
Sharma, A. (2010) "Cyber Wars: A Paradigm Shift from Means to Ends", Strategic Analysis, Vol. 34, No. 1, pp. 63‐67.
Stella, M. (2003) "La menace déterritorialisée et désétatisée : le cyberconflit", Revue internationale et stratégique, No. 49, pp. 165‐171.
Taddeo, M. (2011) "Information Warfare: A Philosophical Analysis", Philosophy and Technology, Vol. 25, No. 1, pp. 105‐120.
Tikk, E. et al. (2008) Cyber Attacks Against Georgia: Legal Lessons Identified, Tallinn, CCDCOE.
Ventre, D. (2011) Cyberespace et acteurs du cyberconflit, Paris, Hermes.
Walzer, M. (1977) Just and Unjust Wars, New York, Basic Books.
Watts, S. (2012) "The Notion of Combatancy in Cyber Warfare", in: Czossek (2012).
Wheeler, D. and Larsen, N. (2007) Techniques for Cyber Attack Attribution, Alexandria, Institute for Defense Analyses.
Wingfield, T. et al. (2004) An Introduction to Legal Aspects of Operations in Cyberspace, Monterey, Naval Postgraduate School.



Defamation in Cyber Space: Who do you sue?

Samiksha Godara
Shamsher Bahadur Saxena College of Law, Rohtak, India
sami_bishnoi@yahoo.co.in

Abstract: The right to freedom of speech and expression is probably the most important universally accepted human right in a democratic society. An extension of this right is the right to know, i.e. freedom of information, which can be enjoyed only if there are sources from which information can flow. Here the print media as well as the audio‐visual media come into play. In the present era, with the advancement of Information and Communication Technology (ICT), machines like computers and facilities like the internet have come into the picture. Nowadays, cyberspace is put to maximum use for exercising the freedom of expression. Due to the anonymous nature of cyberspace one can exercise the freedom of expression to the extent of defaming another. But one must understand that this freedom is not absolute and reasonable restrictions can be imposed upon it on certain grounds like defamation, privacy, decency, public order etc. The law of defamation addresses harm to a person's reputation or good name through slander and libel. The Internet has made it easier than ever before to disseminate defamatory statements to a worldwide audience with impunity. For quite some time, courts have been struggling with remedies for online defamation. The problem has been magnified by the difficulty of identifying the perpetrator, and by the degree to which Internet Service Providers (ISPs) should be held accountable for facilitating the defamatory activity. This research paper contains a comprehensive study of the laws of various countries dealing with cyber crimes in general and cyber defamation in particular: for example, the Indian Information Technology Act, 2000 as amended by the IT (Amendment) Act, 2008; the US Communication Decency Act, 1996; and the UK Defamation Act, 1996. An attempt has also been made to study the jurisdictional riddles involved in cases of internet defamation, because the internet is a global medium transcending boundaries. The paper also focuses on recent judicial pronouncements of the High Courts and Supreme Courts of various nations which have delivered landmark judgments for curbing the menace of cyber defamation.

Keywords: defamation, cyber space, internet service provider, online, jurisdiction

1. Introduction
The internet, as a global network of computers, has revolutionized the fundamental right to freedom of speech and expression (Constitution of India, 1950). To author an article, book or poem and to get it published was the privilege of a few in the pre‐internet era. The multitude could never exercise their right to freedom of speech and expression in its true perspective in that era. The internet, on the other hand, is a global medium of expression. It provides limitless opportunities and ways of expression to its netizens, before a global audience. The fundamental right to freedom of speech and expression has found a global medium that is truly democratic and luxuriously easy to use. Invisibility and anonymity are significant features of the internet that lend fearlessness to speech and expression. As a medium of speech and expression, the internet is equally powerful for use as well as misuse. Apart from its advantages, nowadays the internet is increasingly being used for committing various cyber crimes like cyber defamation, hacking, pornography, stalking, squatting, fraud, terrorism etc. (Sood, 2010, pp. 122‐23).

2. Meaning of defamation
Defamation is defined as "an intentional false communication, either published or publicly spoken, that injures another's reputation or good name" (Black, 1990). Defamation includes the common law torts of libel (involving written or printed statements) and slander (involving oral statements). Significantly, both libel and slander can be committed via the internet medium. Highlighting the importance of 'what is in a name?', Shakespeare (Tragedy of Othello, The Moor of Venice, 1622) rightly said: "Good name in man and woman, dear my lord, is the immediate jewel of their souls; Who steals my purse, steals trash, 'tis something, nothing. 'T was mine, 'tis his, and has been slave to thousands; But he that filches from me my good name, Robs me of that which not enriches him, And makes me poor indeed."


3. Ingredients of defamation
Defamation is actual or presumed damage to reputation flowing from publication. In a traditional libel case, Kenneth Love v. William Morrow & Co. (1993), it was held that "publication" generally refers to "the date on which the libelous work was placed on sale or became generally available to the public." Defamation has the following ingredients:

Publication of a statement;

Statement makes reference to the plaintiff;

Statement is communicated to some person or persons other than the plaintiff himself;

Statement reaches the plaintiff; and

Statement causes actual or presumed damage to the plaintiff.

The question is, does one encounter similar 'ingredients' when defamation occurs in the internet medium? Here, the only difference is that the tort of defamation occurs when the defamatory imputation is published in electronic form; everything else remains the same. In the present scenario there is a plethora of online defamation cases. Every now and then, one's comment defames another. The aware victim seeks a legal remedy while others ignore such incidents. Recently, on 4th February, 2013 a renowned Saudi novelist and journalist, Samar Al‐Muqren, won a case after filing a complaint against a writer and website owners for online defamation. Similarly, in October, 2012 a 42‐year‐old Australian man reportedly won a landmark defamation case against Google after images of him were published alongside gangland figures in the firm's search results. In July, 2009 the world's largest search engine was caught up in another Indian legal battle, one of many ongoing around the globe. A leading cardiologist from Mumbai, Dr. Ashwin Mehta, accused Google blogs of carrying matter which defames him. In November, 2012 a UK Conservative Party official, Alistair McAlpine, received a huge compensation amount from the BBC and Twitter users for defaming him by wrongly implicating him in a case of child sex abuse. The above incidents are glaring examples of how cases of online defamation are mushrooming at the global level.

4. Various legal issues in online defamation
4.1 Time of occurrence
Publication occurs when the contents of the publication are seen and heard by the reader or hearer. For the plaintiff, the process of publication is complete when the communication reaches him. In Godfrey v. Demon Internet Ltd. (1999) the defendant ISP carried the newsgroup 'soc.culture.thai' and stored postings within that hierarchy for about a fortnight, during which time each posting was available to be read by its customers. On 13 January, 1997 someone unknown made a posting in the US in the newsgroup. This posting was squalid, obscene and defamatory of the plaintiff, who was resident in England. On 17 January, 1997 the plaintiff sent a letter by fax to the defendants, requesting them to remove the posting from their Usenet news server. The defendants could have obliterated the posting after receiving the plaintiff's request, but it remained available until its expiry on or about 27 January, 1997. The plaintiff claimed damages for libel in respect of the posting after 17 January, 1997 – the time when he affirmed to the ISP that the communication had indeed reached him. Morland, J. ruled: "In my judgment, the defendant, whenever it transmits and whenever there is transmission from the storage of its news server a defamatory posting, publishes that posting to any subscriber to its ISP who accesses the newsgroup containing that posting. Thus, every time one of the defendant's customers accesses 'soc.culture.thai' and sees that posting defamatory of the plaintiff, there is a publication to that customer."

4.2 Publication and mode of publication
This looks at the mode of publication or transmission – whether audio, video, textual or multimedia. Internet publishing is in 'electronic form'. Instances of defamation in 'electronic form' include generating, sending or receiving 'defamatory' e‐mails, online bulletin board messages, chat room messages, music downloads, audio files, streaming videos, digital photographs, etc. on the internet.


4.3 Place of publication and jurisdiction
Where the publication has occurred is not easy to define, as a defamatory statement can be "published" anywhere in the world where there is access to the internet. Here, the issue is whether due process of law would be served by hauling a defendant into a particular jurisdiction simply because he has posted information that can be accessed anywhere in the world. In the context of the internet, it is not necessary for the plaintiff in all cases to prove directly that the defamatory statement was brought to the actual knowledge of anyone (some person or persons other than the plaintiff himself); publication is established if the plaintiff makes it a matter of reasonable inference that the publication was accessible in the said jurisdiction. In contrast, with the internet it is not at all probable that every website will be accessed in every jurisdiction where it can theoretically be accessed. So, as a matter of reasonable inference, it cannot be assumed that any site put on the internet and theoretically accessible from anywhere is in fact accessed everywhere (Kohl, 2000, pp. 126‐27). In Dow Jones & Company Inc v. Gutnick (2002) an allegedly defamatory article appeared in Barron's Online, the online version of Dow Jones's print publication Barron's, which was available to subscribers of wsj.com. Joseph Gutnick, a resident of the Australian state of Victoria, brought a defamation action against Dow Jones in a Victoria Court. Dow Jones argued that Barron's Online was published in New Jersey, the location of the servers hosting the wsj.com website. From this it would follow that the substantive law to be applied in deciding the case was New Jersey law, which would make the Victorian Court a clearly inappropriate forum. The Court held that the article was published, with respect to Gutnick's cause of action, not when Dow Jones placed it on its web server, but only when subscribers in Victoria accessed it. Thus, the defamation occurred in Victoria, and Victorian law governed. The court concluded that for centuries the law in defamation cases has been that publication takes place when and where the contents of the publication, oral or written, are seen and heard, and comprehended by the reader or hearer.

4.4 Liability of internet service provider (ISP) or website promoter for publication
An ISP represents an interactive network service. It may provide access to the internet only or offer a range of additional services. Depending upon its functional attributes, an ISP may act as an 'information distributor' or an 'information publisher'. The former merely acts as a carrier of information, transmitting 'electronic messages' from one place to another, without examining their content. The function of the latter is not only to publish and transmit information but also to take reasonable care in relation to the said publication. In Cubby, Inc. v. CompuServe, Inc. (1991) CompuServe was an online company providing access to over 150 special interest forums comprised of electronic bulletin boards, interactive online conferences, etc. A newsletter called Rumorville was made available via the bulletin board. The plaintiff sued CompuServe for libel after allegedly defamatory statements were disseminated through the newsletter against it. Cubby argued that the court should consider CompuServe to be a "publisher" of the allegedly defamatory statements, and thus hold it liable for them. The court held that CompuServe had "no more editorial control over such a publication than does a public library or bookstore." The court instead found CompuServe to be more akin to a "distributor" than a "publisher." Thus, because it was undisputed that CompuServe did not have knowledge of or reason to know of the allegedly defamatory statements made in the publication, especially given the large number of publications it carries and the speed with which publications are uploaded into its computer banks and made available to CompuServe subscribers, the Court held that CompuServe could not be held liable to Cubby for the defamatory statements. The Court noted that to impose on CompuServe the duty to examine every publication it carries for defamatory statements would "impose an undue burden on the free flow of information". In Stratton Oakmont, Inc. v. Prodigy Servs. Co. (1995) the plaintiffs, a securities investment banking firm, sued Prodigy Services Company, an interactive computer service, for defamatory comments made by an unidentified party on one of Prodigy's bulletin boards against the firm. The court held Prodigy to the strict liability standard normally applied to original publishers of defamatory statements, rejecting Prodigy's claims that it should be held only to the lower "knowledge" standard usually reserved for distributors. The Court reasoned that Prodigy acted more like an original publisher than a distributor, both because it advertised its practice of controlling content on its service and because it actively screened and edited messages posted on its bulletin boards using customized software.


5. Law relating to defamation in various countries
5.1 Position in US
The US Congress enacted the Communication Decency Act, 1996 so as not to treat providers of interactive computer services like other information providers such as newspapers, magazines or television and radio stations, all of which may be held liable for publishing or distributing obscene or defamatory material written or prepared by others. It opted not to hold interactive computer services liable for their failure to edit, withhold or restrict access to offensive material disseminated through their medium. The statutory emphasis has been to protect and strengthen the ISPs' business model. The various provisions of the Communication Decency Act, 1996 provide the following:

Section 230 (c) (1): “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Section 223: “Any person who puts information on the web which is obscene, lewd, lascivious, filthy or indecent, with intent to annoy, abuse, threaten, or harass another person, will be punished either with imprisonment or with fine.”

Section 230 (c) (2): “As far as civil liability is concerned, no provider or user of an interactive computer service shall be held liable on account of:

Any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, filthy, excessively violent, harassing or otherwise objectionable, whether or not such material is constitutionally protected; or

Any action taken to enable or make available to information content providers or others the technical means to restrict access to materials described above.”

In Zeran v. America Online Inc. (1997), on April 25, 1995 an unidentified person posted a message on an America Online ("AOL") bulletin board advertising "Naughty Oklahoma T‐Shirts". The posting described the sale of shirts featuring offensive and tasteless slogans related to the April 19, 1995 bombing of the federal building in Oklahoma City. Those interested in purchasing the shirts were instructed to call "Ken" at Kenneth Zeran's phone number in Seattle, Washington. As a result of this prank, Zeran received a high volume of phone calls, comprised primarily of angry and derogatory messages and death threats. He informed AOL and requested it to remove the offensive message from its bulletin board, but to no avail. Zeran consequently brought an action against AOL and argued that AOL unreasonably delayed in removing the defamatory messages posted by the unidentified person and failed to screen off similar postings thereafter. In an attempt to circumvent the protections afforded to an "interactive computer service" under section 230(e)(2) of the CDA, the plaintiff argued that AOL's knowledge of the defamatory nature of the posting exposed it to liability as a distributor and therefore placed it outside the ambit of the CDA's protections. The Fourth Circuit rejected this argument, and held that under section 230 of the CDA, AOL is immune from liability for information that originated with a third party. In Lunney v. Prodigy Services Company (1999) an imposter opened a number of accounts with Prodigy Services Company, an ISP, and proceeded to post vulgar messages in Lunney's name on a Prodigy bulletin board and send a profane e‐mail in Lunney's name to a third party. Lunney sued Prodigy for defamation. With respect to the e‐mail message, the Court found that because Prodigy was only a conduit for the message, and did not exercise control over the content of the transmitted communication, it should be given the same privilege accorded to telephone and telegraph companies. With respect to the bulletin board message, the Court concluded that while Prodigy does reserve the right to screen its bulletin board messages, this would not alter its passive character in "the millions of other messages in whose transmissions it did not participate". Thus, the Court refused to cast an electronic bulletin board operator, such as Prodigy, in the role of a publisher. In Schneider v. Amazon.com Inc. (2001) an author sued Amazon.com for a negative and allegedly defamatory book review posted by a third party. The plaintiff argued that Amazon.com was not a provider of an "interactive computer service," since it did not enable access to the internet: to visit the site, a user must already be online through some other service provider. The Court held that Amazon's web site enables visitors to the site to comment about authors and their work, thus providing an information service, and hence could be referred to as an interactive computer service under section 230(e)(2) of the CDA.


5.2 Position in UK
The U.K. Defamation Act, 1996 is the specific legislation dealing with the concept of defamation in the UK. The Act contains an "innocent dissemination" defense, whose common law roots go back to Emmens v. Pottle (1885). In Vizetelly v. Mudie's Select Library Ltd. (1900) the Hon'ble Court observed that the defendant would not be liable for defamation if he had no intention to defame. The various provisions of the U.K. Defamation Act, 1996 provide the following:

Section 1 (1): “In defamation proceedings, a person has a defense if he shows that‐

he was not the author, editor or publisher of the statement complained of;

he took reasonable care in relation to its publication, and

he did not know, and had no reason to believe, that what he did caused or contributed to the publication of a defamatory statement.”

Section 1 (3) (c): A person shall not be considered the author, editor or publisher of a statement if he is only involved in processing, distributing or selling any electronic medium in or on which the statement is recorded, or in operating or providing any equipment or service by means of which the statement is retrieved, copied, distributed or made available in electronic form.

Section 1 (5): In determining whether a person took reasonable care, or had reason to believe that what he did caused the publication of the defamatory statement, regard shall be had to‐

the extent of his responsibility for the content of statement or the decision to publish it;

the nature or circumstances of the publication; and

the previous conduct or character of the author, editor or publisher.

In defamation proceedings, a person has a defense if he shows that he took reasonable care in relation to the publication (Dolding & Dzioban, 1997). Presumably, the U.K. Defamation Act, 1996 offers some protection to ISPs, and there is evidence in the legislative record that the Government intended ISPs to fall under section 1(3) (Macpherson & Cooper, 1999).

5.3 Position in Canada
As with other Commonwealth countries, Canada also follows UK law on defamation issues. Recently, the Supreme Court of Canada, in Hill v. Church of Scientology of Toronto (1995), reviewed the relationship of the common law of libel to the Canadian Charter of Rights and Freedoms.

5.4 Online defamation: An Indian perspective
In India, the defamation issue is dealt with under Sections 499‐502 of the Indian Penal Code, 1860, which makes no distinction between slander and libel. The Code provides: "Whoever by words, either spoken or intended to be read, or by signs or by visible representations, makes or publishes any imputation concerning any person intending to harm, or knowing or having reason to believe that such imputation will harm, the reputation of such person, is said to defame that person." (IPC, 1860; Section 499) Further, the Code provides that "Whoever defames another shall be punished with simple imprisonment for a term, which may extend to two years, or with fine, or with both" (IPC, 1860; Section 500). The following are the three main ingredients of defamation:

Making or publishing any imputation concerning any person

Such imputation must have been made by words, either spoken or intended to be read; or

signs; or visible representations

Such imputation must have been made with the intention of harming or with knowledge or reason to believe that it will harm the reputation of the person concerning whom it is made. (Ratanlal & Dhirajlal, 2010)

The Information Technology Act, 2000 also provides punishment for sending offensive messages through a communication service, etc. It says that imprisonment of up to 3 years and a fine will be imposed upon any person who sends, by means of a computer resource or a communication device‐

any information that is grossly offensive or has menacing character; or

any information which he knows to be false, but for the purpose of causing annoyance, inconvenience, danger, obstruction, insult, injury, criminal intimidation, enmity, hatred, or ill will, persistently by making use of such computer resource or a communication device;

any electronic mail or message for the purpose of causing annoyance or inconvenience or to deceive or to mislead the addressee or recipient about the origin of such messages. (IT Act, 2000; Section 66A)

A claim for damage to reputation will warrant an award of damages only if the plaintiff has a reputation in the place where the publication is made. This has always been the accepted legal principle in the common law countries. In India, the same principle has been adopted: publication takes place when and where the contents of the publication, oral or written, are seen and heard, and comprehended by the reader or hearer (Schachter, 2002). The landmark case of S.M.C. Pneumatics (India) Pvt. Ltd. v. Jogesh Kwatra (2003) was the first case of cyber defamation in India. The Delhi High Court passed an order of ex‐parte injunction against the defendant, restraining him from sending defamatory e‐mails damaging the reputation of the corporate entity of which he was an ex‐employee. The accused, Jogesh Kwatra, was sending defamatory, derogatory, obscene, vulgar, filthy and abusive e‐mails to his employer Mr. R.K. Malhotra and other colleagues in order to malign the reputation of the company and its subsidiaries all over India and abroad. The plaintiffs contended that such defamatory e‐mails by the defendant were a blatant violation of their legal right and that the motive of the defendant in sending such defamatory e‐mails was to retaliate against the termination of his services by the management of the company. On the basis of the evidence produced before the court, the defendant was found guilty of cyber defamation. Recently, in Tata Sons Limited v. Greenpeace International & Anr. (2011), a plea of cyber libel was raised by the Tata company against Greenpeace (an environmental NGO) for designing and publishing the "TATA v. Turtle" online Pac‐Man‐style game. Tata alleged that the said game maligned its reputation as well as infringed its trademark. It sought an interim injunction against Greenpeace on the ground that the internet has a wider and faster impact. Greenpeace pleaded that the game was launched in 2010 to spread awareness about the threat which Tata's Dhamra port in Orissa poses to a sensitive ecosystem as well as to the endangered Olive Ridley turtles. The Delhi High Court refused to grant the interim injunction, saying that it would stifle freedom of speech. It noted that though the internet has a wider reach and potential for injury, traditional standards for the grant of injunctions in cases of libel will be applicable. 'Publication' is a comprehensive term, embracing all forms and mediums – including the Internet. That an internet publication has wider viewership, or a degree of permanence, and greater accessibility than other tangible mediums of expression does not alter its essential character, i.e. that it is a forum or medium. However, the Court ruled that internet publication of a libel, because of the libel's wider reach and viewership, has to be considered as an additional factor when assessing damages in internet defamation cases. Before declaring whether an online 'publisher' or 'distributor' is liable for defamation, one should take cognizance of the I.T. Act, 2000, which expresses the legislative intent of granting immunity to the network service provider. The immunity is absolute only if he proves, for any third party information, that:

he had no knowledge that the information content being transmitted was unlawful; or

he had exercised all due diligence to prevent transmission (or publication) of unlawful information content. (IT Act, 2000; Section 79)

6. Factors to be taken into consideration for determining liability
ISPs and website hosts or owners must take care to control, as far as possible, the information published on their websites (Vishwanathan, 2004). Factors which could possibly be taken into account in determining whether an ISP or a website host/owner has exercised reasonable care are as follows:


a. The nature and purpose of the site containing the defamatory material and the relationship of the defendant thereto, i.e. whether the defendant is a bulletin board operator, an ISP, or simply a company controlling its own website;

b. Whether the monitoring system which was put into place is proportionate to the size of the site?

c. The amount and characteristics of information flowing through the site;

d. The characteristics of the site users;

e. Whether or not the site attracted repeat offenders and, if so, why the site was then not removed?

f. Whether defamatory material is removed immediately upon request by the person being defamed?

7. Conclusion
Communication is an art that has developed immensely over the past few centuries and one that will continue to reinvent itself through unimaginable technological advances. From the era of the printing press to the era of the Internet that we are living in today, communication has become astoundingly simple and continues to become simpler by the day. The law, howsoever developed it may be, is not growing at the same rate as the Internet. The law regulating the various facets of Internet usage is still very much in an embryonic phase, and there is no way that it can keep up with the pace at which technology is developing. We are, however, certain of one thing: the scope of every cyber consumer’s exposure to liability for defamation is global. The present trend of legislation, and also the judicial approach, appears to be such that these offences are treated lightly and the punishments are not adequate having regard to their gravity.

Therefore, the need of the hour is that all three – the Government, the Internet Service Providers and the Internet users – should understand their respective duties in curbing crimes like cyber defamation. First, the Government should take up the task of analyzing such crimes, which are still at the threshold, and come up with recommendations to equip the existing legal machinery against such offences while maintaining a balance between freedom of speech and the right to privacy. For this purpose, necessary amendments could be made to the Information Technology Act, 2000 and to the Indian Penal Code, 1860, by expressly bringing within their ambit special offences such as defamation in cyber space, which is certainly a unique type of socio‐economic offence. Secondly, the courts should award exemplary punishments and damages in cases of cyber defamation so as to deter potential offenders. Thirdly, the ISPs can take the following measures to reduce their liability for online defamation: (a) posting clear warnings to potential users of the site not to put libelous material onto the site; (b) periodically monitoring sites and bulletin boards with a view to deleting any problematic material; (c) introducing systems to facilitate the speedy publication of apologies in respect of any statements published on the sites which are found to contain libelous allegations; (d) making access to the site conditional upon the provision by any user of their name, address and other specified identifying data, so that the author of any defamatory statement may later be traced and disclosed to a potential claimant if a claim for defamation is threatened. Fourthly, internet users should protect their own privacy and reputation by: (a) not accepting friend requests from strangers on social networking sites like Facebook, Twitter, Orkut, etc.; (b) reporting such cases to the police instead of ignoring them; and (c) changing their passwords periodically and not disclosing them to anyone. In a nutshell, a collective and comprehensive effort of the government machinery, the ISPs and the individual users is required to curb the menace of newly evolved crimes like cyber defamation.

References
Amar Singh v. Budalia K.S., (1965) 2 Criminal Law Journal 6593 (Patna)
Bennett Coleman & Co. v. Union of India, (1972) 2 SCC 788
Black (1990) Black’s Law Dictionary, 6th ed.
Constitution of India 1950; Article 19(1)(a)
Cubby, Inc. v. CompuServe, Inc., 776 F. Supp. 135 (S.D.N.Y. 1991)
Dolding, L. and Dzioban, S. (1997) “Electronic Communication and the Defamation Act, 1996: Clarity or Confusion?”, Information and Communication Technology Law Review, Vol. 6, p. 55
Dow Jones & Company Inc v. Gutnick, [2002] HCA 56 (Austl)
Emmens v. Pottle, (1885) 16 QBD 354, 357
Godfrey v. Demon Internet Ltd., (1999) 4 All ER 342
Hill v. Church of Scientology of Toronto, (1995) 20 DLR (4th) 190
Indian Information Technology Act, 2000, as amended by the IT (Amendment) Act, 2008; Sections 66A, 79
Indian Penal Code, 1860; Sections 499‐500
Kenneth Love v. William Morrow & Co., 193 A.D.2d 586, 597 N.Y.S.2d 424 (2d Dep’t 1993)
Kohl, U. (2000) “Defamation on the internet – a duty free zone after all?”, Sydney Law Review, Vol. 22, pp. 126‐27
Lunney v. Prodigy Services Company, 94 N.Y.2d 242, 723 N.E.2d 539, 701 N.Y.S.2d 684 (1999)
Macpherson, V. & Cooper, R. (1999) “Universities, Defamation & the Internet”, Modern Law Review, Vol. 62, pp. 58‐78
Ratanlal & Dhirajlal (2010) The Indian Penal Code, 28th ed., p. 686
Schachter, M. (2002) Law of Internet Speech, p. 134
Schneider v. Amazon.com Inc., 31 P.3d 37 (Wash. Ct. App. 2001)
Shakespeare, W. (1622) Tragedy of Othello, The Moore of Venice, Act III, Scene 3, Line 167, London: Thomas Walkley
S.M.C. Pneumatics (India) Pvt. Ltd. v. Jogesh Kwatra, Delhi High Court, Petition No. 1276/2001, decided in 2003
Sood, V. (2010) Cyber Crimes, Electronic Evidence & Investigation: Legal Issues, pp. 122‐23
Stratton Oakmont, Inc. v. Prodigy Servs. Co., 1995 WL 323710 (N.Y. Sup. Ct. May 24, 1995)
Tata Sons Limited v. Greenpeace International & Anr., Delhi High Court, decided on 28th January 2011
U.K. Defamation Act, 1996
U.K. Obscene Publications Act, 1959
U.S. Communications Decency Act, 1996
Vishwanathan, T.K. (2004) “Defamation in Cyberspace”, Amity Law Review, pp. 1‐6
Vizetelly v. Mudie’s Select Library Ltd., (1900) 2 QB 170, 179
Zeran v. America Online Inc., 129 F.3d 327 (4th Cir. 1997)
Haddad, Marwa (2013) “Saudi woman writer wins online defamation complaint case”, http://www.arabnews.com/saudi‐arabia/saudi‐woman‐writer‐wins‐online‐defamation‐complaint‐case
Australian News (2012) “Oz man wins 'landmark defamation case' against Google over images published online”, http://in.news.yahoo.com/oz‐man‐wins‐landmark‐defamation‐case‐against‐google‐095447949.html
Murthy, Raja (2013) “Google challenged in India”, http://www.atimes.com/atimes/South_Asia/KG01Df01.html
Pfanner, Eric (2012) “Libel Case That Snared BBC Widens to Twitter”, http://www.nytimes.com/2012/11/26/technology/26iht‐twitter26.html?_r=0



Identifying Tools and Technologies for Professional Offensive Cyber Operations
Tim Grant 1 and Ronald Prins 2
1 R‐BAR, Benschop, The Netherlands
2 Fox‐IT, Delft, The Netherlands
tim.grant.work@gmail.com
prins@fox‐it.com
Abstract: Since 2008, several countries have published new national cyber security strategies that allow for the possibility of offensive cyber operations. Typically, national strategies call for the establishment of a cyber operations unit capable of computer network defence, exploitation, and, in some nations, attack. The cyber operations unit will be manned by professionals and operate under government authority compliant with national and international law. Our research focuses on offensive cyber operations (i.e. computer network exploitation and attack). The cyber unit must be provided with the right resources, in the form of accommodation, computing and networking infrastructure, tools and technologies, doctrine, and training. The open literature gives an unbalanced view of what tools and technologies a professional group needs because it emphasizes malware and, to a lesser extent, the delivery media used by cyber criminals. Hence, the purpose of this paper is to identify systematically the tools and technologies needed for professional, offensive cyber operations. A canonical model of the cyber attack process was enhanced by adding control inputs and mechanisms, and tools and technologies were extracted from these mechanisms. Both the enhanced model and the set of tools and technologies have been checked by a subject matter expert.
Keywords: offensive cyber operations; attack; canonical process model; tools; technologies; SADT

1. Introduction
1.1 Background
Since 2008, several countries have published new national cyber security strategies that allow for the possibility of defensive and offensive cyber operations. For example, the Netherlands’ Defence Cyber Strategy (MinDef, 2012) lists defence, offence, and intelligence as spearheads and calls for the establishment of a Defence Cyber Command (DCC). The DCC will be manned by professionals and operate under government authority compliant with national and international law. The centre will need to be provided with resources, in the form of personnel, accommodation, computing and networking infrastructure, tools and technologies, doctrine, and training. The Netherlands is one of ten to twelve nations developing offensive cyber capabilities (Lewis & Timlin, 2011). The research reported in this paper focuses on the tools and technologies needed for offensive cyber operations, i.e. the combination of Computer Network Exploitation (intelligence) and Computer Network Attack.

A quick look at the information available on the Internet shows that there are many lists of tools used by cyber criminals and, to a lesser extent, by ethical hackers. The emphasis is on malware. For example, the SANS institute – a well‐known cooperative research and education organization for security professionals – identifies worms, rootkits, exploits, Trojans, and backdoors (SANS, 2012). The MalwareInfo site (MalwareInfo, 2012) – provided by a consortium of anti‐malware tool suppliers to inform Dutch home computer users – lists virus, worm, spyware and adware, keylogger, tracking cookie, browser hijacker, Trojan, dropper, dialler, rootkit, backdoor, and rogueware tools.

There are at least three reasons why such lists give an unbalanced view of the tools and technologies that a professional team operating under government authority would need. Firstly, cyber criminals and ethical hackers often operate as individuals, and not as a professional group. Rivalry among criminals hinders cooperation. It is not unusual for one criminal to take over the target or botnet of another. While criminals may be part of a loose group, this is more to exchange knowledge on specific vulnerabilities, targets, or attack technologies than to attack a target together as a team. Ethical hackers tend to concentrate on penetration testing and on reporting what target information is at risk, rather than on the whole attack process. Secondly, the lists are unlikely to emphasize the mundane tools supporting the attack process “logistics”. For example, intercepts show that cyber criminals use chat for communicating with one another (Honeynet, 2008), but this

technology does not appear in the lists. Thirdly, cyber professionals from nations with an operational offensive capability are loath to reveal their capabilities (McAfee, 2011). Hence, another way must be found to identify tools and technologies. Ways in which this could be done include:

Case study. Researchers could observe a set of cyber attacks and note the tools and technologies that the attackers used.

Software engineering. The attack process could be modelled using software engineering techniques and the tools and technologies extracted from the analysis.

Literature survey. A canonical list of tools and technologies could be constructed by comparing the multiple lists, taxonomies, and ontologies to be found in the open literature. There are three sources of literature: experienced hackers, the vendors of cyber security software products (e.g. anti‐virus (AV) packages, firewalls (FW), and intrusion detection systems (IDS)) and services, and scientific publications.

Besides the questionable ethicality of the case study approach, it is doubtful if this would be representative of offensive cyber operations performed by a professional group. A literature survey is likely to over‐emphasize malware. Hence, this paper takes the software engineering approach. In the research reported here, the canonical process model from Grant, Burke & van Heerden (2012) was enhanced by adding control inputs and mechanisms. A set of tools and technologies was extracted from the mechanisms. Both the enhanced model and the set of tools and technologies were checked by a subject matter expert.

1.2 Purpose, scope, and paper structure
The purpose of this paper is to identify the tools and technologies needed for professional offensive cyber operations, based on software engineering analysis. The scope of our research is restricted to operations:

Performed by a professional group of specialists operating under governmental authority;

Compliant with national and international law and with the prevailing doctrine and rules of engagement (RoEs);

In response to an incoming attack or the impending threat of an incoming attack;

Where the response requires a penetrative counter‐attack on a new target; and

Where the infrastructure (including Command & Control) for offensive cyber operations is already in place.

Legal issues are outside the scope of our research. This paper consists of five sections. After an introductory section, section 2 outlines the research methodology. Section 3 describes the enhancement of the canonical attack process model. Tools and technologies are extracted from the enhanced model in section 4. Section 5 draws conclusions, states the contributions and limitations of the research, and outlines further research needed.

1.3 Operational context
We assume that the Dutch national and defence cyber strategy documents (MinJus, 2011; MinDef, 2012) accurately define a professional group capable of offensive cyber operations. The national strategy depends on collaboration between public and private organizations, between ministries, and with other nations. At the governmental level, cyber security activities are led by the Ministry of Security & Justice in partnership with the Ministries of Economic Affairs, Agriculture & Innovation, Defence, and Internal Affairs. Offensive operations are the responsibility of the Ministry of Defence, with the support of military intelligence (Schnitger & Folmer, 2011). An offensive cyber operation can take three possible forms:

Counter‐attack when the nation has already suffered an attack;

Pro‐active defence when an impending attack threatens the nation;

An attack on opposing forces with and without associated conventional military action.

Although our research focuses on the first two, the organization and attack process should be compliant with the existing arrangements for conventional military action to make the third form possible when necessary. This implies that the professional group would include the following specialist sub‐groups (see Figure 1):

Strategists would determine whether an incoming attack or threat of an impending attack is grave enough to require a military response and, when authorized, would determine the goals and the rules of engagement (RoEs) for the counter‐attack.

Intelligence analysts would select and gather information about the target organizations and their computer‐based systems, both before (e.g. attribution and reconnaissance) and after (e.g. battle damage assessment) the counter‐attack.

Planners would plan the counter‐attack in detail, obtain and prepare the resources needed, and test the plan in a simulated environment.

Weaponeers would prepare cyber weapons to the planners’ specifications by integrating payloads (e.g. exploits) into delivery platforms (e.g. USB stick, virus).

Cyber operatives would rehearse and execute the tested plan using the prepared resources, aiming to achieve the attack goals defined by the strategists.

Figure 1: Operation in context

The professional group has access to a variety of resources. These include overt and covert sources of information, an archive of data and documents (e.g. from previous operations), a repository of tools and technologies, a computing and communications infrastructure, and a set of facilities (i.e. buildings or other accommodation). The operation would be monitored and controlled by a military commander. Separate from the professional group, governmental authorities would provide governance at the political and national level, approving each phase in the operation if justified by the information obtained up to that point. C2 and governance processes are outside the scope of our research.

2. Methodology
The research reported in this paper uses rational reconstruction and the Structured Analysis & Design Technique (SADT). In philosophy, rational reconstruction (RR) is defined as “a philosophical and linguistic method that systematically translates intuitive knowledge of rules into a logical form” (Habermas, 1976). The canonical process model for an attack was obtained by analysis, i.e. breaking down the various process models found in the literature into their component steps. Then the “best‐of‐breed” steps were synthesized into the canonical model. SADT notation was used to represent the text describing how the output produced by one

step was consumed as an input into one or more subsequent steps. Using a formalisation like SADT enforces systematic analysis, and the graphical notation provides Habermas’ (1976) “logical form”. SADT (Marca & McGowan, 1988) is a software engineering technique that is highly suited to specifying the behaviour of systems in terms of functional processes. The graphical notation represents the system as a network of boxes (known as “nodes” and representing processes) interconnected by arrows (known as “ICOMs”, i.e. inputs, controls, outputs, and mechanisms). Arrows entering a box from the left represent data input, and arrows exiting a box from the right represent data output. Arrows entering a box from above represent control inputs, constraining or guiding the process. Arrows entering a box from below represent the mechanisms or resources needed to perform the process. Details of the SADT methodology and validation rules may be found in (NIST, 1993). The analysis reported in this paper was supported by the IDEF shareware tool, which facilitates creating SADT diagrams and validating them according to the NIST rules.
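To make the notation concrete, the following minimal sketch (in Python; the node and arrow names are our own illustrative choices, not output of the IDEF tool) captures an SADT box with its four ICOM arrow sets and checks one basic well‐formedness rule from the IDEF0 standard, namely that every box needs at least one control arrow and one output arrow (NIST, 1993):

from dataclasses import dataclass, field

@dataclass
class Node:
    """One SADT/IDEF0 box: a process plus its ICOM arrow sets."""
    number: str                                    # e.g. "A0", "A43"
    name: str                                      # e.g. "Operation"
    inputs: set = field(default_factory=set)       # data consumed (left)
    controls: set = field(default_factory=set)     # constraints (top)
    outputs: set = field(default_factory=set)      # data produced (right)
    mechanisms: set = field(default_factory=set)   # resources/tools (bottom)
    children: list = field(default_factory=list)   # decomposition

def check(node: Node) -> list:
    """Collect violations of the rule that every box needs at least
    one control arrow and one output arrow."""
    problems = []
    if not node.controls:
        problems.append(node.number + ": no control arrow")
    if not node.outputs:
        problems.append(node.number + ": no output arrow")
    for child in node.children:
        problems.extend(check(child))
    return problems

# A small, invented fragment of the enhanced canonical model (section 3).
operation = Node(
    "A0", "Operation",
    inputs={"effects of incoming attack"},
    controls={"Authorisation", "Law"},
    outputs={"reports", "cyber weapons", "embedded flag", "C2 open flag"},
    mechanisms={"professional group", "victim system", "target system(s)"},
    children=[Node("A1", "Determine Goals",
                   controls={"Law"},
                   outputs={"goals", "RoEs"},
                   mechanisms={"Strategists", "Analysts"})],
)
print(check(operation) or "model fragment is well-formed")

This representation is deliberately minimal; a fuller treatment would also model the arrow connections between boxes so that the complete set of NIST (1993) validation rules could be checked mechanically.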

3. Enhancing the canonical process model
In earlier work, Grant, Burke and van Heerden (2012) developed a canonical model of the attack process by rationally reconstructing a set of seven process models found in the literature. Each model was analysed using SADT without tool support. The canonical model was then constructed by linking inputs to outputs, but controls and mechanisms were not identified. Since then, we have enhanced the earlier model by systematically adding controls and mechanisms. Compliance with the NIST (1993) methodology was aided by using the IDEF tool. Figure 2 shows the resulting process breakdown for an offensive cyber counter‐attack or pro‐active defensive operation. Each process has been numbered following the NIST conventions.

Figure 2: Canonical model – process breakdown

Analysis started with a context diagram (not shown). Key inputs to Operation are the actual or threatened effects of the attack on the victim systems. The victim systems themselves are included as a mechanism because access to these systems is needed to establish how the attack was carried out and who was responsible. Likewise, access is needed to the target system(s), i.e. the ones at which the counter‐attack will be directed, and their environment. Other mechanisms include the professional group and their resources. Control inputs include “Authorisation”, showing that the operation can only be executed with approval from the authorities at the political / national level, and “Law” representing not only national and international law but also the prevailing doctrine for offensive cyber operations. There are several outputs from Operation. There are requests to the governmental or political authorities to proceed with the next phase in the attack. A wide variety of reports are generated during the course of the attack. The cyber weapons used are themselves an output, together with two flags to indicate what level of success has been reached in executing the attack. The “embedded” flag indicates that the cyber weapons have been embedded into the target system. The “C2 open” flag indicates that the target system has linked up to the professional group’s C2 servers and is ready to receive commands. The “Commands (incoming)” input and the “Commands (outgoing)” output apply when the counter‐attack is aimed at converting the target system into a bot to be included in a botnet under the professional group’s control. The “effects” output represents

the actual effects of attacking the target system, which may differ from the attack goals. Effects may be unintended and even undesirable, e.g. collateral damage to other systems in the target’s environment. Figure 3 shows how Operation is decomposed. Splitting the operation into five phases makes it consistent both with the NATO standard approach to operations planning and with the assumed specialisations within the professional group. As can be seen, the Professional Group mechanism is split into Strategists, Analysts, Weaponeers, Planners, and Operatives.

Figure 3: Operation (node A0) as five phases

Determine Goals (Phase 1) splits into four processes (not shown). In Monitor Threats, the Analysts continually monitor selected parts of cyberspace for signs of an impending incoming attack. They use sniffers, packet diversion tools, data extraction tools, and Advanced Persistent Threats (APTs) if these have been previously inserted into the potential attackers’ systems. The outcome is a threat report. In Assess Effects, the Analysts react to an incoming attack, using forensic tools and reverse engineering techniques to establish what happened and who was responsible. They need access to the victim system(s). They report the impact of the attack to the Strategists. In Obtain Authority, the Strategists receive the threat and/or impact reports, and establish the severity of the threat and/or incoming attack. If this breaches legal criteria, then the Strategists request authorisation from the authorities at political and national level to initiate a (pre‐emptive) counter‐attack. They need secure communications with the authorities, perhaps separate (e.g. at a higher security level) from the communications linking the professional group. If the Strategists receive authorisation, then the Define Goals & RoE process starts. Based on the threat and/or impact reports, the Strategists define the counter‐attack goals and the RoEs. They need access to Office Automation (OA) tools, such as word‐processing, spreadsheet, database, email, and presentation software. Finally, all the processes generate an event‐log that will eventually be used in evaluating the operation.

Select Targets (Phase 2) splits into four processes (not shown). In Footprint Organizations, the Analysts gather information about the organization(s) to be attacked when given authorization by the political / national authorities. The aim is to identify and localize the organization(s)’ computer‐based systems and key persons who could be useful as the targets of social engineering techniques. The Analysts use open‐source information

sources, such as the organization(s)’ public websites, any reports that may have been published by or about the organization, and other information that can be obtained by searching the web, including social networking sites. Where necessary, this information could be supplemented by information obtained from covert sources. The resulting footprint is a database of relevant information about the organization(s), their computer systems, and key persons. In Recce System(s), the focus of the Analysts’ attention is on these computer systems. They fill a database describing the topologies of these systems and possible paths to access them, e.g. to deliver a cyber weapon. The Analysts need access to the (potential) target system(s) and their environment. Moreover, the Analysts need tools to scan and map the target system(s) and to detect the presence of FW and AV software, IDS, sniffers, and honeypots. Scanning includes enumerating the make, type, and update status of the hardware and software in the target system(s). To hide their reconnaissance activities, the Analysts need DNS zone transfer tools. Since reconnaissance may involve manipulating key persons in the target organization(s) and their suppliers, the Analysts also need social engineering skills. In Target List, the Analysts use the information gained from footprinting and reconnaissance to draw up a list of the target system(s) to be attacked in order to achieve the attack goals. In Identify Vulnerabilities, the Analysts identify what vulnerabilities are known to exist in the target system(s)’ hardware and software. This may require access to the target system(s), but most information is available on the web in vulnerability databases, on the websites of the hardware and software suppliers, on Computer Emergency Response Team (CERT) websites, and from hacker fora. If the professional group has access to or can generate information on zero‐day vulnerabilities, this covert information would be added to the vulnerabilities database.

Plan (Phase 3) splits into three processes (not shown). In Plan Attack, the Planners use the target list and the information on the target system(s)’ topologies, access paths, and vulnerabilities to draw up an attack plan designed to achieve the counter‐attack goals. The Planners need a planning tool, a plan template, and databases listing the payloads and delivery platforms available to the group. Associated with the resulting attack plan will be a set of cyber weapon specifications. In Prepare Weapons, the Weaponeers prepare the cyber weapons according to the Planners’ specifications. They need access to the repository of payloads and platforms. Integration would be performed using a software development environment (SDE). To avoid detection, the weapons would be encrypted and tested against the AV, FWs, and IDSs detected in the target system(s). In Test Plan, the Planners test the prepared cyber weapons in a simulation of the target system(s). When tested successfully, the tested plan is output.

Figure 4: Phase 4: Counter‐attack (node A4)

Counter‐attack (Phase 4) splits into four processes (Figure 4). In Distribute Plan, the tested plan is distributed using secure communications to the Operatives who will execute it, and they are briefed on the counter‐attack goals and RoEs. To preserve operational security, this is done “just in time” when authorized by the Authorities. In Rehearse, the Operatives practise executing the tested plan using the prepared cyber weapons in a simulator. When they are ready, the Operatives execute the plan in the Penetrate & Control process, using the prepared cyber weapons, a set of penetration and control (P&C) tools, and access to the target system(s). During the course of this process, the Operatives will emit the “embedded” flag when they succeed in embedding the weapons into the target system(s). The “C2 open” flag will be emitted if the target system(s) successfully join the professional group’s botnet. The Penetrate & Control process is decomposed further in Figure 5. In Violate System(s), the Operatives attempt to achieve the counter‐attack goals. This may call for one or more of the security principles to be violated. Data may be extracted from the target system(s) to violate their confidentiality. Their integrity may be violated by modifying or deleting information stored within or passing through the target system(s). The availability of the target system(s) may be violated by disconnecting the users, by denying them access to some or all of the services and information the system(s) provide, and/or by delaying the provision of information to users.

Penetrate & Control has been decomposed into four sub‐processes (Figure 5). In the Penetrate sub‐process, the Operatives exploit a vulnerability to gain access within the target system(s)’ firewalls. This may be done using a firewall tunnelling tool or social engineering techniques. Log editors/wipers are needed to erase the Operatives’ actions. When the target system(s) are penetrated, then the Take Control sub‐process can begin. The Operatives use a variety of tools and techniques (e.g. a rootkit, password crackers, and/or social engineering) to raise their access privileges to root or superuser. If necessary, the Operatives may install their own command interpreter on the target system(s). In Embed Weapon(s), the Operatives exploit their control over the target system(s) to embed backdoors, enabling direct access to the system(s) in the future, network mappers to expand their view on the targets’ environment, and email/chat servers to facilitate data extraction and communication with the target system(s). In Connect to C2, the Operatives connect the target system(s) to the professional group’s botnet via the C2 channel, so that the target system(s) can receive incoming commands and send outgoing commands to other bots.

Figure 5: Decomposition of Penetrate & Control (node A43)

Lessons learned (Phase 5) splits into four processes. In Assess Damage, the Analysts may access the target system(s) again when authorized, to establish what lasting damage has actually been achieved by the counter‐attack. The tools needed are largely the same as used in reconnaissance. In Unintended Effects, the Analysts

explore the target system(s)’ environment and seek information from the public media, the Internet, and collateral systems to establish what unintended effects, if any, the counter‐attack has caused. In Evaluate Operation, the whole professional group reviews the logged events, the actual damage achieved, and the unintended effects against the goals and RoE for the operation, perhaps replaying some of the events in the simulator. They prepare an evaluation report using the report template, specifically identifying any new lessons learned (LLs) that are not already in the LL database. In Disseminate LL, the group disseminates the new lessons learned to those who need to know them (e.g. the authorities, other Ministries, and/or other professional groups) by means of secure communications.

4. Extracting tools and technologies
The tools and technologies were extracted from the SADT diagrams by enumerating all the Resources used per phase at the lowest level of decomposition. These resources were then grouped into the categories given in the operational context diagram, as shown in the following table (node numbers identify the processes in which each resource is used):

Resource | Malware? | Used in
Information sources:
Internet | | Phase 2 (A21); Phase 5 (A52)
Public media | | Phase 5 (A52)
Target organization’s website | | Phase 2 (A21)
Target organization’s reports | | Phase 2 (A21)
Web search tools | | Phase 2 (A21)
Open‐source information | | Phase 2 (A21)
Covert information sources | | Phase 2 (A21)
Suppliers’ websites | | Phase 2 (A24)
CERT website | | Phase 2 (A24)
Hacker fora | | Phase 2 (A24)
Victim system | | Phase 1 (A12)
Target system(s) | | Phase 2 (A22, A24); Phase 4 (A43, A44); Phase 5 (A51)
Target’s environment | | Phase 2 (A22); Phase 5 (A52)
Archive:
Vulnerability database (DB) | | Phase 2 (A24)
Payload database | | Phase 3 (A31)
Platform database | | Phase 3 (A31)
Plan template | | Phase 3 (A31)
Report template | | Phase 5 (A53)
Lessons learned database | | Phase 5 (A53)
Repository:
Sniffers | Yes | Phase 1 (A11)
Packet diversion tools | | Phase 1 (A11)
Extraction tools | | Phase 1 (A11); Phase 4 (A44)
Encryption tools | | Phase 3 (A32)
Forensic tools | | Phase 1 (A12); Phase 5 (A51)
Reverse engineering tools | | Phase 1 (A12); Phase 5 (A51)
Data editor | | Phase 4 (A44)
Software development environments (SDEs) | | Phase 3 (A32)
Payloads (exploits) | Yes | Phase 3 (A32)
Delivery platforms (incl. distribution points & malware droppers) | Yes | Phase 3 (A32)
Anti‐virus (AV) products | | Phase 3 (A32)
Firewalls | | Phase 3 (A32)
Intrusion Detection Systems | | Phase 3 (A32)
Vulnerability scanners | | Phase 2 (A22)
Port scanners | | Phase 2 (A22)
Network mapping tools | | Phase 2 (A22); Phase 4 (A433)
Sniffer detectors | | Phase 2 (A22)
IDS detectors | | Phase 2 (A22)
Honeypot detectors | | Phase 2 (A22)
Zone transfer tools | | Phase 2 (A22)
Planning tool | | Phase 3 (A31)
Firewall tunnel | Yes | Phase 4 (A431)
Log editors/wipers | Yes | Phase 4 (A431)
Rootkit | Yes | Phase 4 (A432)
Command interpreter | Yes | Phase 4 (A432)
Password cracker | Yes | Phase 4 (A432)
Privilege tools | Yes | Phase 4 (A432)
Backdoors | Yes | Phase 4 (A433)
Email server | Yes | Phase 4 (A433)
Chat server | Yes | Phase 4 (A433)
C2 server | | Phase 4 (A434)
C2 (subliminal) channels & backconnects/tunnelling | | Phase 4 (A434)
Botnet | Yes | Phase 4 (A434)
Advanced Persistent Threat | Yes | Phase 1 (A11)
Techniques:
Social engineering | | Phase 2 (A21, A22); Phase 4 (A431, A432, A44)
Infrastructure:
Operations centre & ops area | | All phases
Software development area | | All phases
Laboratory area | | All phases
Secure communications | | All phases
Office automation products | | All phases
Content / database management systems | | All phases
Self‐defence measures | | All phases
Simulator (a.k.a. test range) | | Phase 3 (A33); Phase 4 (A42); Phase 5 (A53)

Two types of resource – communications and OA tools – were identified explicitly in a handful of processes, but found to be implied in many other processes. For example, for the Analysts to send the threat and impact reports to the Strategists (in Phase 1) there would have to be a communications system connecting them. Moreover, the Analysts would need OA tools to prepare the reports. Therefore, we considered such resources to be ubiquitous, and assigned them to the category Infrastructure. While not identified explicitly, we considered it obvious that the cyber operations centre would need to have strong self‐defence measures (FW, AV, IDS, honeypot, etc.), because it would be an attractive target for a pre‐emptive or a counter‐counter‐attack.
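The enumeration itself is mechanical once the enhanced model is captured in machine‐readable form. The following minimal sketch (reusing the illustrative Node class from the listing in section 2, again with invented example data) walks the decomposition to its lowest‐level boxes, collects each mechanism, and records the phase – taken here from the first digit of the node number – and the node(s) in which it is used:

from collections import defaultdict

def leaves(node):
    """Yield the lowest-level boxes of the decomposition."""
    if not node.children:
        yield node
    else:
        for child in node.children:
            yield from leaves(child)

def extract_resources(root):
    """Map each mechanism (resource) to the (phase, node) pairs using it."""
    usage = defaultdict(set)
    for box in leaves(root):
        phase = box.number[1] if len(box.number) > 1 else "?"
        for resource in box.mechanisms:
            usage[resource].add((phase, box.number))
    return usage

# Two example leaf processes from the canonical model:
model = Node("A0", "Operation", controls={"Law"}, outputs={"reports"},
             children=[
                 Node("A11", "Monitor Threats", controls={"RoEs"},
                      outputs={"threat report"},
                      mechanisms={"sniffers", "packet diversion tools"}),
                 Node("A22", "Recce Systems", controls={"RoEs"},
                      outputs={"topology DB"},
                      mechanisms={"port scanners", "zone transfer tools"}),
             ])
for resource, used_in in sorted(extract_resources(model).items()):
    print(resource, "->", sorted(used_in))

Grouping the resulting resources into the categories of the operational context diagram then reproduces a table like the one above.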

5. Conclusions and further work
Several countries, including The Netherlands, are in the process of establishing a Defence Cyber Command (DCC) capable of offensive cyber operations. Combining Computer Network Exploitation and Computer Network Attack, offensive cyber operations would be performed by a professional group of specialists under government authority in compliance with national and international law. The purpose of this paper is to identify the tools and technologies that a DCC would need. The research reported here builds on earlier work in creating a canonical process model for offensive cyber operations (Grant et al, 2012). We enhanced the earlier model by adding SADT control inputs and mechanisms following the NIST (1993) methodology, aided by the IDEF0 shareware tool. Tools and technologies were extracted from the mechanisms. A subject matter expert checked the canonical process model and the extracted set of tools and technologies. Independently, the canonical process model has been checked by using it to “walk through” a real cyber incident. The resulting set of tools and technologies includes ten information sources, six databases, a repository of some thirty software tools, and social engineering techniques. The DCC would need access to the system attacked or threatened by the incoming attack, to the intended target system(s) to be counter‐attacked, and the systems in the targets’ environment. Finally, the DCC would need an infrastructure consisting of working areas, secure communications, office automation and content/database management software, strong self‐defence measures, and a simulation environment. It is noteworthy that malware represents a small fraction of the software tools identified.

The key contribution of this paper has been to identify a set of tools and technologies for professional offensive cyber operations. This should help those authorities responsible for establishing DCCs. Nevertheless, the research has several limitations. Most importantly, the canonical process model on which this research is based lacks any representation of temporal or other quantitative aspects. Timing is very important in cyber operations. By contrast with conventional military operations, they can be over in seconds or minutes, rather than weeks, months or even years. Consequently, many of the processes shown in the canonical process model will have to be automated. This brings challenges, especially in the first two phases of determining the goals and planning the counter‐attack. Another major challenge will be an organizational one. The various specialities making up the professional group will have to work extremely closely together. Given that there is traditionally a “Chinese wall” between the intelligence services (Analysts) and the military services (the other specialisations), some way must be found to break down or tunnel through this wall. Various possibilities would be to ensure each group includes at least one representative of each specialism, to co‐locate the group, to cross‐train each member of the group in another specialisation, and to train the group together using exercises and past incidents. A similar challenge exists in the interplay between the professional group and the authorities. The authorities must understand offensive cyber operations, without succumbing to “regulatory capture” by the professional group. There are many directions in which further work could go. For example, the canonical process model clearly needs to be subjected to a “reality check” by using it in simulated operations, exercises, and eventually live operations. Moreover, additional research is needed into whether the process model also applies to the third form of operations, namely an attack on opposing forces with and without associated kinetic military action. Furthermore, what activities the DCC should perform in the periods between operations – the “interbellum” – needs to be studied. Clearly, such activities would include training, monitoring possible opposing forces, gathering intelligence about their computer‐based systems, contingency planning, and developing assets, such as finding zero‐day vulnerabilities.

References
Denning, P.J. & Denning, D.E. (2010) “Discussing Cyber Attack”, Communications of the ACM, Vol 53, No 9, pp 29‐31.
Grant, T.J., Burke, I., & van Heerden, R.P. (2012) “Comparing Models of Offensive Cyber Operations”, Proceedings, 7th International Conference on Information Warfare & Security (ICIW 2012), Seattle, WA, USA, March.
Habermas, J. (1976) Communication and the Evolution of Society, Beacon Press, Toronto.
Honeynet (2008) Know your Enemy: Tracking Botnets, Appendix C: Chatlog – Watching Attackers at Their Work, The Honeynet Project. Accessed from http://www.honeynet.org/papers/bots, 29 December 2011.
Lewis, J.A. & Timlin, K. (2011) Cybersecurity and Cyberwarfare: Preliminary Assessment of National Doctrine and Organization, Center for Strategic and International Studies, Washington DC, USA.
Lin, H. (2009) “Lifting the Veil on Cyber Offense”, IEEE Security & Privacy, Vol 7, No 4, pp 15‐21.
MalwareInfo (2012) Soorten Malware (in Dutch: Types of Malware), Malware Information and Prevention. Accessed from http://malwareinfo.nl/malware‐2/soorten‐malware/, 11 October 2012.
Marca, D. & McGowan, C.L. (1988) SADT: Structured Analysis and Design Technique, McGraw‐Hill, NY.
McAfee (2011) 2012 Threats Predictions, McAfee Labs, Santa Clara, CA, USA.
MinDef (2012) Brochure Defensie Cyber Strategie (in Dutch: Defence Cyber Strategy brochure), Ministry of Defence, The Hague, The Netherlands, published 27 June 2012. Accessed from http://www.rijksoverheid.nl/documenten‐en‐publicaties/brochures/2012/06/27/brochure‐defensie‐cyber‐strategie.html, 11 October 2012.
MinJus (2011) Dutch National Cyber Security Strategy (in English), Ministry of Security & Justice, The Hague, The Netherlands, published 23 February 2011. Downloadable from http://www.govcert.nl/english/service‐provision/knowledge‐and‐publications/factsheets/national‐cyber‐security‐strategy‐launched.html, accessed 11 October 2012.
NIST (1993) Integration Definition for Function Modeling (IDEF0), Federal Information Processing Standard Publication 183, 21 December 1993.
SANS (2012) Malware FAQ, SANS Institute. Accessed from http://www.sans.org/security‐resources/malwarefaq/, 11 October 2012.
Schnitger, S. & Folmer, H. (2011) “Cyber Ontwikkelingen bij Defensie” (in Dutch: Cyber Developments in the Ministry of Defence), Intercom, 2011‐4, pp 17‐20.



The Emergence of Cyber Activity as a Gateway to Human Trafficking
Virginia Greiman 1, 2 and Christina Bain 2
1 Boston University, Boston, USA
2 Harvard Kennedy School, Carr Center for Human Rights Policy, Program on Human Trafficking and Modern Slavery, Cambridge, USA
ggreiman@bu.edu
ggreiman@law.harvard.edu
Christina_Bain@Harvard.edu
Abstract: Human trafficking is a worldwide crisis, and the U.S. Department of State’s 2012 Trafficking in Persons Report highlights the critical need to address this issue both at home and abroad. Today, it is estimated that as many as 27 million people around the world are victims of trafficking into the sex trade and other forms of servitude known as modern slavery or trafficking in persons. “Trafficking in persons” and “human trafficking” have been used by the U.S. Department of State and other governmental and multinational organizations as umbrella terms for the act of recruiting, harboring, transporting, providing, or obtaining a person for compelled labor or commercial sex acts through the use of force, fraud, or coercion. Recent research reflects that the exploitation of people through trafficking is being channeled heavily through cyber activity – Internet services, local bulletin board services, or any device capable of electronic data storage or transmission – including social networking sites like Craigslist, Facebook and MySpace, as well as email, instant messaging, text messaging, fictitious employment advertisements, immigration assistance and online bride websites. The goals of the research include: (1) identifying some of the key present challenges in cybertrafficking investigations; (2) understanding the impact of cybertrafficking in our society, locally, nationally and globally; and (3) assessing the role of the private sector in regulating the Internet for human trafficking activity. Presently, human trafficking scholarship and education is in its early stages, particularly as it relates to understanding victim protection and assistance, technological, evidentiary and surveillance issues, and international legal frameworks for the prevention of human trafficking. Greater awareness and education are needed to assist in the challenges faced by our Executive and Legislative Branches as they address important issues of national security and the growing incidence of cybercrime. This paper will introduce common law legal doctrines, procedural and evidentiary tools, forensic analysis, and case studies that will assist in creating a deeper understanding of the impact of cyber activity on the human trafficking industry, in the effort to find greater solutions for the prevention and prosecution of such crimes, and for the protection of the innocent from the growing incidence of cyber activity as it relates to human trafficking around the globe.
Keywords: human trafficking, cyberlaw, cybercrime, modern slavery

1. Introduction to human trafficking and modern slavery
The United States Trafficking Victims Protection Act (TVPA) of 2000 (Pub. L. 106‐386), as amended, and the United Nations Palermo Protocol to Prevent, Suppress and Punish Trafficking in Persons describe human trafficking using a number of different terms. Under United States federal law, “severe forms of trafficking in persons” includes both sex trafficking and labor trafficking, as defined below:

Sex trafficking is the recruitment, harboring, transportation, provision, or obtaining of a person for the purposes of a commercial sex act, in which the commercial sex act is induced by force, fraud, or coercion, or in which the person induced to perform such an act has not attained 18 years of age (22 USC § 7102; 8 CFR § 214.11(a)).

Labor trafficking is the recruitment, harboring, transportation, provision, or obtaining of a person for labor or services, through the use of force, fraud, or coercion, for the purposes of subjection to involuntary servitude, peonage, debt bondage, or slavery (22 USC § 7102).

On the international level, the Palermo Protocol defines Trafficking in Persons as:

The recruitment, transportation, transfer, harbouring or receipt of persons, by means of the threat or use of force or other forms of coercion, of abduction, of fraud, of deception, of the abuse of power or of a position of vulnerability or of the giving or receiving of payments or benefits to achieve the consent of a person having control over another person, for the purpose of exploitation. Exploitation shall include, at a minimum, the exploitation of the prostitution of others or other forms of sexual exploitation, forced labour or services, slavery



or practices similar to slavery, servitude or the removal of organs or other types of exploitation. (Palermo Protocol, Article 3, para. (a))

Table 1 shows that, on the basis of the definition given in the Trafficking in Persons Protocol, trafficking in persons has three major elements: (1) the Act (what is done); (2) the Means (how it is done); and (3) the Purpose (why it is done).

Table 1: Elements of human trafficking

Act of Trafficking | Means of Trafficking | Purpose of Trafficking
Recruitment | Threat or use of force | Exploitation, including:
Transport | Coercion | Prostitution of others
Transfer | Abduction | Sexual exploitation
Harbouring | Fraud | Forced labour or services
Receipt of persons | Deception | Modern slavery or similar practices
 | Abuse of power or a position of vulnerability | Servitude or the removal of organs
 | Giving payments or benefits to achieve consent | Other types of exploitation

2. Criminalization of human trafficking
The definition contained in article 3 of the Trafficking in Persons Protocol is meant to provide consistency and consensus around the world on the phenomenon of trafficking in persons. Article 5 therefore requires that the conduct set out in article 3 be criminalized in domestic legislation. In addition to the criminalization of trafficking itself, the Trafficking in Persons Protocol also requires the criminalization of:

Attempts to commit a trafficking offence

Participation as an accomplice in such an offence

Organizing or directing others to commit trafficking.

The Protocol further requires that domestic legislation should adopt the broad definition of trafficking prescribed in the Protocol. The legislative definition should be dynamic and flexible so as to empower the legislative framework to respond effectively to trafficking which: (1) occurs both across borders and within a country (not just cross‐border); (2) is for a range of exploitative purposes (not just sexual exploitation); (3) victimizes children, women and men; and (4) takes place with or without the involvement of organized crime groups. In the United States, sex trafficking was criminalized under 18 U.S.C. § 1591, Sex trafficking of children or by force, fraud, or coercion, which makes it illegal to recruit, entice, provide, harbor, maintain, or transport a person, or to benefit from involvement in causing the person to engage in a commercial sex act, knowing that force, fraud, or coercion was used or that the person was under the age of 18.

3. Definition and scope of cybertrafficking
While the traditional means of human trafficking remain in place, cyber technologies give traffickers the unprecedented ability to exploit a greater number of victims and advertise their services across geographic boundaries (Latonero 2011). Importantly, the extent to which these technologies are used in both sex and labor trafficking is unclear and is the subject of emerging research. In recent years the term "cyber" has been used to describe anything that has to do with computers, networks and the Internet, particularly in the security field. However, the contours and meaning of “cybertrafficking” have not yet been constructed to any substantial degree in legal or trafficking literature or in practice. Similar definitional development has occurred around the more well‐established umbrella term “cybercrime” over the last few years, and yet considerable debate persists over both the validity of cybercrime as a separate category and the most appropriate scope of the term.



Drawing upon several definitions of human trafficking utilized under the Trafficking Victims Protection Act of 2000 (TVPA), 1 the European Convention on Cyber Crime, 2 the Council of Europe Convention on Trafficking in Human Beings, 3 the United Nations Convention against Transnational Organized Crime Protocol on Human Trafficking, 4 and various state statutory schemes, 5 some commonality among the provisions was identified. A review of cases on the websites of the U.S. Department of Justice Computer Crime and Intellectual Property Section (CCIPS), Harvard Law School's Berkman Center for Internet and Society, and Interpol also revealed no existing definition of cybertrafficking but a diversity of definitions for cyber crimes and trafficking in humans. 6 Because there is no consensus on the meaning of "cybertrafficking", we have developed the following working definition of the term to describe the potential reach of “trafficking on the internet.” However, we should note that a precise definition of the term, while useful for some purposes, is not necessary to understand the importance of the Internet as a gateway to human trafficking and how this activity is being dealt with in selected jurisdictions. "Cybertrafficking" is the "transport of persons," by means of a computer system, Internet service, local bulletin board service, or any device capable of electronic data storage or transmission to coerce, deceive, or consent for the purpose of “exploitation.” Exploitation shall include, at a minimum, the exploitation of the prostitution of others or other forms of sexual exploitation, forced labor or services, slavery or practices similar to slavery and servitude. "Transport in persons" shall mean the recruitment, advertisement, enticement, transportation, sale, purchase, transfer, harbouring or receipt of persons, for the purpose of exploitation with or without the consent of the victim.

4. The use of technology in trafficking
The use of technology in trafficking – cybertrafficking – takes many forms, but these can be roughly grouped into three major categories. The first is the use of the Internet, text messaging, digital cameras, and mobile devices/smartphones to offer, advertise and sell sex services, some of which are provided by trafficked victims. There has been a dramatic shift in the advertising of commercial sex, moving from the streets, sidewalks and printed ads to online classified advertising sites such as backpage.com, until recently Craigslist, and a range of more specialized sites. 7 On September 4, 2010, Craigslist removed its “Adult Services” section after a campaign launched by 17 Attorneys General and several prominent national and international anti‐trafficking organizations, and replaced the link to the section with one word: “censored” (Miller 2010). A variety of cases and prosecutions have revealed how traffickers make sophisticated use of mobile technology to photograph their victims, place and change online ads quickly when they transport their victims to new cities, send photographs of and other information about victims to potential customers in real time to arrange transactions, etc. While empirical data is not available, anecdotal evidence suggests that a substantial majority of sex trafficking in the United States may now be advertised and arranged on the Internet.

1 The Trafficking Victims Protection Act of 2000 (TVPA) defines trafficking in persons as (a) sex trafficking in which a commercial sex act is induced by force, fraud or coercion, or in which the person induced to perform such act has not attained 18 years of age.
2 Chapter 1, Article 1 (d) of the Convention on Cybercrime defines "traffic data" as: “any computer data relating to a communication by means of a computer system, generated by a computer system that formed a part in the chain of communication, indicating the communication’s origin, destination, route, time, date, size, duration, or type of underlying service.” Council of Europe, Convention on Cybercrime, Budapest, 23.XI.2001, opened for signature Nov. 23, 2001, E.T.S. No. 185.
3 Council of Europe Convention on Action against Trafficking in Human Beings (CETS No. 197) defines human trafficking as: “'Trafficking in human beings' shall mean the recruitment, transportation, transfer, harbouring or receipt of persons, by means of the threat or use of force or other forms of coercion, of abduction, of fraud, of deception, of the abuse of power or of a position of vulnerability or of the giving or receiving of payments or benefits to achieve the consent of a person having control over another person, for the purpose of exploitation.”
4 The United Nations Convention against Transnational Organized Crime, adopted by General Assembly resolution 55/25 of 15 November 2000, Protocol on Human Trafficking.
5 See generally, Polaris Project for a World Without Slavery (listing state and federal human trafficking laws), available at: http://www.polarisproject.org/resources/state‐and‐federal‐laws
6 U.S. DOJ/CCIPS, available at www.justice.gov/criminal/cybercrime/intl.html; Harvard Law School Berkman Center for Internet and Society, available at http://cyber.law.harvard.edu/vaw02/module3.html; Interpol, available at http://www.interpol.int/Public/Children/Default.asp
7 For a number of years Craigslist and its "erotic services" and then "adult services" categories were one of the major locations for commercial sex ads. In 2010, under heavy pressure from U.S. State Attorneys General, Craigslist eliminated the specific "adult services" category of ads. Since then, much of the most blatant and explicit advertising for commercial sex has shifted to other sites, particularly backpage.com and certain more‐specialized, "fetish" sites.



The second main category of the use of technology in trafficking is identifying, locating, enticing and recruiting new victims into trafficking and then helping to control the victims once they have been trafficked. This may take the form of using social networking sites like Facebook, MySpace, and others or using direct communications tools like email, instant messaging, and text messages. We have seen evidence that this recruiting function is being used both for sex trafficking and for labor trafficking. Examples of the latter category include creating fictitious employment, immigration assistance and “online bride” websites to lure potential victims into contact with the traffickers. One specific case analyzed involved a trafficking enterprise that used phony immigration advice and counseling web sites to "solicit and recruit alien workers from both abroad and within the United States and to obtain information about these aliens." 8 Although Internet classified sites already have come under intense scrutiny, the role of social networking sites and online classifieds has yet to be fully researched. A third category involves both the advertising and the delivery of coerced sex services over the internet. One case of coerced “cybersex” involved victims offered to customers over the Internet and then forced to perform sex acts for those customers not in person but via Internet webcams and chat technologies. Similarly, the U.S. State Department reports that, in China, many North Korean trafficking victims are subjected to forced prostitution in Internet sex businesses. 9 In November 2012, the USC Annenberg Center on Communication Leadership and Policy (CCLP) issued an important research report on The Rise of Mobile and the Diffusion of Technology‐Facilitated Trafficking. The Report contained the following two key findings on the role of technology in domestic minor sex trafficking: (1) technology‐facilitated trafficking is far more diffuse than initially thought, spreading across multiple online sites and digital platforms; and (2) mobile devices and networks play an increasingly important role that can potentially transform the trafficking landscape. Moreover, the authors noted that the centrality of mobile phones has major implications for counter‐trafficking efforts and may represent a powerful new tool in identifying, tracking, and prosecuting traffickers (CCLP 2010, p. 36).

5. Cybertrafficking legislation
The importance of legislative frameworks in combatting human trafficking has been notably recognized by Secretary of State Hillary Rodham Clinton: The problem of modern trafficking may be entrenched, and it may seem like there is no end in sight. But if we act on the laws that have been passed and the commitments that have been made, it is solvable. - U.S. Secretary of State Hillary Rodham Clinton, June 28, 2011. The 2012 Trafficking in Persons Report highlights the importance not only of the passage of domestic laws consistent with international standards but also of training the law enforcement and justice officials likely to encounter the individuals violating those laws. A law must provide a victim-centered framework for fighting modern slavery in which everyone victimized by trafficking, whether for labor or commercial sexual exploitation, whether a citizen or immigrant, whether a man, woman or child, is considered a victim under the law (TIP 2012, p. 14). In the United States, many states have passed statutes on trafficking and victim protection; however, these laws have only been passed recently, and many do not go far enough in imposing criminal penalties on perpetrators. Significantly, the State Department's 2011 Trafficking in Persons Report noted that "while state prosecutions continue to increase, one study found that less than 10% of state and local law enforcement agencies surveyed had protocols or policies on human trafficking." In 2003, Texas was one of the first states in the nation to criminalize human trafficking. Though the law provides for (1) a statewide task force on trafficking; (2) a four-hour police training program; (3) a victim defense to prostitution; and (4) a simplified burden of proof for prosecutors, it noticeably does not address the unique aspects of the use of technology in trafficking.

8 United States v. Askarkhodjaev, et al. (W.D. Mo.), Indictment, May 6, 2009, available at http://blogs.kansascity.com/files/traffick.pdf
9 U.S. Department of State Trafficking in Persons Report 2010.



On November 21, 2011, Massachusetts also passed a tough new law, An Act Relative to the Commercial Exploitation of People, which strengthens protections for victims of human trafficking and prostitution and increases the punishment for offenders, carrying a potential life sentence for traffickers of children. As part of this anti-human-trafficking law, the Legislature created an interagency task force to address all aspects of human trafficking through policy changes. The task force is charged with addressing human trafficking through service development, demand reduction, system change, public awareness, and training. As noted above, the protections available to trafficking victims vary among states, and minor victims of sex trafficking can even face prostitution charges in some state courts (Matter of B.W. 2010). New York was the first state to pass legislation addressing this issue in 2010, with the passage of the Safe Harbor for Exploited Children Act. Several states have since passed similar acts.

6. Emerging issues in cybertrafficking prosecutions
An emerging issue around cybertrafficking prosecutions is the lack of case law fleshing out the various elements of federal and state statutes. Human trafficking prosecutions themselves appear to be limited in comparison to other areas of criminal law, and many cases appear to be pleaded out before reaching a judge or jury to determine factual issues. As a result, many unknowns exist around the scope of proof and the requisite evidentiary needs. The cases that do exist often view the Internet suspiciously, as lacking in authenticity or trustworthiness. In one Texas case involving human trafficking through the use of the Internet, the court held that the plaintiff's electronic "evidence" was totally insufficient to withstand the defendant's motion to dismiss: "While some look to the Internet as an innovative vehicle for communication, the Court continues to warily and wearily view it largely as one large catalyst for rumor, innuendo, and misinformation" (St. Clair 1999). The court went on to say that "there is no way Plaintiff can overcome the presumption that the information he discovered on the Internet is inherently untrustworthy. Anyone can put anything on the Internet. No web-site is monitored for accuracy and nothing contained therein is under oath or even subject to independent verification absent underlying documentation." Authenticity and admissibility also arose in a case involving control over an Internet account: the court in Commonwealth v. Williams held that printouts of MySpace messages sent from the defendant's brother's MySpace account to a witness were not properly authenticated in the absence of proof of who accessed the account and sent the messages (Williams 2010). Despite the authenticity concerns of some courts in prosecuting these cases, the following cases suggest that there are ways of overcoming them. For example, the use of Internet advertising was the key to a successful human trafficking prosecution in Iowa v. Russell. In this case, the defendant was convicted of human trafficking under Iowa Code § 710A.1(1). The defendant had met two teenaged girls (ages 15 and 16) who had run away from a juvenile home in Nebraska through a woman named "Jazzie." The victims agreed to go on a road trip and were later told they would have to work at strip clubs and as prostitutes. The legal element of "continuing basis" in the human trafficking statute was met because there was evidence of advertising of the victims' sexual services, with photos of the victims, on the Internet. In another ongoing matter, a defendant in Florida and four others were indicted by a federal grand jury for conspiracy to traffic in persons under the age of 18 for purposes of causing such persons to engage in a commercial sexual act under 18 U.S.C. §1594(c) (Wilson 2010). This particular defendant sought to sever his trial from the other alleged co-conspirators. In response, the United States cited evidence of the conspiracy which included Internet ads on backpage.com. The United States alleges that the co-conspirators helped each other with Internet advertising of adult and minor females, such as sharing computers to advertise the sexual services. In addition to the lack of specific legislation governing trafficking through the use of the Internet, critical forensic issues continue to arise in human trafficking cases.
These issues include: (1) the ability of law enforcement to access stored communications; (2) whether an interception includes stored communications; (3) the definition of electronic storage; (4) the admissibility of electronic evidence; (5) the preservation of computer data; and (6) cross-border searches and seizures. As technology evolves, law enforcement must always stay on the cutting edge of technological change and continually invest money and resources in new training and equipment.

7. Cybertrafficking research and collaboration
The increasing international focus on trafficking in persons has started to be reflected in a surprising amount of research on the issue of human trafficking. Since 2000, the International Organization for Migration has tracked the rapid increase in publications on the subject. However, as noted by the U.S. Department of Justice, the research is limited in the number of reported cases due to the enormous difficulty of tracking a global criminal enterprise (DOJ 2011). The U.S. Department of State in its 2011 Trafficking in Persons Report stressed the increased need for information and understanding of the role of technology in trafficking. In the course of our research thus far in the Program on Human Trafficking and Modern Slavery at Harvard's Kennedy School of Government, we have spoken with various trafficking experts, consulted with investigators and prosecutors, and collected filed indictments and other charging documents as well as press accounts of trafficking cases where technology was alleged to have played a role in the selection, recruiting, grooming or control of the victim. The goal of the research is to identify as many case studies of cybertrafficking as possible. Simultaneously, we have sought to interview specific law enforcement officials and prosecutors involved in many of the cases we identify, to gather details on the nature of the technologies used and the role they played in the offenses, and to collect and analyze actual case evidence relating to technology. We also consult with leading trafficking prosecutors and investigators nationwide to glean their knowledge of the scope and nature of technology use and the adequacy of existing laws to address that use. We will continue targeted legal research and analysis to identify emerging case law and best practices surrounding evidentiary issues relating to electronic evidence, particularly from social networking sites and other Internet sources. Due to the unavailability of empirical research, we continue to explore the possibility of developing empirical data with potential governmental or private sector partners for use in mitigating or preventing the use of electronic devices for the commission of human trafficking. The Harvard Cyberlaw Clinic at the Berkman Center for Internet & Society has been instrumental in providing valuable insight into the evidentiary issues faced in trafficking prosecution. In addition, our research team has been forging valuable collaboration links with a number of other researchers working in this space, including danah boyd, a prominent technology and youth-safety advocate, a Fellow at Harvard's Berkman Center for Internet & Society and now a senior researcher at Microsoft Research.10 We have also shared information and approaches with a promising research and advocacy program, the Technology and Trafficking in Persons Research Project at USC's Annenberg Center on Communication Leadership and Policy.11 Highlighted in Table 2 below is a summary of the key objectives and questions raised in our research to date.

Table 2: Key areas of research

Research objective: Ensure a victim-centric focus to understanding the trafficking problem and its impacts.
Research question: Are the international, federal and state laws effective in preventing and resolving human trafficking on the Internet?

Research objective: Analyze trafficking forensics and evidentiary issues in prosecution and the role of the Internet prosecutor.
Research question: What typical and unique evidence gathering techniques are being used successfully by law enforcement and prosecutors?

Research objective: Survey the impact of digital evidence in the courtroom.
Research question: Do the case decisions uphold the use of digital evidence, and how do the rules and procedures impact these decisions?

10 See biography available at http://www.danah.org/
11 http://communicationleadership.usc.edu/projects/technology_trafficking_in_persons.html

8. Conclusions
As recognized by the U.S. Department of State, while human trafficking problems are being addressed at the international level through the passage of domestic legislation under the Palermo Protocol, emerging technologies give rise to new challenges in fighting human trafficking. Though the Internet offers new ways of conducting human trafficking, it also offers opportunities to campaign against trafficking and to provide knowledge about the dangers of trafficking as it impacts the victims. It also offers the ability to proactively monitor and prevent these events before targets become victims. The information security implications of these technologies are areas of active research, and methodologies for protecting victims from cybertrafficking are still evolving. In the interim, collaborative research is critical to the development of security models that protect victims of trafficking, while at the same time developing electronic evidence of trafficking activity that will withstand motions to dismiss in legal tribunals around the world.

References
Bellia, P.L., Berman, P.S. and Post, D.G. (2007). Cyberlaw: Problems of Policy and Jurisprudence in the Information Age, 3rd ed. St. Paul, MN: Thomson West.
Council of Europe (2001). Convention on Cybercrime, Budapest, opened for signature Nov. 23, 2001, C.E.T.S. No. 185.
Commonwealth v. Williams, 456 Mass. 857, 926 N.E.2d 1162 (2010).
Council of Europe (2009). Cybercrime Training for Judges and Prosecutors: A Concept, Project on Cybercrime, www.coe.int/cybercrime, and the Lisbon Network, Strasbourg, France.
Council of Europe (2001). Convention on Cybercrime, Explanatory Report, P 6, Nov. 23, 2001, S. Treaty Doc. No. 108-11, 2001 WL 34368783, 41 I.L.M. 282, C.E.T.S. No. 185.
Council of Europe Convention on Action against Trafficking in Human Beings, C.E.T.S. No. 197.
Craigslist, Inc. v. McMaster, No. 2:2009cv01308 (D.S.C.).
Crook, J.R. (July 2008). Contemporary Practice of the United States Relating to International Law: U.S. Views on Norms and Structures for Internet Governance, The American Society of International Law, American Journal of International Law, 102 A.J.I.L. 648, 650.
Hughes, D.M. (2000). The Internet and Sex Industries: Partners in Global Sexual Exploitation, 19-1 Technology and Society Magazine, available at http://www.uri.edu/artsci/wms/hughes/siii.htm
In the Matter of B.W., 313 S.W.3d 818, 826 (Tex. 2010). This case involved a 13-year-old who was arrested and convicted in Texas for offering to perform an illegal sex act on an undercover officer, despite a state law that persons under 14 cannot consent to sex. The Texas Supreme Court reversed the decision on appeal, noting, "Children are the victims, not the perpetrators, of child prostitution."
Iowa v. Russell, No. 9-906/08-2034, 2010 Iowa App. LEXIS 145 (Iowa Ct. App. 2010), aff'd, 781 N.W.2d 303 (Iowa Ct. App. 2010).
Latonero, M. (2012). The Rise of Mobile and the Diffusion of Technology-Facilitated Trafficking, University of Southern California, Annenberg School for Communication & Journalism, Center on Communication Leadership & Policy, Research Series on Technology and Human Trafficking, November 2012.
Latonero, M. (2011). Human Trafficking Online: The Role of Social Networking Sites and Online Classifieds, University of Southern California, Annenberg School for Communication & Journalism, Center on Communication Leadership & Policy, Research Series on Technology and Human Trafficking, September 2011.
Miller, C.C. (2010). Craigslist Blocks Access to 'Adult Services' Pages, September 4, 2010, New York Times, Technology Business Daily, available at http://www.nytimes.com/2010/09/05/technology/05craigs.html
St. Clair v. Johnny's Oyster & Shrimp, Inc., 76 F. Supp. 2d 773 (S.D. Tex. 1999).
United Nations (2000). Convention against Transnational Organized Crime, adopted by General Assembly resolution 55/25 of 15 November 2000, Protocol on Human Trafficking.
United Nations (2000). Protocol to Prevent, Suppress and Punish Trafficking in Persons, Especially Women and Children, supplementing the United Nations Convention against Transnational Organized Crime.
U.S. Department of Justice Literature Review (2011). Prepared by Elzbieta M. Gozdziak and Micah N. Bump, Data and Research on Human Trafficking: Bibliography of Research-Based Literature, Georgetown University Institute for the Study of International Migration, October 2008, 45.
United States, Trafficking Victims Protection Act (TVPA) of 2000 (Pub. L. 106-386).
U.S. v. Wilson, No. 10-60102-CR, 2010 WL 2609429 (S.D. Fla. 2010).
U.S. v. Wong, 334 F.3d 831, 838 (9th Cir. 2003).
U.S. Department of State, June 2009 - June 2012 Trafficking in Persons Reports.
World Internet Usage and Population Statistics (2012).



Deep Routing Simulation
Barry Irwin and Alan Herbert
Department of Computer Science, Rhodes University, Grahamstown, South Africa
b.irwin@ru.ac.za
g09h1151@campus.ru.ac.za
Abstract: This paper describes the implementation and testing of a flexible framework for near-real-time simulation of the routing of IPv4 network traffic within the deep Internet. The purpose of this is to provide an improved degree of realism in cyber defence exercises. The most noticeable aspect of the tool is the return of multiple hops when running link path discovery tools such as traceroute, although the correct handling of packets through the decrementing of IP TTL values is performed at each node. This allows for the better simulation of very large network topologies without the need for multiple discrete (or simulated) routing devices. Implemented for deployment on common open source Unix-like platforms, the multi-threaded software scales on modern multi-core CPU platforms, and provides the simulation of thousands of network paths and intermediate nodes. Setup is simplified by providing a loading process that can consume real traceroute data and optimise the internal network representations, an improvement over other simulation tools currently available. Additional functional aspects are the introduction of packet loss, corruption or delay at any point within the simulated paths. An instrumentation interface allows for the real-time monitoring, adjustment and reconfiguration of the network. This interface also provides a means for scripting automated packet insertion or configuration changes during the course of the simulation run. The initial implementation has been found to be stable and to offer adequate performance. The framework aims to provide a scalable and efficient means of providing this route simulation, and a number of future extensions are discussed, most notably the intended porting to an embedded platform, and possibilities for increasing throughput rates.
Keywords: routing, simulation, cyberdefense

1. Introduction
This paper describes the design and prototype implementation of a software component to be used to provide a means of simulating routing over the Internet. The intention of this project is that it would provide a means of increasing the realism possible when integrated as part of a larger cyber-security simulation. The term 'deep routing' is used to describe the kind of routing paths one typically sees when communicating with hosts over the Internet. Typically one traverses multiple nodes spanning disjoint networks in order to reach remote hosts. Many existing approaches to network simulation, and specifically those used for cyberdefense training, tend to have shallow routing paths, due to the complexity of creating realistic paths with physical (or even virtual) infrastructure. At this stage of software-only prototyping, no consideration has been given to implementation using hardware in order to accelerate performance. Specifically, the use of hardware platforms such as FPGA and GPU technologies has been considered as out of scope; however, these are addressed under the future work section. The intention was to produce a workable prototype that functioned at an acceptable level on commodity PC hardware platforms. Currently available routing simulators generally suffer from heavy resource requirements. This is largely due to the high memory and CPU requirements related to storing routing tables and to the actual packet routing within the simulation. Some of these simulators also suffer from low availability due to the costly investment in their software and hardware. A memory- and CPU-efficient solution to routing simulation, offering ease of configuration, low cost and high compatibility, is required. Routing simulators such as these are used in the education, research and development sectors. Using real networks in these cases can lead to unforeseen side effects, and it is therefore desirable to run in an isolated environment. The remainder of the paper is structured as follows. A brief history of routing simulation is provided in Section 2. This is followed by a discussion of the design of the routing simulator in Section 3, in which some of the trade-offs are considered. Section 4 discusses the role and application of the simulation software in a testbed. Initial performance testing of the prototype is reported in Section 5. The paper concludes in Section 6 with some considerations of the impact and application of such a simulator as part of a cyberdefence training platform, and some suggestions around further developments and extensions.




2. Related work
Before designing and implementing a routing simulator, one must first understand what approaches and research already exist in this field. This prevents searching for solutions that have already been found, and allows new steps to be taken in this field of research. Many successes and failures have been documented, tried and tested; taking these results and reworking the strong points, or redesigning the failures, can lead to a better routing simulator than previous attempts have yielded. One first needs to understand what packet routing algorithms currently exist. This is essential to designing a routing simulator suited to realistic routing replication: if one cannot mimic this basic property, then one cannot build up the more complex structures that rely on it. This section provides a brief overview of work considered during the development of this tool.

2.1 Approaches to network simulation
Leighton et al. (1994) state that packet routing is most simply described as the process of moving packets from a source host to a destination host through a network. A route may be a direct connection to the destination host, or it may involve a series of hops through routers, hubs and load balancing systems. Routing protocols are classified into three major classes: interior gateway routing through link state routing protocols, interior gateway routing through path vector or distance vector protocols, and exterior gateway routing. The purpose of these routing protocols is to prevent loops forming when routing occurs within a network. Baker (1995) states that they also deal with selecting the best routes around a network, based on a predefined cost on each link, to reduce latencies and link traffic. As more hosts are added to the network, routing becomes more complex and the chance of a single node being connected to only one host diminishes. This makes error testing increasingly difficult, as other systems that are not directly part of the system under test can be affected. Mahajan et al. (2003) show that increased load can lead to a loss of throughput, higher latency and potentially even a complete denial of service. Network simulation software was developed because of this, to handle the needs of systems in development that required live network testing but were not yet ready to be exposed to a live network. Network simulators allow for the creation of network traffic and complex routes without ever connecting to a real network. Hardware implementations that allow this kind of simulation have been developed by Apposite Technologies (2012) and Packet Storm Communications, Inc (2012). Network simulation is also required at a software level for education and research, as it is easier and safer to control a virtual environment than a live network. This also reduces the hardware and spatial costs required to create the simulated environment. NS-3 was designed as a software implementation with this in mind; it allows for scalability of the simulated network's size and also allows interfacing to real networks (Henderson et al. 2006). For current purposes, only IP datagrams have been considered, with the framework operating on Ethernet (802.3) networks.

2.2 Complications in design and implementation
Bandwidth, latency and packet loss are also factors in designing and implementing a network simulator. A network simulator should be able to handle multiple connections, all of which may span different distances and run on different media. This leads to different levels of packet loss, latency and bandwidth, which is particularly noticeable when comparing a wired to a wireless connection. Zhao and Govindan (2003) state that wireless connections typically have higher latencies and higher rates of packet loss than traditional wired connections, depending on distance and on what media lie between each wireless link. This means that the average statistical throughput of one environment can differ from that of another, adding to the complexity of designing and implementing wireless protocol simulation.




3. Simulator design and implementation
The primary design criterion for the simulator was to provide a system that was easy to configure, resource efficient, and flexible in terms of implementation and future modification. Specifically, the intention was to achieve acceptable performance using commodity networking and computational equipment. From a review of the approaches taken by other solutions, and of the problems that arise with each implementation of routing simulation, it was determined that a constraint experienced by many of them was heavy memory requirements. With this observation, the approach taken in this paper is to address the issues of memory usage, CPU usage and general throughput under different sized network simulations and network loads. Two different approaches were taken with regard to implementing the system. The first of these used a dynamic generation and processing system, where resources were allocated as they were required, trading off additional computation time for a lower memory footprint. The second approach used a static pre-allocation of memory at startup. This approach was taken to optimise throughput time for datagrams transiting the system at the expense of additional memory usage. A detailed discussion of the implementation and specific design decisions is contained in REDACTED (2012).
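To make the trade-off between the two strategies concrete, the following C sketch illustrates them side by side. This is not taken from the actual implementation; node_t, MAX_NODES and node_lookup_dynamic are illustrative names only.

#include <stdlib.h>

#define MAX_NODES 20000           /* illustrative capacity */

typedef struct {
    unsigned int ip;              /* node address */
    unsigned int delay_ms;        /* per-node processing delay */
} node_t;

/* Static approach: the whole table is allocated once at startup, so
 * the packet path never allocates memory, at the cost of a larger
 * resident footprint. */
static node_t static_nodes[MAX_NODES];

/* Dynamic approach: a node is only materialised the first time a
 * route references it, trading extra work on the packet path for a
 * smaller footprint. */
node_t *node_lookup_dynamic(node_t **table, unsigned int idx)
{
    if (table[idx] == NULL)
        table[idx] = calloc(1, sizeof(node_t));  /* populate lazily */
    return table[idx];
}

The measurements in Section 5 quantify this trade-off: the dynamic variant roughly doubles CPU load for the same throughput, while needing a fraction of the memory.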

3.1 Functionality
The route simulator was designed to fulfil the functional role of an IP router while at the same time implementing a number of discrete design features. The three key features implemented were Time to Live (TTL) processing of an IP datagram as it transited the system, the application of delay in the processing of each datagram at each node, and the possibility of a packet being dropped. Data enters the system through a collector which makes use of libpcap to grab appropriate traffic destined for the simulator 'off the wire'. Collected datagrams are then processed by the framework. A check is made to see if the packet should be dropped, and if not, processing continues. In many ways the simulator can be described as a series of stacked simple routers, where each routing node has an associated processing time, correctly decrements the TTL, and responds correctly if it expires, through the generation of an ICMP Time Exceeded (type 11) message (Braden 1989, Baker 1995). Datagrams for which the TTL has not expired are passed on to the next node in the chain. When a datagram reaches the end of the configured chain, it is injected back onto the network through the use of Libnet. Libnet is also used for the generation of the TTL-expired ICMP datagrams and their injection back onto the physical network. The process of passing the datagram to the next node also implements the delay. The delay value is a calculated average that takes on added realism through the addition of a small random value, in order to better simulate variance. This is done by taking into account the processing delay within the host system and then adding or removing a randomly generated number. Each node in this simulator contains the route to itself from a top-down point of view. In other words, as this is a core routing simulator, we are only concerned with a packet's delivery from the core of the network outwards, and not with its route to the core of a network. The delay to the core, however, should not be omitted, and is dealt with as a general hop with a delay that represents transferring a packet to the core. A more dynamic approach is discussed later in this section, and comparisons are made against the static routing system. As this simulation works on an IPv4 basis, each IP within the route is stored internally as a 32-bit integer representing the IP address, followed by a 32-bit integer representing the delay between hops; this comes to a total of 8 bytes to store a hop in the route to a node. Nodes also store more general data such as processing delay and the chance of packet loss. These are represented by 32-bit integer values as well, and stored in a global table, obviating the need to store a value multiple times in every route that uses a specific node. In addition to the above, a console was built for the software which allows an operator to directly reference a node at any point during execution and change the above-mentioned parameters, which makes for a more realistic simulator. Nodes can also be added or removed. These modifications can be scripted to allow for repeatable and varying tests.
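The per-node step can be sketched as follows in C. This is a minimal illustration of the behaviour described above, not the authors' API: hop_t, process_hop, drop_ppm and schedule_delay_ms are hypothetical names, and the jitter range is an assumption.

#include <stdint.h>
#include <stdlib.h>

/* Hypothetical helper: queues the datagram for this many milliseconds. */
extern void schedule_delay_ms(uint32_t ms);

/* One hop as described above: a 32-bit address plus a 32-bit delay,
 * i.e. 8 bytes per stored hop on the route. */
typedef struct {
    uint32_t ip;
    uint32_t delay_ms;
} hop_t;

typedef enum { HOP_FORWARD, HOP_DROPPED, HOP_TTL_EXPIRED } verdict_t;

/* Apply one simulated node to a raw IPv4 header. drop_ppm is the
 * node's configured packet-loss chance in parts per million. */
verdict_t process_hop(uint8_t *ip_header, const hop_t *hop, uint32_t drop_ppm)
{
    /* Per-node random packet loss. */
    if ((uint32_t)(rand() % 1000000) < drop_ppm)
        return HOP_DROPPED;

    /* The TTL is byte 8 of the IPv4 header; on expiry the caller emits
     * an ICMP Time Exceeded message via Libnet. A full router would
     * also recompute the header checksum after the decrement. */
    if (ip_header[8] <= 1)
        return HOP_TTL_EXPIRED;
    ip_header[8]--;

    /* Configured average delay plus a small random variance. */
    schedule_delay_ms(hop->delay_ms + (uint32_t)(rand() % 3));
    return HOP_FORWARD;
}

Chaining calls to such a function over the stored hop list reproduces the "stacked simple routers" behaviour: traceroute observes one Time Exceeded response per simulated node.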




3.2 Data collection
In order to present a realistic simulacrum of the Internet at large, as well as to test the scalability of the implementation, a suitable subset of data was collected for loading into the system. Once basic validation of the implementation had been performed, traceroute data was collected from the university to the top 10 000 hosts on the Internet as listed by Alexa (Alexa 2012). A sample of this output is shown in Figure 1. This approach is similar to that taken by Huffaker et al. (2002) and Siamwalla et al. (1998).

$ traceroute to www.google.com (74.125.233.17), 64 hops max, 52 byte packets
1 ict.gw.ru.ac.za (146.231.120.1) 1.135 ms 0.828 ms 1.151 ms
2 strubencore-maincampus-1.net.ru.ac.za (146.231.2.25) 0.670 ms 0.673 ms 0.513 ms
3 datacentres-1-strubencore.net.ru.ac.za (146.231.2.10) 1.137 ms 0.830 ms 0.831 ms
4 border-struben.net.ru.ac.za (146.231.0.2) 0.300 ms 0.204 ms 0.207 ms
5 tenet.net.ru.ac.za (192.42.99.1) 0.824 ms 0.671 ms 0.826 ms
6 155.232.5.4 (155.232.5.4) 3.944 ms 3.793 ms 3.792 ms
7 cpt1-t100-plz1-t100.net.tenet.ac.za (155.232.6.41) 13.772 ms 13.779 ms 13.775 ms
8 155.232.253.254 (155.232.253.254) 13.936 ms 13.777 ms 13.931 ms
9 64.233.174.105 (64.233.174.105) 14.086 ms 14.088 ms 14.089 ms
10 cpt01s01-in-f17.1e100.net (74.125.233.17) 14.397 ms 13.933 ms 13.932 ms

Figure 1: Sample traceroute to www.google.com

This collected data was then parsed into a suitable format for further processing by the simulation software. From this, various configuration files were built using differing numbers of route endpoints, and routing networks were constructed. Along with path information, the propagation delay values were recorded and used in later variants of the simulator. An example of a constructed routing network is shown in Figure 2. While not a true reflection of the dynamic way Internet routing works, this means of building up routing data was felt to provide a suitable simulacrum of Internet-scale routing (albeit a snapshot) while at the same time minimising the resources required.
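A loader for output of the form shown in Figure 1 might look like the following C sketch; it is illustrative only (the paper does not describe the actual parser) and assumes the conventional "hop host (a.b.c.d) t1 ms t2 ms t3 ms" line format.

#include <stdio.h>
#include <arpa/inet.h>

/* Parse one traceroute line into an address and the mean of the three
 * probe delays. Returns the hop number, or -1 for lines that do not
 * match (the command line itself, or "* * *" timeout rows). */
int parse_hop(const char *line, struct in_addr *addr, double *avg_ms)
{
    int hop;
    char host[256], ip[64];
    double t1, t2, t3;

    if (sscanf(line, "%d %255s (%63[^)]) %lf ms %lf ms %lf ms",
               &hop, host, ip, &t1, &t2, &t3) != 6)
        return -1;
    if (inet_aton(ip, addr) == 0)
        return -1;
    *avg_ms = (t1 + t2 + t3) / 3.0;   /* mean of the three probes */
    return hop;
}

Each successfully parsed (address, delay) pair then becomes one 8-byte hop entry in the route representation described in Section 3.1.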

Figure 2: Sample routing network

The creation of the node network was found to be fairly economical in terms of memory utilisation, with an initial rapid growth in node count as the network was first constructed, but from 1 000 endpoints onwards a near-linear growth in the number of nodes was observed. This growth curve can be seen in Figure 3. For values below 1 000, a more exponential curve was observed. This saving is due to the fact that the multiple endpoint paths tend, in most cases, to share a portion of the initial path. Considering the trace shown in Figure 1, the first 7 hops are common to most endpoints, given that they form the path towards one of the major switching nodes on the South African National Research and Education Network (SANReN). The actual memory utilisation observed to support growing numbers of endpoints is discussed in Section 5.1.
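One way to realise this prefix sharing is to merge each parsed trace into a tree, so that common leading hops are stored only once. The following is a sketch under that assumption; tnode_t, MAX_FANOUT and insert_path are illustrative names, not the actual internal representation (which is detailed in REDACTED (2012)).

#include <stdint.h>
#include <stdlib.h>

#define MAX_FANOUT 64   /* illustrative per-node branching limit */

typedef struct tnode {
    uint32_t ip;                      /* hop address */
    struct tnode *child[MAX_FANOUT];  /* next hops seen after this one */
    int nchild;
} tnode_t;

/* Merge one traced path into the tree, creating nodes only for hops
 * not already present; shared prefixes (e.g. hops 1-7 in Figure 1)
 * therefore cost nothing for each additional endpoint. */
tnode_t *insert_path(tnode_t *root, const uint32_t *path, int len)
{
    tnode_t *cur = root;
    for (int i = 0; i < len; i++) {
        tnode_t *next = NULL;
        for (int j = 0; j < cur->nchild; j++)       /* reuse if seen */
            if (cur->child[j]->ip == path[i]) { next = cur->child[j]; break; }
        if (next == NULL && cur->nchild < MAX_FANOUT) {
            next = calloc(1, sizeof(tnode_t));      /* new node only here */
            next->ip = path[i];
            cur->child[cur->nchild++] = next;
        }
        if (next == NULL) return NULL;              /* fanout exhausted */
        cur = next;
    }
    return cur;  /* endpoint node for this trace */
}

Because most traces diverge only in their final hops, the number of allocated nodes grows far more slowly than the number of endpoints, matching the near-linear curve in Figure 3.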




Figure 3: Routing node count expansion

3.3 Route depth
A secondary outcome generated from the processing of the collected traceroute data was the ability to ascertain path lengths across the top 10 000 hosts as determined by Alexa. The average hop count (nodes traversed to reach a given destination) was 16, with a mode of 15; a bias towards the relatively small number of hosts with longer paths can be observed. Table 1 contains a breakdown of the hop lengths.

Table 1: Hop counts for Alexa top 10 000 hosts

Hop Count     Value    %
<10           1969     19.70
>10 & ≤20     7491     74.98
>20            531      5.32
Total (N)     9991    100

Note: nine of the tested hosts were unreachable.

Considering the bulk of hosts between 10 and 20 hops distant from the probe system, and given that at the time of collection six hops were required to exit the university infrastructure onto the SANReN network, a fairly good degree of connectivity can be observed. Even removing these "access hops", a mean emulated depth of 9 nodes is achieved.

4. Intended use and operation
The intended scenario for the application of this simulation tool is as part of a network testbed environment. One variant of this would be a testbed set up specifically for cyberdefence challenges and training, but this is a specialisation of purpose of the generic implementation case. The purpose of this tool is to provide a more realistic experience through the emulation of 'deep' routing links, as one would see when traversing the Internet. Much of the 'core' of the Internet is routing infrastructure, with client and server hosts clustered around the edges. Other software tools are able to capably fill the functional requirements for host or endpoint network simulation, but there has been a gap in the ability to easily simulate the core of the Internet. Tools such as ISEAGE (Houghton, 2005) have been available, but their use appears to have been very limited in research outside of the originating institution. From a practical perspective, the routing simulation system would be connected between client systems and other target infrastructure, whether real or simulated. Hardware requirements are fairly modest, requiring a modern dual core CPU and at least two network interfaces. With the addition of suitable network switch infrastructure, the system can operate across a number of transport media types.




5. Performance testing
Abbreviated key performance testing results are given in this paper; an exhaustive set of results can be found in REDACTED (2012). Three areas are reported: memory utilisation, CPU utilisation, and throughput performance. A number of tests were run on this network simulator to determine how the system performs as a whole under different conditions, including varying simulation loads and sizes, to obtain accurate measurements of memory and CPU usage as well as other relevant results. Both the dynamic and the static approach are taken into consideration; testing of the dynamic approach, however, was only done where comparison is relevant, that being on CPU and memory requirements. The networks simulated in these tests are constructed from routing data generated by tracing real routes found in the Internet, as described in Section 3.2. Prior to commencement of the tests below, testing was performed to ensure correct operation, through the use of end-to-end path discovery using traceroute utilising TCP, UDP and ICMP datagrams, as well as connectivity tests. The test platform used was an Intel i7 2.8GHz CPU with 4 GB of RAM and Intel e100 NICs. The operating system used was an up-to-date Ubuntu 11.10 64-bit.

5.1 Memory usage
This routing simulator was observed to keep memory usage to a minimum, requiring less than seven megabytes of RAM to store approximately 19 000 reachable nodes with routing paths in the static approach. This figure is bested by the dynamic approach, which needs less than two megabytes of RAM to achieve the same task at the expense of extra CPU requirements. A graphical representation of this can be seen in Figures 4 and 5, with the node count for the maximal 10 000 hosts shown in Figure 3. In both memory allocation scenarios, growth was seen to be relatively linear.

Figure 4: Memory utilisation - endpoints

As of July 2012 there were an estimated 900 million reachable nodes in the Internet. Using this value, it is estimated that the above dynamic routing approach would require 92GB of memory, based on the estimate of two megabytes for every 19 000 nodes. This means that one could simulate an instance of every mappable node in the Internet on a computer with 128GB of RAM. Technology that can achieve this figure is available at the higher end of the commodity hardware spectrum, and at an achievable cost. An alternative solution may be to segment the simulated Internet into smaller chunks spread among a number of systems with lower specifications. This does, however, come at a trade-off in CPU requirements, as calculating routes in real time exacts a toll on the host system. It is further anticipated that the traffic generated at this level of simulation may well result in additional slowdowns.
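As a rough check of the 92GB figure, scaling the measured two-megabytes-per-19 000-nodes ratio up to the estimated Internet population gives:

\[
2\,\text{MB} \times \frac{900 \times 10^{6}}{19\,000} \approx 2\,\text{MB} \times 47\,368 \approx 94\,737\,\text{MB} \approx 92.5\,\text{GB}
\]

which is consistent with the estimate quoted above and comfortably within a 128GB host.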




Figure 5: Memory utilisation – nodes

5.2 CPU utilisation
CPU utilisation on the host system was found to scale in a linear manner. As expected, the dynamic memory allocation strategy was found to generate approximately twice the load of the static allocation method for a given throughput rate. The transmission of a million datagrams resulted in a load of nearly 70% using the dynamic method. This level is near the upper end of what can be expected before impacting negatively on the host operating system. Packet loss was found to be negligible during this testing, with the majority of dropped packets attributable to the built-in random drop mechanism. Testing was performed by transferring a varying amount of data over a bi-directional connection in a fixed time period. This was achieved using a custom client and server application, and the transmission results were verified using packet traces. The output of this testing is shown in Figure 6.

Figure 6: CPU utilisation

CPU utilisation and capacity should scale with the use of modern multi-core CPUs, due to the threaded nature of the application implementation.

5.3 Throughput
Results show that on a standard network no more than thirteen milliseconds of delay is introduced by the system, in the special case of no hop delay. This style of routing adds significantly more load to the CPU via the routing algorithm, as there are no breaks between route calculations; every packet entering the routing simulator contends for CPU resources. The thirteen milliseconds of added delay can also be compensated for by prioritising shorter routes, to minimise its effect, and by subsidising the delays of longer routes according to the CPU load; in the normal routing of a real-time network such as the Internet, variances may in fact be much higher. One can conclude that this routing system introduces no significant noise under normal routing conditions, which makes it all the more successful in terms of realistic emulation of a live environment. If the extra delay is a concern, it can be reduced by using the static routing approach, which pre-calculates routes and thus omits the dynamic processing delay. This, as mentioned, requires more memory but reduces CPU requirements significantly, reducing the delay introduced by routing to no more than six milliseconds and so allowing for more accurate routing. From a throughput point of view, this routing simulator underperforms. Although it surpasses the 20Mbps mark set in testing, the actual throughput of roughly 40Mbps that this routing simulator achieves does not match the 100Mbps Ethernet connections commonly found in office blocks and other common institutions. On the positive side, the routing simulator supports multiple connections with very little loss of throughput, maintaining an average of 40Mbps while servicing multiple connections. This speed is more than enough to simulate a localised portion of the Internet, such as a country or part of a continent, with multiple connections; however, it is not recommended that this software in its current form be used to simulate major continental interconnects or local area network connections if realistic throughput rates are expected. Horizontal scaling of simulators may be able to assist with this bottleneck to some extent. The other approach to mitigating the problem would be to upgrade the hardware, particularly the CPU and network interfaces of the host system. For higher speed simulations, PCIe network cards are required rather than traditional PCI cards, due to the bottleneck that can occur on the system bus, particularly where multiple NICs are used. In terms of servicing packets generated by hosts on the simulated network, the simulator can handle packet throughput up to the rated capacity of the network adapter. This must be weighed against the hardware limitations of the host system: if the CPU and memory cannot keep up with the rate at which packets are introduced by the network adapter, packets will go unserviced, resulting in discarded or missed traffic. With all the above results considered, this routing simulator is applicable to effectively simulating the routing of packets up to a speed of 40Mbps, while keeping the delays introduced by processing to a minimum. The routing simulator is also able to route packets through simulated networks on a large scale, as memory requirements are kept as low as possible as a network configuration grows. This allows it to simulate every node in the Internet within the limitations of hardware available to the public.

6. Conclusion
The routing simulator as implemented has been found to have met the original design criteria, as laid out in Section 3. The system developed provides routing support for IPv4 datagrams, and sends the correct error datagrams for TCP, UDP and ICMP traffic. Initial support for delay has also been implemented to add an element of increased realism to the simulation. Once these fundamental properties of a routing simulator were achieved, the further issues addressed were CPU and memory requirements. During design and implementation these were both kept as key points, and as such any unnecessary waste of CPU or memory was tracked and analysed so as to keep both requirements to a minimum. Overall, the authors feel that this component can be used to enhance the realism of network simulation, particularly for those intending to simulate traffic at large scale. That said, a number of enhancements have been considered in order to further increase the functionality offered.

6.1 Future work
During the implementation of this project, a number of areas for future development and use were identified. As such, the following areas are suggested:




Introduction of additional hosts within the routing simulator. These hosts can be used as additional sources of traffic, particularly the type of generic backscatter traffic often observed on Internet connections. This would be done as a specialisation of the generic routing nodes implemented.

Extending protocol capabilities by introducing IPv6 (Internet Protocol version 6) and allowing for support of all already existing protocols using IPv6 routing. This will further secure this software's future use and allow for further testing and experimentation.

Extending or re-implementing the system in order to explore the viability of making use of specialised hardware platforms such as FPGA and GPU co-processors. Such an implementation will most likely result in an improvement in achievable throughput and in the volume of packets routable within the simulator.

Introduction of 'wormhole' nodes and flow control mechanisms. This will allow for more realistic routing and added functionality in the routing simulator. These nodes would allow specified traffic to ingress the routing network at any point rather than at the edges. The introduction of such a feature will allow further packet control within the routing simulator. This would be similar to the functionality suggested by Karstens (2007).

Implementation of support for BGP and similar routing protocols would allow for an alternate means of building up routing information, as well as easier integration with existing routing infrastructure. This will also improve support for dynamic routing but may come at a cost of further memory requirements.

Scalability could be increased by making use of additional dynamic routing support, and wormhole nodes, which could be used to allow for the linking of multiple simulation instances.

A further type of specialised node could allow for packet sampling, and recording onto an alternate physical interface or even via export to pcap.

References
Apposite Technologies (2012). WAN emulation made easy. URL: http://www.apposite-tech.com/index.html. Accessed 18 September 2012.
Baker, F. (1995). Requirements for IP Version 4 Routers, RFC 1812, IETF.
Braden, R. (1989). Requirements for Internet Hosts - Communication Layers, RFC 1122, IETF.
Henderson, T. R., Roy, S., Floyd, S., & Riley, G. F. (2006). ns-3 project goals. In Proceedings from the 2006 workshop on ns-2: the IP network simulator (p. 13). ACM.
Houghton, D. C. (2005). Design and development of Network Traffic Simulator.
Huffaker, B., Plummer, D., Moore, D., & Claffy, K. C. (2002). Topology discovery by active probing. In Applications and the Internet (SAINT) Workshops, 2002. Proceedings. 2002 Symposium on (pp. 90-96). IEEE.
Karstens, N. L. (2007). DeepFreeze: a management interface for ISEAGE (Doctoral dissertation, Iowa State University).
Leighton, F. T., Maggs, B. M., & Rao, S. B. (1994). Packet routing and job-shop scheduling in O(congestion + dilation) steps. Combinatorica, 14(2), 167-186.
Mahajan, R., Spring, N., Wetherall, D., & Anderson, T. (2003, October). User-level internet path diagnosis. In ACM SIGOPS Operating Systems Review (Vol. 37, No. 5, pp. 106-119). ACM.
Packet Storm Communications, Inc. (2012). Network emulation with data rates up to 10 Gbps. URL: http://packetstorm.com/psc/psc.nsf/site/index. Accessed 18 September 2012.
REDACTED (2012). A framework for Deep Routing simulation. Honours report. XXXX University.
Siamwalla, R., Sharma, R., & Keshav, S. (1998). Discovering internet topology. Unpublished manuscript. http://www.cs.cornell.edu/skeshav/papers/discovery.pdf
Zhao, J., & Govindan, R. (2003, November). Understanding packet delivery performance in dense wireless sensor networks. In Proceedings of the 1st international conference on Embedded networked sensor systems (pp. 1-13). ACM.



Development of a South African Cybersecurity Policy Implementation Framework
Joey Jansen van Vuuren1, Louise Leenen1, Jackie Phahlamohlaka1 and Jannie Zaaiman2
1 Defence, Peace, Safety and Security, CSIR, Pretoria, South Africa
2 University of Venda, Limpopo, South Africa
jjvvuuren@csir.co.za
jphahlamohlaka@csir.co.za
lleenen@csir.co.za
jannie.zaaiman@univen.ac.za
Abstract: National governments have the responsibility to provide, regulate and maintain national security, which includes cybersecurity, for their citizens. Although South Africa has recently published its first draft cybersecurity policy, the implementation of the policy is still in its very early stages. In this paper, the authors propose and describe a possible cybersecurity implementation framework for South Africa. This implementation framework is based on a previous analysis of structures in other countries, a cybersecurity awareness toolkit, guidelines for cybersecurity strategies in the literature, and an implementation framework proposed for Jordan.
Keywords: cybersecurity, national security, cybersecurity toolkit, policy framework, policy implementation

1. Introduction
The development, implementation and review of national cybersecurity policies have become tasks of utmost importance for all governments. The urgent need to address national cybersecurity protection is driven by the growing cybersecurity challenges and threats, as well as the dependence on technology around the globe. Any cybersecurity policy should include strategies and standards to enable and sustain cybersecurity. The United States of America (USA) approaches this responsibility by employing a broad view; it encompasses the full range of threat reduction, vulnerability reduction, deterrence, international engagement, incident response, resiliency, and recovery. This approach is supported by strong measures: the USA has created a Cyber Command (CYBERCOM) under the Strategic Command, led by the head of the National Security Agency (NSA), which reports directly to the President (US Cyber Command Public Affairs, 2011). In developing nations the focus has been on increasing connectivity whilst largely neglecting the associated security risks. These countries will have to develop and maintain policies, strategies and structures to secure the networks that support their national security and economies. Despite a low Internet penetration rate, South Africa ranks third in the world, after the USA and the United Kingdom (UK), in terms of the number of cyber-attacks encountered (Amit, 2011). The draft version of the South African Cybersecurity Policy Framework was approved by government in March 2012 (South African Government Information, 2012). Whilst various structures have been established to deal with cybersecurity in South Africa, they are inadequate, and implementation of the draft policy is still in its very early stages. Jansen van Vuuren et al. (2012) investigated the different government organisational structures created for the control of national cybersecurity in selected countries of the world. The main contribution of that work was a proposed structure for South Africa, taking into account the challenges of legislation and control of cybersecurity in developing countries. In this paper, a cybersecurity implementation framework for South Africa is described. This framework is based on previous work by Jansen van Vuuren et al. (2012), an implementation framework proposed by Otoom & Atoum (2012), guidelines for the implementation of national cybersecurity strategies by Ghernouti-Helie (2010), and a cybersecurity awareness toolkit (Phahlamohlaka et al., 2011). Section 2 contains an overview of the results on which the proposed cybersecurity policy implementation framework is based, and in Section 3 the authors introduce the proposed framework. The paper is concluded in Section 4.




2. Background
An efficient cybersecurity policy relies on a holistic approach; there is a need for a partnership between business, government and civil society (Ghernouti-Helie, 2010; Phahlamohlaka et al., 2011). Phahlamohlaka et al. argue that a cybersecurity awareness programme should incorporate social dimensions and not just rely on fully technical solutions. This team of researchers proposed a Cyber Security Awareness Toolkit (CyberSAT) with national security in mind, which includes economic, political, military, psychological and informational dimensions. Details of the CyberSAT are given in Section 3. Jansen van Vuuren et al. (2012) proposed a cybersecurity governance structure and an implementation model based on CyberSAT and on organisational structures in other countries. Otoom & Atoum (2012) proposed a cybersecurity implementation framework for Jordan; this framework is applied here in order to develop a similar framework for South Africa. Ghernouti-Helie (2010) argues that an effective approach and culture for a national cybersecurity strategy includes political will and national leadership to ensure that the plan receives governmental support; a justice system and police service with a legal framework that supports the police in combatting cyber-crime at national and international level; a cybersecurity capacity that includes organisational structures and human capacity as well as the use of technical and procedural cybersecurity solutions; and a cybersecurity culture and awareness training for citizens. The National Cybersecurity Policy Implementation Framework (NCPIF) of Otoom & Atoum (2012) uses a strategic planning process consisting of strategic formulation, strategic implementation and strategic evaluation (Figure 1). The elements are:

A detailed analysis of the policy strategy in manageable, understandable parts. This analysis must be done by different people from those who write the policy. The different stakeholders must be identified, and a reconciled analysis of the necessary implementation needs must be made.

A management structure responsible for the implementation of the strategy. The responsibilities include the breaking down of long-term objectives into annual objectives and the development of organisational structures to fulfil the strategy. Resource allocation should be done and change management plans developed.

Strategic moves designed to achieve the different strategic goals. These should consist of a set of coherent implementation programmes identifying exactly what has to be done, with direct actions to achieve their objectives.

A set of applicable strategic controls should be deployed. Strategic controls give decision makers mechanisms to ensure that innovation, efficiency, and quality are achieved. These controls should be adaptable to the culture, and they should evolve.

3. Proposed implementation framework
The major goal of the NCPIF of Otoom & Atoum (2012) is to facilitate the implementation of a national cybersecurity policy framework (NCPF). The NCPIF proposes a methodology to analyse the NCPF and break it down into four well-defined components: an analysis, a management structure, strategic moves and strategic controls. Each of these components is applied to the South African cybersecurity environment in the subsections that follow. The analysis results will be used to guide the design of governance structures for cybersecurity in South Africa and to determine the strategic moves that are necessary to achieve the national objectives.

3.1 Analysis
The analysis of the national cybersecurity policy framework of South Africa was done using the description of Jablonsky (1997) for national security. Jablonsky defines national security in terms of natural and social determinants of national power. The cybersecurity toolkit CyberSAT (Phahlamohlaka et al., 2011), developed with the South African environment in mind, is based on the policy elements as described in the Draft Cybersecurity Policy of South Africa (SA Government Gazette, 2010). The CyberSAT is adapted to the Extended Cyber Security Toolkit (XCyberST) to include stakeholders. In addition, we adjusted the toolkit by splitting the 'Capacity building, culture of cybersecurity' element into two separate policy elements, 'Research and capacity building' and 'Culture promotion', and made some minor changes to the table. In the authors' opinion, research and capacity building address aspects other than the creation of a cybersecurity culture.




Figure 1: Proposed implementation framework (Otoom & Atoum, 2012)

The XCyberST for national security is presented in Table 1. The first column contains the elements of the policy, while the second column represents the philosophical position of each element. The third column is divided into the five social determinants of national power. While the toolkit is based on the policy elements from the South African environment, the determinants of national power are generic, and thus the toolkit could be adopted for cybersecurity implementations by other countries when national security considerations are pertinent. The major stakeholders are presented in the last column: the State Security Agency (SSA), the Justice, Crime Prevention and Security Cluster (JCPS), the Department of Communications (DOC), the Department of Justice and Constitutional Development (DOJ), the Department of Science and Technology (DST), the Department of Education (DOE), the Independent Communications Authority of South Africa (ICASA), the South African Police Service (SAPS), the Department of Defence (DOD), the South African Bureau of Standards (SABS), the Council for Scientific and Industrial Research (CSIR), the South African Banking Risk Information Centre (SABRIC), and Internet Service Providers (ISPs). The table consists of:

Structures in support of cybersecurity: Cybersecurity breaches will happen regardless of the structures established. With this policy element and the accompanying philosophical position, one could develop toolsets appropriate for each social determinant of national power. For instance, a military Computer Security Incident Response Team (CSIRT) could be established as a structure in support of cybersecurity in the military as a social determinant of national power.

Reduction of cybersecurity threats and vulnerabilities: Threats and vulnerabilities will always be there; reduction thereof is a key goal. Monitoring tools and techniques aimed at reducing the threats and vulnerabilities could be developed across the five dimensions.

Cooperation and coordination between government and private sector: Partnerships and cooperation across all sectors and society are critical. Guided once more by the five social determinants, toolsets in support of public-private partnerships could be developed. Knowing whom to call when an incident occurs is critical, irrespective of where the capability might be housed within the state.

International cooperation on cybersecurity: No country can do it alone. Tools to support international cooperation across borders could be developed, enabling leaders to develop relationships of trust.

Research and capacity building: Focus internally and on the basics. Insider threats can outweigh external threats. Develop research, recruitment and retention strategies to build expertise.




Promote culture of cybersecurity: Focus internally on research on threats and on education of the public. Promote a national programme so that the general population across all sectors secures its own part of cyberspace.

Legal framework and compliance with technical and operational cybersecurity standards: Actively participate in the creation of international standards. Defining the standard of conduct in cyberspace and legal adherence are critical for a safe society.

Table 1: The extended cyber security toolkit for national security (XCyberST). For each policy element, the philosophical position is given first, followed by guidance under the five social determinants of national power and the major stakeholders.

Policy element: Structures in support of cybersecurity
Philosophical position: Cybersecurity breaches will happen regardless of the structures established.
- Economic: Establish commercial and financial response structures, e.g. sector CSIRTs.
- Political: Establish a national security level institutional arrangement on cybersecurity.
- Military: Establish a military CSIRT.
- Psychological: Build confidence in the response capacity of established institutions.
- Informational: Establish national CSIRTs.
Stakeholders: SSA, DOC, DOD, SABRIC, ISPs

Policy element: Reduction of cybersecurity threats and vulnerabilities
Philosophical position: Threats and vulnerabilities will always be there; reduction thereof is a key goal.
- Economic: Develop various economic breach monitoring tools and techniques.
- Political: Send regular political signals that cybersecurity is a priority.
- Military: Develop monitoring tools and techniques on an ongoing basis.
- Psychological: Effectively communicate the benefits of paying attention to threats and vulnerabilities.
- Informational: Let the public trust in the security of communication channels and systems; effectively communicate that cybersecurity is a priority.
Stakeholders: DOC, SSA, SABRIC, ISPs, DOD

Policy element: Cooperation and coordination between government and private sector
Philosophical position: Partnerships and cooperation across all sectors and society are critical.
- Economic: Build business confidence that continued ICT use is a competitive advantage rather than a liability.
- Political: Build public confidence that the political leadership will take care of their personal information.
- Military: Create reasonable civil-military interactions within the broader government framework.
- Psychological: Spell out clear lines of accountability and expected behaviours that could contribute to trust and confidence building.
- Informational: Promote information sharing.
Stakeholders: DOC, DOD

Policy element: International cooperation on cybersecurity
Philosophical position: No country can do it alone.
- Economic: International partnerships and shared global spaces are necessary tools.
- Political: Leaders need to develop relationships that extend across borders.
- Military: Define standards of conduct in cyberspace.
- Psychological: Establish reasonable precautions in relation to balancing secrecy and information sharing.
Stakeholders: SSA, DOC

Policy element: Research and capacity building
Philosophical position: Focus internally on research on threats and education of the public.
- Economic: Focus on public education and research initiatives for the prevention of individuals becoming victims of cybercrime.
- Political: Government-wide support for cybersecurity awareness initiatives and skills development to win the cybersecurity battle.
- Military: Research and understanding of cybersecurity threats, and the setting up of protection systems against attacks.
- Psychological: Research and understanding of cybersecurity threats enhance better cyber behaviour of individual users.
- Informational: Focus on the public education and research agenda.
Stakeholders: DST, DOE, DOC, CSIR

Policy element: Promote culture of cybersecurity
Philosophical position: Focus internally on the creation of awareness of the risks in cyberspace.
- Economic: Focus on public awareness.
- Political: Articulate coordinated national information and communications infrastructure objectives.
- Military: Protection of citizens and enhancement of ethical behaviour is an important part of the cybersecurity battle.
- Psychological: It is the behaviour of individual users that is the single most important part of the cybersecurity battle.
- Informational: Focus on public awareness of cyber risks and solutions.
Stakeholders: DOE, DOC, ICASA

Policy element: Legal framework and compliance with technical and operational cybersecurity standards
Philosophical position: An effectual legal system and active participation in the creation of international standards.
- Economic: Define standards of conduct in cyberspace.
- Political: Legal adherence of citizens to cyber policy guidelines and standards of conduct in cyberspace.
- Military: Articulate coordinated national information and communications infrastructure objectives, standards and legal framework.
- Psychological: Protection of citizens through effectual legal framework adherence and the defining of standards of conduct in cyberspace.
- Informational: Articulate coordinated national information and communications infrastructure objectives.
Stakeholders: SABS, DOJ, DOC, SAPS

It should be noted that the toolkit is a possible operational guideline and is not meant to be exhaustive. Its entries could be varied, expanded on and applied at different government levels and institutional arrangements. Further analysis of the stakeholders, their relationships and responsibilities is currently being done by the authors. Workshops are planned with some of the major stakeholders mentioned in Table 1. During the workshops, general morphological analysis (GMA) will be used to extract information and views from these stakeholders regarding the main variables and relationships that need to be addressed in the implementation of the policy. GMA is a method for identifying and investigating the total set of possible relationships or "configurations" contained in a given problem complex. This is accomplished by going through a number of iterative phases which represent cycles of analysis and synthesis (Ritchie, 1997).
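For readers unfamiliar with the method, the following is a minimal sketch (in Python, with entirely hypothetical dimensions, values and constraints, not the actual workshop variables) of the core GMA computation: enumerating every configuration of a small morphological field and filtering it through pairwise cross-consistency constraints.

```python
from itertools import product

# Hypothetical morphological field for cybersecurity policy implementation.
# Each dimension lists its possible values; the real variables would come
# out of the stakeholder workshops.
field = {
    "lead_agency": ["SSA", "DOC", "joint committee"],
    "csirt_model": ["single national CSIRT", "sector CSIRTs", "hybrid"],
    "funding": ["state", "public-private", "levy"],
}

# Pairwise exclusions found during cross-consistency assessment
# (illustrative only).
inconsistent = {
    ("SSA", "levy"),
    ("single national CSIRT", "public-private"),
}

def consistent(config):
    """A configuration survives if no pair of its values is excluded."""
    values = list(config.values())
    return all(
        (a, b) not in inconsistent and (b, a) not in inconsistent
        for i, a in enumerate(values)
        for b in values[i + 1:]
    )

# Enumerate the total set of configurations, then reduce it.
dims = list(field)
all_configs = [dict(zip(dims, combo)) for combo in product(*field.values())]
solutions = [c for c in all_configs if consistent(c)]
print(f"{len(solutions)} of {len(all_configs)} configurations are consistent")
```

In a real GMA exercise the dimensions, their values and the exclusions would be elicited from the stakeholders and refined over several analysis-synthesis cycles.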

3.2 Management structures for implementation

3.2.1 Objectives
The key elements or objectives that must be covered in a cybersecurity policy differ between countries. The USA policy review team suggests that any complete national cyber policy must at least consider relevant government structures, a supporting architecture, norms of behaviour, and capacity building (Phahlamohlaka et al., 2011). Governmental structures for policy development and the coordination of cyber operations should address the responsibilities, and specifically the likely overlap of responsibilities, of the various stakeholders in the cybersecurity domain. A supporting architecture refers to the communications systems and infrastructures that are



required for cybersecurity operations and includes aspects such as performance, cost, security characteristics, strategic planning, research and development, and risk management. Norms of behaviour include the legislation, regulations and international treaties required to circumscribe and define standards of behaviour in cyberspace. Capacity building refers to the provision of resources, activities and capabilities required to become a more cyber-competent nation. It typically includes resource requirements, research and development, public education and awareness, international partnerships, and all other activities that allow the government to interface with its citizenry and workforce to build the digital information and communication infrastructure of the future.

The Canadian policy emphasises strategies, responsibilities, the importance of individuals, leadership and a global approach (Phahlamohlaka et al., 2011). Effective national strategies should encourage cooperation and information sharing across different agencies. The roles and responsibilities of different agencies should be clarified so that accountability and appropriate behaviour exist, which leads to trust. Although government and business have a strong role to play in advancing cyber awareness and literacy, the role of the individual should not be underestimated. Organisational leadership and international partnerships are considered to be vital aspects of the Canadian cybersecurity policy.

It is clear that nations and governments are responding to the cybersecurity challenges by setting up institutional coordination, control and response mechanisms. Linked to the institutional arrangements are also research, development and innovation plans. The elements of South Africa's draft cybersecurity policy compare favourably with those of the broader international community. The key strategic objectives of the NCPF of South Africa (as identified in the analysis in Table 1) are to:

Facilitate the establishment of relevant structures in support of cybersecurity;

Ensure the reduction of cybersecurity threats and vulnerabilities;

Foster cooperation and coordination between government and private sector;

Promote and strengthen international cooperation on cybersecurity;

Build capacity and promote a culture of cybersecurity; and

Promote compliance with appropriate technical and operational cybersecurity standards.

Policy implementation in South Africa will be particularly difficult due to the number of stakeholders, the recent significant increase in broadband roll-out, and the fact that the population is ill prepared for this situation.

3.2.2 Organisational structures

Considerations when setting up structures
Structures should exist at the national level to sustain an effective cybersecurity solution for all. These include adequate organisational structures which should take local cultures, particular economic contexts, country size, ICT infrastructure development, and users into consideration. National as well as international needs must also be considered.

A snapshot of international approaches
From the Estonian experience, the lesson is that the only way a country learns to move forward on cybersecurity-related issues is by going through a painful growing process of suffering from, and dealing with, online attacks. Estonia's approach was to establish the Cooperative Cyber Defence Centre of Excellence (CCD COE), a NATO-approved think-tank whose mission is essentially to formulate new strategies for understanding, and preventing, online attacks (Czosseck et al., 2011; Tiirmaa-Klaar, 2010). In South Korea, a cyber-attack resulted in the Ministry of Defence launching a Cyber Warfare Command Centre (mimicking the US defensive steps), designed to fight against possible hacking attacks. Alongside a cyber-police force, the centre is charged with protecting government organisations and economic entities from hacker attacks. Despite the establishment of this Cyber Warfare Command Centre, there were repeated attacks in March 2011 (Jansen van Vuuren et al., 2010; Deloitte & Touche, 2010).



The lesson from Iran is that Stuxnet-type attacks are not over yet, while the key message from Georgia is that attacks could be disguised as civilian while they are military, with some hostile government's knowledge. In China there is a focus on industrial espionage with the goal of stealing IP and designs, command signal data, and information of a financial and commercial nature (Phahlamohlaka et al., 2011). The UK approach was the establishment of the Cybersecurity Operations Centre, with the motivation that future battles will be fought not just on the ground, but in cyberspace (Espiner, 2010). The USA created a Cyber Command (CYBERCOM) under Strategic Command, led by the head of the National Security Agency (NSA). One of the reasons stated for its creation was that the current capabilities to operate in cyberspace had outpaced the development of policy, law and precedent to guide and control these operations. CYBERCOM was thus created in October 2009 around this mission (Deloitte & Touche, 2010). The Australian government initially followed a hands-off approach to cybersecurity and regarded it largely as a private sector responsibility. However, due to security challenges, the government changed its approach in 2009 and created a number of bodies with new capabilities and responsibilities. Their approach is still problematic because of the large number of separate agencies that are involved (Warren & Leitch, 2011). The Dutch government follows a joint approach with the establishment of a National Cyber Security Centre which includes state institutions, the business community, and knowledge and research institutions. The existing GOVCERT also forms part of this centre (ENISA, 2011); as of 1 January 2012, GOVCERT.NL evolved into the National Cyber Security Centre (Ministerie van Veiligheid en Justitie, 2012).

It is clear that nations and governments are responding to the cybersecurity challenges by setting up institutional coordination, control and response mechanisms. Linked to the institutional arrangements are also research, development and innovation plans. These national structures responsible for cybersecurity must also lead the capability-building processes that will ensure collaboration at the international level to achieve the goals identified by global cybersecurity policies. As seen from the literature, it is important that cybersecurity be controlled at a very high level, as is the case in the USA, Estonia, Korea and other countries.

Proposed structures for South Africa
The South African Cybersecurity Policy Framework (SACPF) was approved by government in March 2012. The policy framework identifies specific areas of responsibility for a number of government departments, and the State Security Agency is the custodian for the development and implementation of cybersecurity measures (South African Government Information, 2012).

Figure 2: South African cybersecurity structure



The South African structure (Figure 2), as described in the SACPF, provides for a national body (the Cybersecurity Response Committee) reporting to the Department of State Security. The Cybersecurity Hub will be responsible for the private sector and civil society. The Electronic Communications Security CSIRT (ECS-CSIRT) will be the government CSIRT. There is also a separation between the civilian and the governmental networks, which include state and military security networks (Dlomo, 2012). A notable difference between this structure and those of the USA and Estonia is that both government and military networks will be controlled by the State Security Agency in South Africa. The State Security Agency is the department of the South African government with overall responsibility for civilian intelligence operations. It was created in 2009 to incorporate the formerly separate National Intelligence Agency, South African Secret Service, South African National Academy of Intelligence, National Communications Centre and COMSEC (South Africa). Political responsibility for the agency lies with the Minister of State Security. Government and civilian systems in the USA and Estonia are not controlled by intelligence agencies (Klimburg & Tirmaa-Klaar, 2011). It should be noted that the JCPS Cyber Response Committee of South Africa reports to the Minister of State Security.

During the establishment of the Cyber Command in the USA, the private sector questioned the fact that the military would play such an important role in the process. The concerns raised in the USA were whether the NSA would overshadow civilian cyber defence efforts and what assistance for civilian cyber defence there would be. Some concerns in the US were laid to rest with the assurance that the Department of Homeland Security (DHS) would be responsible for federal civilian networks, including the dot-gov domain, and that CYBERCOM would only assist the DHS in the case of cyber hostilities as a response to an executive order (Burghardt, 2012). Similar concerns about the privacy of data may still be raised in South Africa because State Security controls the Cyber Centre and therefore, indirectly, also the civilian networks. There will be close collaboration between the Cyber Centre, the Cybersecurity Hub and the ECS-CSIRT. The Cyber Centre will be responsible for operational coordination of cybersecurity incident response activities regarding national intelligence, national defence and cybercrime (Dlomo, 2012).

3.2.3 Resources
Ghernouti-Hélie (2010) argues that the building of capacity should be based upon an understanding of the roles of cybersecurity actors, including their motivation, their correlation, their tools, their modes of action, and the generic security functions relevant to any security action. These considerations are the underlying principles to be applied for organisational structures to be effective, and they determine the kind of tools, knowledge and procedures necessary to contribute to solving cybersecurity problems. Efficient partnerships between the public and private sectors, linked to cybersecurity organisational structures dedicated to supporting operational proactive and reactive activities, should exist. The objectives of the Cybersecurity Hub make provision to achieve this. These organisational structures should also be linked to cybersecurity management at a national level; this will be achieved by the Cybersecurity Response Committee.

3.3 Strategic moves
The five elements identified as part of a successful development of a national cybersecurity strategy (Ghernouti-Hélie, 2010) can be used to identify the strategic moves.

3.3.1 Political will
National leadership, in both an individual and an organisational role, is imperative to ensure effective cybersecurity policies. Although the South African national cybersecurity policy framework has been approved by the cabinet, partial implementation only started in May 2012 (Dlomo, 2012). The policy aims to ensure that government organisations and the private sector cooperate to secure South African networks (Guy, 2011), and it does address some levels of compatibility at an international level.

3.3.2 Adapted organisational structures
A proposal for a South African cybersecurity structure has been presented in Figure 2. Organisational structures should exist to sustain the deployment of effective cybersecurity solutions for individuals, organisations and governmental agencies. A national CSIRT can be considered the most prominent organisational structure in joining communication networks and information systems with economic and social development structures. Previous research has identified nine steps to ensure the successful adaptation of a CSIRT as an



organisational structure. Of these steps, clarifying the mandate and policy-related issues is the first and most crucial (Grobler & Bryk, 2010).

3.3.3 Identifying accurate proactive and reactive measures
Both individuals and groups are largely dependent on data. This dependence relates not only to the physical data, but also to the relation of this data to specific infrastructures. Ghernouti-Hélie (2010) proposed that cybersecurity actors can be classified into specific roles: the protector, the protected, or the criminal. With the strong digital component of everyday actions, the multiplicity and automation of cybersecurity is becoming more prominent as a way to maximise outputs and minimise human error. Accordingly, it is important that these roles can take on proactive or reactive measures.

3.3.4 Reducing criminal opportunities
Due to the international scope of the Internet and the wide reach of technological usage, cybersecurity intersects largely with the application and implementation of international legislation. Regardless, the foundation of an adequate security strategy is twofold: raise the level of risk taken by the criminal, and raise the level of difficulty faced by the criminal. In all instances, legislative and regulatory measures should assist in raising the level of risk perceived by a criminal and decreasing the favourable context for perpetrating an illegal action (Ghernouti-Hélie, 2010).

3.3.5 Education and awareness
Organisational structures should encourage, lead or coordinate continuing education for professionals in the legal, economic and political fields. In addition, the realisation of a global cybersecurity awareness culture will contribute to achieving part of the goals of a national cybersecurity strategy (Ghernouti-Hélie, 2010). In South Africa, there are a number of cybersecurity awareness programmes aimed at educating different user groups in different geographical parts of the country (Grobler et al., 2011).

3.4 Strategic controls
Otoom & Atoum (2012) stress that applicable controls are essential to the success of an implementation framework: they enable decision makers to make necessary adjustments and improvements during the implementation process. Atoum (2012) elaborates on the strategic controls that are required: holistic performance control, quality controls, risk control, human resource incentives, evaluation and correctness, vigilance, and global schedule monitoring. This is one aspect of our implementation framework that requires further thought and research, and it will be addressed in future work.

4. Conclusions
This paper describes a cybersecurity policy implementation framework for South Africa which is based on previous work of the authors as well as guidelines and other frameworks in the literature. An Extended Cyber Security Toolkit (XCyberST) and an organisational structure are presented with the intention that they could be used as a stepping stone for the implementation of South Africa's proposed cybersecurity policy. Because South Africa does not yet have a consolidated national security policy and strategy, a cybersecurity awareness-raising campaign designed in accordance with the proposed toolkit could go a long way towards preparing the country to respond to the cybersecurity challenges it is currently facing.

References
Amit, I. I. (2011). Information Security Intelligence Report: A Recap of 2010 and Prediction for 2011. Retrieved 5 February 2011 from www.Security-Art.com
Atoum, I.A.F. (2012). A Holistic Cyber Security Strategy Implementation Framework. Master's thesis, University of Philadelphia, Philadelphia, USA.
Burghardt, T. (2012). The Launching of USA Cyber Command (CYBERCOM), Offensive Operations in Cyberspace. Retrieved 24 February 2012 from http://www.globalresearch.ca/index.php?context=va&aid=14186
Constitution of the Republic of South Africa. (1996). Chapter 11, Principle 198.
Czosseck, C., Ottis, R., & Talihärm, A-M. (2011). Estonia after the 2007 Cyber Attacks: Legal, Strategic, and Organisational Changes in Cyber Security. International Journal of Cyber Warfare and Terrorism, Vol. 1, No. 1, pp. 24-34.
Deloitte & Touche. (2010). National Cybersecurity Strategies. Paper presented at the GOVCERT.NL symposium.



Dlomo, D.T. (2012). Cyber Security Policy Discussions and ICT Security Approach in the Republic. Presentation at the Stakeholders Workshop on 2 November 2012 at the CSIR International Convention Centre, Pretoria, South Africa. Organised by the Department of Communications.
ENISA. (2011). European Network and Information Security Agency (ENISA): Dutch Cyber Security Strategy. Retrieved 2 December 2012 from http://www.enisa.europa.eu/media/news-items/cyber-security-strategies-of-de-nl-presented
Espiner, T. (2010). UK's Cyberdefence Centre Gets Later Start Date. Retrieved 21 February 2011 from http://www.zdnet.co.uk/news/security-threats/2010/03/10/uks-cyberdefence-centre-gets-later-start-date-40082405/
Ghernouti-Hélie, S. (2010). A National Strategy for an Effective Cybersecurity Approach and Culture. The 2010 International Conference on Availability, Reliability and Security.
Grobler, M. & Bryk, H. (2010). Common Challenges Faced During the Establishment of a CSIRT. Presented at the ISSA Conference 2010, Sandton, South Africa.
Grobler, M., Flowerday, S., von Solms, R. & Venter, H. (2011). Cyber Awareness Initiatives in South Africa: A National Perspective. Proceedings of the First IFIP TC9/TC11 Southern African Cyber Security Awareness Workshop (SACSAW). Gaborone, Botswana.
Guy. (2011). Cyber Security Policy Will go Before Cabinet for Approval This Year. Accessed 5 March 2011 from http://www.defenceweb.co.za/index.php?option=com_content&view=article&id=13783:cyber-security-policy-will-go-before-cabinet-for-approval-thisyear&catid=48:Information%20&%20Communication%20Technologies&Itemid=109
Jablonsky, D. (1997). National Power. Parameters, Vol. 27, pp. 34-54.
Jansen van Vuuren, J., Phahlamohlaka, J., & Brazzoli, M. (2010). The Impact of the Increase in Broadband Access on National Security and the Average Citizen. Journal of Information Warfare, 5, 171-181.
Jansen van Vuuren, J., Phahlamohlaka, J., & Leenen, L. (2012). Governance of Cybersecurity in South Africa. Proceedings of the 11th European Conference on Information Warfare and Security. Laval, France.
Klimburg, A. & Tirmaa-Klaar, H. (2011). Cybersecurity and Cyberpower: Concepts, Conditions and Capabilities for Cooperation for Action Within the EU. Reference number EP/EXPO/B/SEDE/FWC/2009-01/Lot6/09. Directorate-General for External Policies, European Parliament.
Ministerie van Veiligheid en Justitie. (2012). Dutch National Cyber Security Centre. Retrieved 15 November 2012 from http://www.govcert.nl/english/service-provision/knowledge-and-publications/national-cyber-security-centre/ncsc.html
Otoom, A., & Atoum, I.A.F. (2012). An Implementation Framework (IF) for the National Information Assurance and Cyber Security Strategy (NIACSS) of Jordan [Electronic Version]. IAJIT. Retrieved 15 November 2012 from www.ccis2k.org/iajit/PDF/vol.10,no.4/4842-10.pdf
Phahlamohlaka, L. J., Jansen van Vuuren, J. C., & Coetzee, A. J. (2011). Cyber Security Awareness Toolkit for National Security: An Approach to South Africa's Cyber Security Policy Implementation. Proceedings of the First IFIP TC9/TC11 Southern African Cyber Security Awareness Workshop (SACSAW). Gaborone, Botswana.
Ritchie, T. (1997). Scenario Development and Risk Management Using Morphological Field Analysis. Proceedings of the 5th European Conference on Information Systems, Cork Publishing Company, Vol. 3, pp. 1053-1059.
SA Government Gazette. (2010). South African National Cybersecurity Policy. Retrieved 2 March 2011 from http://www.pmg.org.za/files/docs/100219cybersecurity.pdf
South Africa Government Information. (2012). Statement on the Approval by Cabinet of the Cybersecurity Policy Framework for South Africa. Retrieved 21 October 2012 from http://www.info.gov.za/speech/DynamicAction?pageid=461&sid=25751&tid=59794
Tiirmaa-Klaar, H. (2010). International Cooperation in Cyber Security: Actors, Levels and Challenges. Proceedings of Cyber Security 2010, Brussels.
US Cyber Command Public Affairs. (2011). US Cyber Command. Retrieved 4 January 2013 from http://www.stratcom.mil/factsheets/Cyber_Command/
Warren, M. J., & Leitch, S. (2011). Protection of Australia in the Cyber Age. International Journal of Cyber Warfare and Terrorism, Vol. 1, No. 1, pp. 35-40.



Replication and Diversity for Survivability in Cyberspace: A Game Theoretic Approach
Charles Kamhoua1, Kevin Kwiat1, Mainak Chatterjee2, Joon Park3 and Patrick Hurley1
1 Air Force Research Laboratory, Information Directorate, Cyber Assurance Branch, Rome, New York, USA
2 University of Central Florida, Electrical Engineering and Computer Science Dept, Orlando, Florida, USA
3 Syracuse University, School of Information Studies (iSchool), Syracuse, New York, USA
charles.kamhoua@rl.af.mil
kevin.kwiat@rl.af.mil
mainak@eecs.ucf.edu
jspark@syr.edu
patrick.hurley@rl.af.mil

Abstract: An effective defense-in-depth avoids a large percentage of threats and defeats those threats that turn into attacks. When an attack evades detection, it may disrupt the systems and networks, and then the need for survivability is more critical. In this context, mission assurance seeks to ensure that critical mission essential functions (MEFs) survive and fight through the attacks against the underlying cyber infrastructure. Survivability represents the quantified ability of a system, subsystem, equipment, process, or procedure to function continually during and after a disturbance. US Air Force systems carry varying survivability requirements depending on MEF criticality and protection conditions. Almost invariably, however, replication of a subsystem, equipment, process, or procedure is necessary to meet a system's survivability requirements. Therefore, the degree of replication within a system can be paramount for MEF survival. Moreover, diversity will prevent the same fault or attack from damaging all the replicas so that they can continue the mission. This research shows that the more dangerous vulnerabilities (those that affect more replicas) in a system are sometimes less likely to be exploited. The attacker may be better off when exploiting small vulnerabilities because they will be less protected by the defender. In fact, diversity always gives extra challenges to attackers. This work uses the mathematical framework of game theory to show the significance of replica diversity for mission survival in cyberspace.

Keywords: cybersecurity, diversity, game theory, replication, survivability

1. Introduction
Today, most system and network operators in an organization (academic institute, industry lab, government facility) deploy fairly homogeneous systems, primarily because of ease of maintenance, monitoring, and upgrades. Homogeneity could provide advantages at the level of software systems, configuration files, security protection mechanisms, hardware or devices, network interfaces, etc. However, such a homogeneous environment also allows an attacker to concentrate their efforts on just a few types of systems. If the attackers are successful in finding any vulnerability, then they can exploit it to launch an attack that can potentially affect a large number of systems. Thus homogeneity acts as a catalyst that enhances the asymmetric advantages that attackers enjoy today. For example, in May 2012, the Flame virus was declared the most complex malware ever written by researchers at Kaspersky Labs after infecting approximately 1000 machines, primarily located in Middle Eastern countries. Flame exploited a flaw in the Microsoft certificate licensing service to propagate and used several novel schemes to avoid detection and gather usage data illicitly. The success of the Flame virus was accelerated by the fact that most computers run identical software from Microsoft.

Approved for Public Release; Distribution Unlimited: 88ABW-2012-4886 dated September 10, 2012.

One of the ways to impede attackers is to make the expected payoff much lower than the cost of launching attacks. It is to be noted that attackers would like to use the best possible and most efficient strategies to inflict the maximum damage. Thus, attackers can be discouraged by diversifying the technologies that the systems use. This is because a typical attack exploits a specific vulnerability, and different systems are not likely to be affected. For example, if systems were different, the attackers would have to explore additional vulnerabilities, as a vulnerability in one system might not be effective in other systems. This diversity would cause impediments for the attackers in two ways: i) by increasing the effort required to infect systems, and ii)



by reducing the number of systems that could be infected because of the additional effort required. In summary, the more diversity is introduced in a system, the lower the attacker's payoff from exploiting a system's vulnerability will be. In either case, the return on investment is reduced, making it less profitable to attack.

Generally, although some survivability steps can be applied before an incident, some survivability models are effective after the security mechanisms have failed or after exploitable vulnerabilities have been discovered on a system. By definition, survivability is the capability of a system to fulfill its mission, in a timely manner, even in the presence of attacks, failures, or accidents. To assure system survivability, replication and diversity become two strong components. Replication allows the failure of some replicas to be tolerated, as long as a minimum number survive. For instance, a system using five replicas for the same mission will tolerate the failure of two replicas if using a simple majority vote. Diversity will prevent the same fault or attack from damaging all the replicas so that they can continue the mission.

The main contribution of this paper is to provide an analytical model of replica diversity for critical mission survival using game theory. With the increased complexity of cyberspace, cyber survivability will increasingly rely on theoretical models. Analytical and theoretical approaches such as game theoretic modeling provide a general framework that can be applied to numerous problem-specific scenarios. Game theory is the branch of applied mathematics that formalizes strategic interaction among intelligent rational agents. A game theoretic approach is appropriate because the attacks launched on critical systems are becoming more sophisticated and obviously originate from intelligent agents. Moreover, a game theoretic framework can use the Nash equilibrium profile to predict an intelligent attacker's behavior. This research shows that the more dangerous vulnerabilities (those that affect more replicas) in a system are sometimes less likely to be exploited: the attacker may be better off exploiting small vulnerabilities because they will be less protected by the defender. To the best of our knowledge, there is no prior research that analyzes replica diversity in the framework of game theory.

The remainder of this paper is organized as follows. Section 2 discusses related work. Section 3 presents our game model. From the general framework of Section 3, Section 4 uses a typical scenario to illustrate our game model. Section 5 presents numerical results that confirm the paramount importance of replica diversity in a critical mission. Section 6 concludes the paper and discusses future research directions.

2. Related works
Interest in using game theory to address network security challenges has increased in recent years. This is because game theoretic modeling favors a comprehensive understanding of strategic cyber interaction. Several types of games have been used depending on the specific scenario; different scenarios result in distinct game models. Since the attacker's and defender's goals are purely conflicting, zero-sum games are used in (Nguyen 2009, Kamhoua 2012a). Most game theoretic models assume that all the players are rational. The research in (Sun 2008, Kamhoua 2011) relaxes the assumption of players' rationality and uses the mathematical framework of evolutionary game theory to model network security. In some scenarios, the security game is static (Jormakka 2005, Liu 2006), but in others, the game model is repeated, or more generally stochastic (Nguyen 2009, Shiva 2010, Kamhoua 2012b). A stochastic game is a generalization of a repeated game: in a repeated game, players play the same stage game in all periods, whereas in a stochastic game, the stage game can randomly change from one period to the next. Cyber security games also consider what information each player knows. When the rules of the game and each player's strategy and payoffs are assumed to be common knowledge, the cyber game is of complete information, as in (Jormakka 2005). Otherwise, we have a game of incomplete information that can be formulated as a Bayesian game, as in (Liu 2006). The work in (Liu 2006, Agah 2004) modeled intrusion detection as a game. Information sharing in online social networks is modeled using a Markov decision process in (Park 2012) and a zero-sum Markov game in (Kamhoua 2012a). The research in (Kamhoua 2012b) uses a repeated voting game among replicated nodes to extend mission survival times in a critical mission.

A key component of game theoretic modeling of cybersecurity is to find the Nash equilibrium of the cybersecurity game. At a Nash equilibrium profile, no player can increase his payoff by a unilateral deviation; each player is playing his best response to the other players' strategies. As a consequence, the network defender can use the Nash equilibrium profile to predict the attacker's behavior. A survey of game theory as applied to network security is provided in (Roy 2010, Alpcan 2010), and a detailed presentation of game theory is found in (Myerson 1997). As we can see, game theory has provided a solid mathematical framework to model cyber security. Nevertheless, to the best of our



knowledge, this is the first work that investigates replicas' diversity for cyber survivability in the framework of game theory.

3. Game model
Our game model focuses on malicious faults caused by an intelligent attacker. By intelligent, we mean that the attacker can analyze and understand the system and respond by launching sophisticated attacks that will cause the maximum damage given the defensive actions. Clearly, the attacker's goal is to impact the maximum number of replicas: the greater the number of compromised replicas, the more likely the mission will fail. On the other hand, the goal of the network defender is to minimize the number of compromised replicas. The conflicting nature of the objectives makes the scenario ideal to be modeled as a game. Moreover, a successful attack necessarily means a failure by the defender, and a successful defense means a failed attack. Therefore, we model the conflict as a 2-player zero-sum game.

Let us assume that there are N diverse replicas of a node running a mission essential function, denoted as Ri, where 1 ≤ i ≤ N. Diverse replicas mean that the replicas are not perfect copies of each other. Though we assume that the replicas execute the same function and should yield the same results, their protection mechanisms are different. That is, each node of those N replicas is exposed to different vulnerabilities. Since we are dealing with a survivability model, let us also assume there are known vulnerabilities in the system. The vulnerabilities here include all the vulnerabilities in the replicas, where each replica may have its own set of vulnerabilities. Certainly, a critical system must be strengthened with a survivability model and a fight-through capability to be able to continue its operation despite known vulnerabilities. Moreover, the network defender or system administrator must continuously scan their systems in search of new vulnerabilities before any attacker can exploit them. The defender must also find the potential attack strategies of an intelligent attacker as well as the best defense strategies against those attacks. Cautiously, the defender always acts as if the system is under attack by an intelligent attacker having knowledge of the system. For instance, the 57th Information Aggressor Squadron executes cyberspace operations by emulating current and emerging threat capabilities and tactics and providing adversary operational and tactical influence operations and network operations (Online). In contrast, the attacker also scans the system and would like to exploit new vulnerabilities before the defender becomes aware of them.

At any time, the set of discovered vulnerabilities can be represented using a Venn diagram as depicted in Figure 1. Figure 1 shows 5 vulnerabilities (i.e., V1, V2, ... V5). V1 is a vulnerability known only to the attacker. The vulnerabilities V2, V3, and V4 are common knowledge between the attacker and the defender. V5 is a vulnerability known only to the defender. Finally, there could be other vulnerabilities that are unknown to both the attacker and the defender; however, these vulnerabilities will not appear anywhere in the game formulation or solution, so they are omitted from the Venn diagram.

Figure 1: Venn diagram representation of discovered vulnerabilities (attacker's knowledge: V1; common knowledge: V2, V3, V4; defender's knowledge: V5)

However, the defender does have a strategic advantage in the sense that the defender's scanning of their own systems is designed with the authority to by-pass any alarms that the attacker would have to try and avoid. Furthermore, the defender's scanning would also be designed to fully anticipate any defensive system agility (e.g., IP hopping) that is meant to confound an attacker. Therefore, we might reasonably assume that the defender has an edge in finding the vulnerabilities before the attacker. Stating that we let the attacker have equal knowledge then concedes an advantage to the attacker that he would not likely have otherwise



- thus it might be considered a worst-case scenario. Therefore, we assume that the subset of vulnerabilities affecting each of the replicas is common knowledge between the attacker and the defender. Thus, each player can design a vulnerability matrix that will help to optimize their attack and defense strategy accordingly.

Let the discovered vulnerabilities be denoted by Vj (1 ≤ j ≤ K), where K is the total number of vulnerabilities in the system. Each of the replicas can be exposed to any number of vulnerabilities. In general, a replica Ri is exposed to m vulnerabilities, where 0 ≤ m ≤ K. To obtain the vulnerability matrix, we define a binary variable xij (1 ≤ i ≤ N, 1 ≤ j ≤ K) as follows:

\[
x_{ij} =
\begin{cases}
1 & \text{if replica } R_i \text{ is exposed to vulnerability } V_j \\
0 & \text{otherwise}
\end{cases}
\]

The vulnerability matrix can be represented as in Table 1.

Table 1: Vulnerability matrix

            R1     R2     R3    ...    Ri    ...    RN
    V1     x11    x21    x31          xi1          xN1
    V2     x12    x22    x32          xi2          xN2
    V3     x13    x23    x33          xi3          xN3
    ...
    Vj     x1j    x2j    x3j          xij          xNj
    ...
    VK     x1K    x2K    x3K          xiK          xNK

We consider that the attacker can only exploit one of the vulnerabilities at a time. Similarly, the defender, defending all the replicas, can choose to defend any of the vulnerabilities; however, it can defend against only one of the vulnerabilities at a time. We also consider that the attack can still be successful with probability p (0 ≤ p ≤ 1) although the defender defends against the vulnerability exploited by the attacker. We consider the value of p to be common knowledge between the attacker and the defender.

Let the strategies of the attacker be Ej (1 ≤ j ≤ K), which correspond to exploiting vulnerabilities Vj (1 ≤ j ≤ K) respectively. Similarly, the defender's strategies are Pk (1 ≤ k ≤ K), which correspond to protecting against vulnerabilities Vk (1 ≤ k ≤ K) respectively. In fact, Pj is the only suitable defense strategy against the attack Ej because Pj protects the replicas against the vulnerability Vj being exploited by the attacker. We consider Ej and Pj at the abstract level, including primitive operations. Although exploiting the vulnerability Vj is abstracted by a single strategy Ej, many of the strategies may consist of a multi-stage process involving steps such as scanning, collecting information, and launching the attack. Similarly, the defender strategy Pk may consist of multiple actions such as system monitoring, reconfiguration, and patching.

The attacker's payoff is the number of compromised replicas, while the defender's payoff is its opposite since the game is zero-sum. When the attacker plays Ej (1 ≤ j ≤ K) while the defender plays Pk (1 ≤ k ≤ K), the attacker's payoff is:

\[
u_a(E_j, P_k) =
\begin{cases}
\sum_{i=1}^{N} x_{ij} & \text{if } k \neq j \\[4pt]
p \sum_{i=1}^{N} x_{ij} & \text{if } k = j
\end{cases}
\qquad (1)
\]

In the first case, we have k ≠ j. The attack Ej is successful because the defender chooses a strategy other than Pj. Therefore, the attacker may compromise all the replicas that are subject to vulnerability Vj. In the second case, when k = j, the defender protects against the vulnerability Vj being exploited by the attacker. Nevertheless, there is still a probability p of a successful attack; thus, the attacker's payoff is probabilistic. There are two distinct interpretations of the probability p that yield the same payoff as in (1). First, it can be



that all the replicas that share a vulnerability fail together or survive together when that vulnerability is exploited by the attacker. Then, p is the probability that all the replicas sharing a vulnerability fail, given that the attacker exploits that vulnerability while the defender protects against it. In the second interpretation, the replicas that share a vulnerability may fail independently. Thus, p may be interpreted as the probability that a replica having a vulnerability fails, given that the attacker exploits that vulnerability while the defender protects against it. In either case, the optimum defense strategy will depend on the specific normal form game, which in turn depends on the number of replicas, the number of vulnerabilities, and the corresponding vulnerability matrix. Moreover, the defender can have an actual appraisal of the loss only after the attacker exploits a vulnerability. In turn, the attacker would not know the potential losses the defender will incur for the different attacks it can launch. Without knowing the defender's defense strategy and payoff, the attacker will not have a definite strategy that maximizes the damage. Such a game with incomplete information can be modeled as a Bayesian game by considering that the distribution over the payoff from exploiting a vulnerability is common knowledge among the two players. Such considerations are not the primary focus of this paper. In this paper, we propose our game theoretic approach for mission survivability using replication and diversity based on the following scenario with complete information.
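As a minimal sketch of the payoff rule in equation (1) (our own illustration; the variable names and example values are not from the paper), the following computes the attacker's payoff for a pure strategy pair (Ej, Pk) from a binary vulnerability matrix; the defender's payoff is simply the negation, since the game is zero-sum.

```python
import numpy as np

def attacker_payoff(X, j, k, p):
    """Attacker payoff u_a(Ej, Pk) from equation (1).

    X : K x N binary vulnerability matrix (row j = vulnerability Vj,
        column i = replica Ri)
    j : exploited vulnerability (0-indexed)
    k : protected vulnerability (0-indexed)
    p : probability the attack succeeds even when Vj is defended
    """
    exposed = int(X[j].sum())        # number of replicas exposed to Vj
    return p * exposed if k == j else exposed

# Illustrative 2-vulnerability, 2-replica example (values are made up).
X = np.array([[1, 0],
              [1, 1]])
print(attacker_payoff(X, j=1, k=1, p=0.3))   # defended: 0.3 * 2 = 0.6
print(attacker_payoff(X, j=1, k=0, p=0.3))   # undefended: 2
```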

4. Model illustration
An illustrative example (see Table 2) shows 9 replicas (N = 9) and 5 vulnerabilities (K = 5) in the entire set of replicas. Note that replica 6 is exposed to all 5 vulnerabilities, whereas replica 3 is exposed to none. We highlight that the same replica can be affected by different vulnerabilities and a given vulnerability can affect several replicas. Also, from an attacker's perspective, an exploitation of vulnerability 1, 2, 3, 4, or 5 will affect 2, 3, 4, 4, or 6 replicas respectively (the attacker's payoff). Those are the sums over i of xij that we mentioned in (1); they are obtained by summing the "1" entries in each row of Table 2.

Table 2: Vulnerability matrix (9 replicas with 5 vulnerabilities)

           R1   R2   R3   R4   R5   R6   R7   R8   R9
    V1      1    0    0    0    0    1    0    0    0
    V2      0    1    0    0    0    1    0    1    0
    V3      1    0    0    1    0    1    0    0    1
    V4      0    1    0    0    1    1    0    1    0
    V5      1    1    0    1    0    1    1    0    1

From the general game model described above, the vulnerability model of Table 2 maps to the normal form game of Table 3 when we consider p = 0. When the attacker plays E1 while the defender plays P1, the attacker gets a payoff of zero because the defender has protected the replicas against the vulnerability V1. Since the game is zero-sum, the defender also gets a payoff of zero. However, when the attacker plays E1 while the defender plays a strategy other than P1 (say P2, P3, P4, or P5), the defender has failed to protect against the vulnerability V1. The two replicas R1 and R6 are compromised (see Table 2): the attacker gets a payoff of 2 while the defender gets -2. The same rationale holds for the other four attack strategies. We can see that no strategy is dominated.

By definition, at a Nash equilibrium profile, no player can increase his payoff by a unilateral deviation; moreover, each player plays a best response to the behavior of the other players. We can see that there is no pure strategy Nash equilibrium in the game of Table 3. To check this, suppose, for instance, that the attacker plays E5 when the defender plays P5; both players get a payoff of zero. The attacker's best response is then to change his strategy to E4 and increase his payoff to 4. After that, the defender's best response is to change his strategy to P4, and so on. No pure strategy profile will be stable; one of the players will always have an incentive to deviate.



Table 3: Normal form game (attacker's payoff, defender's payoff); rows are the attacker's strategies, columns the defender's strategies

             P1      P2      P3      P4      P5
    E1      0,0    2,-2    2,-2    2,-2    2,-2
    E2     3,-3     0,0    3,-3    3,-3    3,-3
    E3     4,-4    4,-4     0,0    4,-4    4,-4
    E4     4,-4    4,-4    4,-4     0,0    4,-4
    E5     6,-6    6,-6    6,-6    6,-6     0,0

To obtain the mixed strategy Nash equilibrium profile, the defender randomizes to make the attacker indifferent, and the attacker randomizes to make the defender indifferent; the attacker and the defender then play best responses to each other. The mixed strategy Nash equilibrium profile is (0E1, 0E2, 0.375E3, 0.375E4, 0.25E5; 0P1, 0P2, 0.25P3, 0.25P4, 0.5P5). The attacker's payoff is 3 and the defender's payoff is -3 (a numerical check follows the observations below). Let us make two important observations about this Nash equilibrium profile:

The vulnerabilities V1 and V2 are neither exploited by the attacker nor protected by the defender. This is because they yield a lower value (in terms of the number of vulnerable replicas) relative to the other three vulnerabilities.

The vulnerabilities that are present in more replicas are protected by the defender with a higher probability. This constrains the attacker to exploit those vulnerabilities less often; e.g., the attacker plays E5 25% of the time and plays E3 and E4 37.5% of the time each, although E5 has a higher value.
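The equilibrium above can be verified numerically. The sketch below (our own code; the paper does not prescribe a solver) casts the attacker's side of the zero-sum game of Table 3 as a linear program and solves it with scipy. Because the game is degenerate, a solver may return an equivalent optimal mixture, but the game value of 3 is unique.

```python
import numpy as np
from scipy.optimize import linprog

# Attacker's payoff matrix from Table 3 (rows E1..E5, columns P1..P5, p = 0).
counts = np.array([2, 3, 4, 4, 6])           # replicas hit by V1..V5
A = np.tile(counts[:, None], (1, 5)).astype(float)
np.fill_diagonal(A, 0.0)                     # a defended vulnerability yields 0

# Maximize v subject to: for every defender column k, sum_j y_j A[j,k] >= v,
# with y a probability vector. Variables: [y_1..y_5, v]; linprog minimizes.
K = 5
c = np.zeros(K + 1)
c[-1] = -1.0                                 # minimize -v, i.e. maximize v
A_ub = np.hstack([-A.T, np.ones((K, 1))])    # v - sum_j y_j A[j,k] <= 0
b_ub = np.zeros(K)
A_eq = np.hstack([np.ones((1, K)), [[0.0]]])
b_eq = [1.0]
bounds = [(0, 1)] * K + [(None, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("attacker mixture:", np.round(res.x[:K], 3))  # e.g. [0 0 0.375 0.375 0.25]
print("game value:", round(res.x[-1], 3))           # 3.0
```

Solving the transposed game with negated payoffs gives the defender's mixture in the same way.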

We will now consider the case where the attack can still be successful with probability p even though the defender defends against the vulnerability exploited by the attacker. In that case, the game in Table 3 translates to the more general game in Table 4.

Table 4: Normal form game (attacker's payoff, defender's payoff); rows are the attacker's strategies, columns the defender's strategies

              P1        P2        P3        P4        P5
    E1     2p,-2p      2,-2      2,-2      2,-2      2,-2
    E2      3,-3     3p,-3p      3,-3      3,-3      3,-3
    E3      4,-4      4,-4     4p,-4p      4,-4      4,-4
    E4      4,-4      4,-4      4,-4     4p,-4p      4,-4
    E5      6,-6      6,-6      6,-6      6,-6     6p,-6p

The Nash equilibrium profile will depend on the specific value of p. In addition, as the probability p increases, the attacker's strategies E1, E2, E3, and E4 become strictly dominated by E5, and the same holds for the corresponding defense strategies. For instance, E5 strictly dominates E1 if p > 1/3: the minimum payoff the attacker gets by playing E5 (6p) is then greater than the maximum payoff (2) the attacker can get by playing E1. Similarly, E5 strictly dominates E1 and E2 if p > 1/2, and E5 strictly dominates E1, E2, E3, and E4 if p > 2/3. The pure strategy profile (E5, P5) is a strict Nash equilibrium for p > 2/3.

When similar replicas are used, the vulnerability matrix of Table 2 changes: all the "0" entries are replaced by "1", because each of the vulnerabilities automatically affects all 9 replicas. As a consequence, the resulting game in normal form is represented in Table 5. The mixed strategy Nash equilibrium profile is (0.2E1, 0.2E2, 0.2E3, 0.2E4, 0.2E5; 0.2P1, 0.2P2, 0.2P3, 0.2P4, 0.2P5). The attacker's payoff increases with probability p as shown in Figure 2. We will see that this game is always favorable to the attacker.

Table 5: Normal form game (attacker's payoff, defender's payoff); rows are the attacker's strategies, columns the defender's strategies

              P1        P2        P3        P4        P5
    E1     9p,-9p      9,-9      9,-9      9,-9      9,-9
    E2      9,-9     9p,-9p      9,-9      9,-9      9,-9
    E3      9,-9      9,-9     9p,-9p      9,-9      9,-9
    E4      9,-9      9,-9      9,-9     9p,-9p      9,-9
    E5      9,-9      9,-9      9,-9      9,-9     9p,-9p




5. Numerical results
This section provides a more detailed analysis of our model illustration of the last section. Of particular importance will be the probability p. In fact, the probability p measures the defense capability compared to the attacker's. When an experienced and skillful network defender faces a weak attacker, the attacker has no chance to successfully exploit a vulnerability that is protected by the defender, and thus p = 0. On the contrary, when an expert attacker opposes an unskilled defender, that attacker can always go around the protection mechanism implemented by the defender, and then p = 1. We should have 0 < p < 1 when both the attacker and defender are competent.

Figure 2 shows the changes in the attacker's payoff with probability p in two scenarios: without replica diversity and with replica diversity. The defender's payoff is the opposite since we have a zero-sum game. As expected, the attacker's payoff increases with probability p in both scenarios: a more skillful attacker will get a higher payoff. With diverse replicas, the attacker's payoff increases slowly and linearly with a slope of 1.5 until the probability p reaches a value of 2/3; the attacker's payoff then starts a faster linear increase with a slope of 6. This is due to a change from a mixed strategy Nash equilibrium to a pure strategy Nash equilibrium, as shown in Figure 3. On the other hand, with similar replicas, the attacker's payoff increases linearly in p with a slope of 1.8. We can see that diversity always gives extra challenges to attackers. With diverse replicas, the attacker gets on average less than half of the payoff it would get if similar replicas were used. Moreover, even when p = 1 and diverse replicas are used, the attacker is still worse off compared to the case where p = 0 and similar replicas are used. This indicates that a less skillful defender who diversifies his replicas is always better off than a more skillful defender using similar replicas.
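The two curves in Figure 2 can be reproduced by re-solving each game over a grid of p values. A sketch under the same assumptions as the earlier block follows (solve_value is our own helper, not from the paper); it exhibits the slopes of 1.5 up to p = 2/3 and 6 afterwards for diverse replicas, and the slope of 1.8 for similar replicas.

```python
import numpy as np
from scipy.optimize import linprog

def solve_value(A):
    """Value of the zero-sum game with attacker payoff matrix A."""
    K = A.shape[0]
    c = np.zeros(K + 1)
    c[-1] = -1.0                                  # maximize the game value v
    res = linprog(c,
                  A_ub=np.hstack([-A.T, np.ones((A.shape[1], 1))]),
                  b_ub=np.zeros(A.shape[1]),
                  A_eq=np.hstack([np.ones((1, K)), [[0.0]]]), b_eq=[1.0],
                  bounds=[(0, 1)] * K + [(None, None)])
    return res.x[-1]

counts = np.array([2, 3, 4, 4, 6], dtype=float)   # row sums of Table 2
for p in np.linspace(0, 1, 11):
    diverse = np.tile(counts[:, None], (1, 5))
    np.fill_diagonal(diverse, p * counts)         # Table 4
    similar = np.full((5, 5), 9.0)
    np.fill_diagonal(similar, 9.0 * p)            # Table 5
    print(f"p={p:.1f}  diverse={solve_value(diverse):.2f}  "
          f"similar={solve_value(similar):.2f}")
```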

6. Conclusion This work has used a game theoretic model to demonstrate the importance of diversity in cyber survivability. As opposed to common belief, we have shown that the more dangerous vulnerabilities in a system are sometimes less likely to be exploited. The attacker may be better off when exploiting small vulnerabilities because they will be less protected. Our results show that the defender is always better off when using diverse replicas. That is because any vulnerability will affect all the replicas when the replicas are perfectly similar to each other. In the future, we will consider incomplete information game in which the attacker skill level and thus the probability p is not common knowledge but private information. We will also look into the case that the attacker can simultaneously exploit multiple vulnerabilities while the defender can also simultaneously protect against several vulnerabilities.

[Figure 2 chart: 'Changes in Players' Payoff with Diversity and Probability p'; y-axis: Players' Payoff; x-axis: Probability p; series: Attacker's Payoff with Similar Replicas, Attacker's Payoff with Diverse Replicas, Defender's Payoff with Diverse Replicas, Defender's Payoff with Similar Replicas]
Figure 2: Reduction of attacker's payoff with replicas' diversity

[Figure 3 chart: 'Changes in Attacker's and Defender's Mixed Strategy with Probability p'; x-axis: Probability p; series: Attacker's Probability of Playing E3 or E4, Attacker's Probability of Playing E5, Defender's Probability of Playing P3 or P4, Defender's Probability of Playing P5; annotation: Mixed Strategy Nash Equilibrium]

Figure 3: Changes in the Nash equilibrium strategy profile with probability p (with replicas’ diversity).

Acknowledgements
This research was performed while Dr. Joon Park held a National Research Council (NRC) Research Associateship Award at the Air Force Research Laboratory (AFRL). This research was supported by the Air Force Office of Scientific Research (AFOSR).




Situation Management in Aviation Security – A Graph-Theoretic Approach
Rainer Koelle1,2 and Denis Kolev2
1 EUROCONTROL, Brussels, Belgium
2 Lancaster University, Lancaster, UK
rainer.koelle@eurocontrol.int
denis.g.kolev@gmail.com

Abstract: This paper addresses support to aviation security incident management within distributed and highly interconnected systems of systems like SESAR and NextGen. Explicitly, we address the problem of designing an information-centric approach to situation management. The management of an in-flight security incident requires the collaboration of various stakeholders with different information needs (e.g. national crisis cell, military, police, airports, airlines, ATM). A graph-theoretic approach is chosen to model and investigate the design requirements for an aviation security incident management capability. Situation management related information is modelled as information flows under associated network performance constraints. The network model is described as a feasibility and optimisation problem and the solution of a set of performance/constraint functions. These constraints represent resource limitations, capabilities of the agents, and the required infrastructure features (e.g. redundancy). The goal of this research is to develop a decision-support system for aviation security incident management. This paper presents the approach, initial modelling and design of such a capability. An algorithm for solving the corresponding feasibility and optimisation problem was developed. The model and algorithm are validated as part of a preparatory action for an upcoming European ATM security project. The results obtained demonstrate the feasibility of an information-centric approach to situation management, explicitly its application to aviation security. This allows aviation security incident stakeholders to address the operational challenges in a more fine-tuned and timely manner. The graph-theoretical results were validated and proven through demonstration simulations. The approach and model discussed in this paper can be used for dynamic multi-agent coordination and collaboration and have the potential to systematically address information exchange requirements between distributed stakeholders in time-critical contexts, e.g. aviation security, critical infrastructure protection, and mission-critical systems.

Keywords: situation management, aviation security incident management, graph-theory, feasibility problem, optimisation problem

1. Introduction
The focus of this paper is on the support to aviation security incident management (AVSIM) within distributed and highly interconnected systems of systems like SESAR (Europe: Single European Sky ATM Research) and NextGen (United States: Next Generation Air Transportation System). AVSIM contexts can be described as a network of collaborating agents, and we address the problem of designing an information-centric approach for establishing such a situation management capability. The Air Transportation System (ATS) is a complex system comprising a variety of stakeholders with different organisational and/or operational objectives (e.g. airlines providing cost-effective on-time service, air traffic control ensuring the safe, expeditious and orderly flow of air traffic, airports providing ground facilities and services). States typically allow access to (portions of) the airspace under the assumption of regular airspace operations. Security agencies (e.g. national crisis cells) and authorities (e.g. national government authority, air defence, law enforcement) are tasked with the assurance of the integrity of the territory/airspace and national security. None of these stakeholders can address AVSIM in isolation, and effective incident management is a shared activity between these stakeholders. In recent years the global response to structured attacks against civil aviation has concentrated on preventive aircraft and airport security measures. In consequence, about a decade after 9/11, the ATS still lacks an effective AVSIM capability. Initial work on an AVSIM capability has been conducted as part of pan-European research projects, for example, SAFEE – Security of Aircraft in the Future European Environment, PATIN – Protection of Air Transportation and Infrastructure, and ERRIDS – European Regional Renegade Information Dissemination System, an initial NATO/EUROCONTROL demonstration project. Similar research efforts have been reported in the United States (Koelle 2012). However, the results are not carried forward under the umbrella of the ongoing transformation programmes SESAR and NextGen. In the absence of a clear political goal for Aviation Security, both programmes focus on operational improvements and technical enablers rather than addressing AVSIM. The GAMMA (Global ATM Security Management) project is addressing this research gap by focussing



on requirements and architecture components for a comprehensive set of security capabilities in the future ATS and ATM System (GAMMA 2012). This paper will first present the background and motivation that led to the development of the reported graph-theoretic approach to situation management and the design of an AVSIM capability. The conceptual model and its subsequent problem formulation are described in section 3. In section 4, the research approach and results are presented. Finally, section 5 closes this report with our conclusion and ideas for future work.

2. Background
The research reported in this paper addresses AVSIM from a situation management perspective. In this section we briefly review the conceptual building blocks and identify the research/capability gap.

2.1 Situation management
Situation Management is an emerging paradigm. Jakobson et al (2005) introduce the term 'Situation Management' for the collectively identifiable operations revolving around situation monitoring (sensing), awareness (reasoning), and control (acting) in dynamic and operational environments. Alfredson (2007) stresses the process of managing dynamic situations by combining internal and external resources throughout the sense-reason-action cycle. Koelle (2012) defines situation management as a distributed decision-making and multi-agent problem based on an information-centric approach suitable for situation analysis and resource- and action-management. Situation management exists in various time-critical decision domains, including military command and control, homeland security, emergency/crisis management, mission-critical systems, and medicine. A comparison of these domains allows for the identification of a set of common characteristics:

a set of collaborating agents/entities;

dynamic and time‐critical scenarios;

a finite window for decision‐making; and

a commonly shared objective (e.g. reduce impact of hostile attacks, avoid mid‐air collisions, patient stabilisation after reanimation).

2.2 Air transportation system transformation
More than a century after the invention of powered flight, air transportation has become a global industry and driver for economic growth. Despite recent downturns (e.g. the 9/11 attacks, the economic crisis), air transportation is reported to be the fastest growing transportation sector after the World Wars. With the beginning of the 21st century, aviation is undergoing a transformation and novel technologies are readying for deployment, both in ground-based and in airborne or space-based systems. Recent aircraft advancements revolve around 'eEnabled' aircraft. This concept entails the integration of sensors, information and communication systems to enhance flight operations, operational efficiency (e.g. reduction of turn-around times) and maintenance (e.g. engine wear and tear), ultimately enhancing aircraft operator revenue (Koelle 2012). In order to meet the projected growth of air travel of 3-5%, new operational concepts, systems and technologies need to be developed and implemented (JPDO 2007/2011; SESAR 2007/2008). Associated air traffic management programmes are underway in Europe (SESAR), the United States (NextGen), and Japan (Collaborative Action for Renovation of Air Transport System [CARATS]). We place this research in the context of future systems, higher levels of interconnectivity between ground-based and airborne systems, and associated communication technologies. Explicitly, we address the use of these technologies for security-related information processing.

2.3 Aviation security
In Annex 17 to the International Convention on Civil Aviation, ICAO defines aviation security as "a combination of measures and human and material resources intended to safeguard civil aviation against acts of unlawful



interference" (ICAO 2011). Attacks on air transportation have been a concern since the beginning of commercial aviation. The first recorded aircraft hijacking occurred in the 1930s. Structural targeting of air transportation emerged during the 1960s and subsequently led to the adoption of Annex 17 in 1974. Since then Annex 17 has been updated multiple times in response to aviation security incidents and associated lessons learnt. Critics refer to this reactive approach as a 'fighting the last war' mentality. There is a strong focus on preventive airport and aircraft controls driven by historic attack methods. Today's security regimes are designed to detect 'bad people' and 'bad items' and ban them from boarding. The aforementioned transformation will require a paradigm change and a push away from the physically dominated approach to aviation security. The recent introduction of a recommendation for cyber security in Annex 17 (ICAO 2011) is an example of a more forward-looking perspective and the recognition of the prevalence of information and communication systems in air transportation, including the increasing interconnectivity between ATS systems and functions (Koelle, Markarian and Tarter 2011).

2.4 Capability gap
Arguably, the 9/11 attacks changed the aviation world forever. The major lesson learnt from this event was the lack of an incident management capability, including cross-organisational/jurisdictional information sharing. In the aftermath of 9/11, a detailed review of national and regional response capabilities was conducted and a series of research projects was launched. For example, the ERRIDS concept and the US Domestic Events Network are targeted at improving the coordination and collaboration between different national or regional AVSIM agencies/stakeholders. However, these initial concepts and capabilities have not been extensively developed further. At present, the focus of SESAR, NextGen, and CARATS is on operational and technical enablers for enhancing air traffic operations. Little focus is given to the integration of security functions and capabilities (Koelle and Tarter, 2012).

3. Modelling situation management in aviation security
In this section we introduce the conceptual view and develop our graph-theoretic interpretation of the application domain.

3.1 Conceptual view
Within the European context, there is a mix of (non-)EU and (non-)NATO member states, and national regimes and bilateral agreements may overturn harmonised procedures. Accordingly, the set of stakeholder entities involved in the response to an incident changes dynamically given the specifics of the cross-border scenario. Figure 1 projects the trajectories of three 9/11 flights onto the European environment to describe a hypothetical scenario and to highlight the increased coordination requirements between the involved stakeholders.

Figure 1: Cross‐border scenario



For this paper we present a simplified stakeholder categorisation: The operational dimension is represented by the aircrew, air traffic control and airports. National air defence is typically tasked with the categorisation of incidents and the control of tactical air operations by military aircraft. In the European context, the first task is jointly managed by NATO and national resources. After the 9/11 attacks, various states have reviewed their procedures and established aviation incident centres or expanded the mission of existing aviation emergency operation centres. Table 1 presents the mix of ATS and non-ATS stakeholders, including their security incident management roles. Net-centric operations have become the mantra in many time- and mission-critical system disciplines. Conceptually, the AVSIM stakeholders form information agents in an incident management network. This approach allows us to define and describe a net-centric AVSIM capability within the system-wide information management (SWIM) concept proposed by SESAR and NextGen.

Table 1: Stakeholder categories

Stakeholder | Primary ATS Role and Responsibilities | Security Incident Management Role
Aircrew/Aircraft | safe operation of aircraft and passengers | potential first-hand information
ATC | separation and synchronisation of air traffic | flight-related information; first speaker
Airports | facilitation of airspace user operations and passenger flows | flight-related information; ground intervention facilities
NATO/national Air Defence | identification of flight operations; tactical control of military operations | tactical operations in support of national security
Intelligence and law enforcement | - | supporting information; intervention resources
National crisis centres | - | coordination between decision-making and operations

3.2 Model formulation
The problem of distributed multi-agent collaboration in AVSIM has been defined in Koelle and Tarter (2012). The distribution of resources (e.g. data, information) between different actors (agents) is essential when determining interdependencies to establish timely situation awareness. Because of these interdependencies, an effective networking mechanism is required to fuse information and achieve results on a network level. This can be broken down into two interrelated problems:

Feasibility: identification of the appropriate incident management network (network configuration); and

Optimisation: ensuring the appropriate information exchange between the source nodes and all relevant receiver nodes.

From a graph-theoretic perspective, the incident management network is represented by a graph G comprising n network nodes (i.e. vertices), V = {v1,…,vn}. These nodes represent stakeholder entities (e.g. operation centres) and associated processes or sensors (c.f. Table 1). Connectivity and information exchange between these nodes is represented by the set E of edges (information channels) between these nodes, E = {eij | vi and vj interconnected}. This incident management model is specified in Table 2. We define the general notion of 'situation' as a composite of 'situational elements'. These situational elements can be interpreted as the outputs (information types) processed by the different nodes (e.g. sensor results, process outcomes/actions). Hence, a situation can be described as the factored set of situational elements ik and the different types of information given by the set I = {i1,…,im}. In our scenario, I comprises sensor measurements (e.g. surveillance - aircraft position), telemetry (e.g. cockpit noise, pilot heart beat), video (e.g. cabin surveillance), and voice (e.g. radio or phone coordination).

Table 2: Graph-theoretical formulation
graph G comprising the set of vertices (nodes) V and the set of edges (links) E
set of possible information types ik
weight function of a communication channel
cost function
loading function
connectivity function

With Table 2, we can further define different characteristics of the graph-model:

weight function: for each type of information ik and information channel between vi and vj the weight function characterises the ‘hardness’ of information exchange.

cost function: this function characterises the link between two nodes and is independent of the actual information transmitted via this link. We use the term ‘cost’ to describe limitations imposed on the model that are linked to resource constraints (e.g. monetary resource required to establish and maintain the link).

loading function: refers to structural aspects of the node and its connectivity. The load factor provides a measure to determine the maximal number of connections a node can support (i.e. number of incoming links). It is a measure of the capacity of every node and introduces an upper limit to the incoming data flow.

connectivity function: this function defines the minimal number of incoming links. It ensures that a connected graph exists and determines the number of redundant channels for each information type. (A minimal data-structure sketch of this model follows below.)
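To make the formulation concrete, the following sketch shows one way the elements of Table 2 could be represented in code. It is an illustration only, not part of the paper; all node names, limits and weight values are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    load_limit: int            # loading function: max number of incoming links
    min_links: int             # connectivity function: required incoming links
    provides: set = field(default_factory=set)  # information types sourced here

@dataclass
class Edge:
    u: str
    v: str
    cost: float                # cost function: information-independent link cost
    weight: dict = field(default_factory=dict)  # per-type 'hardness' of exchange

# A hypothetical fragment of an incident management network (c.f. Table 1):
nodes = {
    "aircraft": Node("aircraft", load_limit=3, min_links=1,
                     provides={"telemetry", "video"}),
    "ATC": Node("ATC", load_limit=5, min_links=2, provides={"surveillance"}),
    "crisis_cell": Node("crisis_cell", load_limit=8, min_links=2),
}
edges = [
    Edge("aircraft", "ATC", cost=1.0, weight={"telemetry": 0.1, "video": 0.8}),
    Edge("ATC", "crisis_cell", cost=2.0, weight={"telemetry": 0.05, "video": 0.4}),
]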

3.3 Interpretation of network characteristics
Figure 2 defines, for a pre-defined information type ik, the weight of a path between different interconnected nodes, p = (vi1,…,vik), for a given incident management network G = (V,E).

Figure 2: Weight of path per information type

The set of all paths between nodes vi and vj in network G is defined as P(G,vi,vj). The optimal path in this set – the path with the minimal weight – is denoted as opt(G,vi,vj,ik). The formulation of an information type and connectivity path dependent weight function (c.f. Figure 2) allows for the representation of different physical communication characteristics. For example, the weight function can represent the delay of an information element in the selected path. The additive nature of the delay can be immediately derived from Figure 2, as the total delay encountered is the sum of the delays of each path-segment. Following this principle of path-segment properties, we can define a multiplicative measure for the transmission error rate. Let perr(vi,vj) denote the error rate for the information channel between nodes vi and vj; then for any given path p = (vi1,…,vik), the conventions of Figure 3 apply. In particular, the term ln(1-perr) is again an additive characteristic of the network and enables us to use the same approach to estimate multiplicative network characteristics.
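The expressions referenced as Figures 2 and 3 are images in the original and did not survive extraction. A plausible reconstruction from the surrounding definitions (the exact notation is assumed) is:

w(p, i_k) = \sum_{j=1}^{k-1} w(v_{i_j}, v_{i_{j+1}}, i_k)
\qquad \text{(Figure 2: weight of path } p = (v_{i_1}, \ldots, v_{i_k}) \text{ for type } i_k\text{)}

p_{err}(p) = 1 - \prod_{j=1}^{k-1} \bigl(1 - p_{err}(v_{i_j}, v_{i_{j+1}})\bigr),
\qquad
P_{correct}(p) = \prod_{j=1}^{k-1} \bigl(1 - p_{err}(v_{i_j}, v_{i_{j+1}})\bigr)
= \exp \sum_{j=1}^{k-1} \ln\bigl(1 - p_{err}(v_{i_j}, v_{i_{j+1}})\bigr)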

3.4 Addressing situation management problems: feasibility and optimisation
Based on the principal approach defined above, we now formulate the feasibility and optimisation problem for situation management and address these problems by defining bounds and constraints for the connectivity of the network.

Figure 3: Error-rate and probability of correct data transmission

3.4.1 Feasibility – acceptable network configurations
Stakeholders in our scenario may have different information needs to support time-critical decision-making. In an ideal world all nodes would be connected to all relevant information sources. However, limitations on the availability of communication channels (e.g. reception range, technology) require a networked approach. Thus, a feasibility constraint bounds the optimal path from receiver node vi to the 'closest' information source. We express 'closeness' as the weight of the optimal path (c.f. Section 3.3). This measure provides an upper limit for the weight of the path. The corresponding type of communication is referred to as the "one-to-any" connection strategy; 'any' referring to the closest source for the specific information element ik.

Figure 4: Feasibility measure

The feasibility constraint can be expressed for a node vi and information type ik as specified in Figure 4, where Tik denotes the sub-set of vertices providing information element ik (source nodes). The parameter ε allows for the specification of a permissible bound. We can further specify an upper bound condition for specific source nodes. With Figure 5 we can require, for a pair of receiver and source node, (vi,vj), and information type ik, that a maximum constraint μ is met.

Figure 5: Optimal information weight

Figure 5 describes the selection of specific source nodes. For example, if the information needs of the receiver require a video stream transmission, then it is necessary to connect to each camera node. The associated communication strategy is referred to as "one-to-every"; 'every' relating to each node with the required information element. With the feasibility constraints given in Table 3 we are able to define a sub-set of nodes, respectively paths and information streams, <vi,vj,ik>. The associated constraints allow for the elimination of configurations (connection channels) between nodes that are not required. This constraint mechanism defines a set of 'acceptable' graphs. The solution to the feasibility problem is the sub-set of graphs that meets the constraint conditions of Table 3. The resulting incident management network Gimn = (V,E) is an acceptable configuration of the original full graph. We interpret 'acceptable' as a configuration that is fit-for-purpose by ensuring that the information needs of the nodes are met.

Table 3: Feasibility constraints
a.) general cost limitation on all links
b.) maximal capacity of links
c.) minimal number of links
d.) upper limit of weight for closest information source
e.) criteria for selection of specific source nodes

3.4.2 Optimality – maximising the performance function
Following the identification of the set of acceptable graphs, the second step in the system analysis comprises the selection of the optimal configuration. We propose a performance function to support the optimisation that can be interpreted as an essential network characteristic in support of the situation management task at hand. Figure 6 describes the total information exchange in the network Gimn = (V,E).

Figure 6: Total information exchange

This function represents the maximal weight of the path that is used for the transmission of a specific information type. Accordingly, we can model the network design as a discrete constraint optimisation problem as described in Figure 7, with respect to the feasibility constraints given in Table 3. The solution ensures that the maximal 'hardness' of information exchange between a source and the most remote nodes is minimal.

Figure 7: Constraint optimisation problem
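The constraint and objective expressions referenced as Figures 4 to 7 are likewise images in the original. A plausible reconstruction in the notation of Section 3.3 (assumed, not taken from the paper) is:

\min_{v_j \in T_{i_k}} w\bigl(opt(G, v_i, v_j, i_k)\bigr) \le \varepsilon
\quad \text{(Figure 4: one-to-any feasibility)}

w\bigl(opt(G, v_i, v_j, i_k)\bigr) \le \mu \quad \forall\, v_j \in T_{i_k}
\quad \text{(Figure 5: one-to-every feasibility)}

M(G) = \max_{i_k \in I}\; \max_{v_i \in V}\; \min_{v_j \in T_{i_k}} w\bigl(opt(G, v_i, v_j, i_k)\bigr)
\quad \text{(Figure 6: total information exchange)}

G_{imn} = \arg\min_{G' \subseteq G,\; G' \text{ feasible}} M(G')
\quad \text{(Figure 7: constraint optimisation problem)}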

4. Results
In this section we present and discuss our results from an initial investigation of the feasibility and optimisation problem. In applying our approach to the test scenarios, we aimed to demonstrate that our algorithm is capable of supporting the design of multi-agency incident management networks.

4.1 Feasibility problem
Considering a set of information requirements of different stakeholders, the first step of the feasibility problem is to eliminate the network configurations not meeting the constraint conditions of Table 3. For the further analysis we define an exceedance function for our network (c.f. Figure 8).

Figure 8: Exceedance function

Legal and classification constraints do not allow for the presentation of the European context depicted in Figure 1. We generalised the model for this paper by modelling a network of 100 information nodes and two main information types; namely, telemetry: remote measurement of pilot heart beat; and video: cabin situation. For our experiments we used the following – generalised – parameters (a minimal simulation sketch follows the list):

communication strategies: telemetry – one‐to‐any; video – one‐to‐every

communication characteristics (weights of the network edges): maximum information delay for transmission of information on the respective link; telemetry – 0.2 seconds; video – 1 second

number of network nodes: 100
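As referenced above, a minimal simulation sketch for these parameters: the exceedance function itself is defined in Figure 8 (an image in the original), so the sketch assumes it is the probability that a transmission's delay exceeds the per-type maximum, estimated here by Monte Carlo under an assumed Gaussian delay model. Only the two delay bounds are taken from the paper.

import random

MAX_DELAY = {"telemetry": 0.2, "video": 1.0}    # seconds (from the paper)

def exceedance(info_type, mean_delay, delay_variance, trials=100_000):
    """Estimated probability that the link delay exceeds the allowed maximum."""
    sigma = delay_variance ** 0.5
    bound = MAX_DELAY[info_type]
    hits = sum(random.gauss(mean_delay, sigma) > bound for _ in range(trials))
    return hits / trials

# Sweep mean delay and variance to obtain a spectrum like Figure 9.
for mean in (0.05, 0.10, 0.15, 0.20):
    for var in (0.001, 0.005, 0.01):
        print(f"telemetry mean={mean:.2f} var={var}: "
              f"exceedance={exceedance('telemetry', mean, var):.3f}")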

4.1.1 Acceptable network configurations
Figure 9 depicts the simulation results (Matlab/Simulink) as a spectrum defined by the mean time of delay (dependent on the information type) and the delay variance based on the modelled communication channels (upper chart – video exceedance; lower chart – telemetry exceedance). The spectrum represents the exceedance function values for different network configurations. Figure 9 allows for the selection of acceptable network configurations per information type. For example, Configuration 1 is acceptable for both information types, while Configuration 2 is acceptable for telemetry information but less acceptable for the video link.

4.1.2 Impact of communication strategy
The following analyses the impact of the communication strategy on the exceedance function. Figure 11 depicts the camera delay (video) exceedance function for the above-mentioned communication strategies.



It is obvious that the 'one-to-any' connection strategy imposes a lower performance requirement than the 'one-to-every' strategy. This can be shown analytically (c.f. Figure 10); the 'one-to-any' strategy will provide lower weights to the links/paths as it is concerned with connecting to the closest source.
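Figure 10's analytical statement is an image in the original; a plausible form consistent with the argument above is that the one-to-any bound can never exceed the one-to-every bound, because a minimum over the source set is bounded by the corresponding maximum:

\min_{v_j \in T_{i_k}} w\bigl(opt(G, v_i, v_j, i_k)\bigr)
\;\le\;
\max_{v_j \in T_{i_k}} w\bigl(opt(G, v_i, v_j, i_k)\bigr)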

Figure 9: Delay variance vs mean delay per information type

Figure 10: Impact of communication strategy

This analytical result was validated through a set of experiments. For a given set of 100 nodes, mean delay parameters were obtained by evaluating different configurations. For each configuration the exceedance function was calculated for the different information types. Figure 11 depicts the results for 'video' transmission and both connection strategies. Both exceedance functions demonstrate a similar behaviour and can be interpreted as lower and upper bounds. The higher performance requirements for the "one-to-every" strategy can be derived from the fact that it requires a lower mean delay. The lower and upper bounds allow for the performance estimation of different network configurations per associated mean delay value.

Figure 11: Exceedance vs mean delay – video information type




4.2 Moving towards optimisation
We introduced our design process as a discrete optimisation problem in Figure 7. We can further derive from the optimisation function (Figure 6) that the fully connected graph represents the global performance minimum. Performance improvements can be achieved through the removal of non-optimal edges and paths. This elimination increases the weights of the remaining links and, thus, maximises the information performance function. The resulting optimal path identification process exhibits a typical Bellman optimality property: the sub-path of every optimal path is optimal. This allows for the specification of criteria for an optimisation algorithm and their analytical presentation in Figure 12:

Removal of a non‐optimal link from the graph has no impact on the (non‐)optimality of other links

Removal of a non-optimal link from the graph does not influence w_info

If G' denotes the graph G where all non-optimal links are removed, then M(G) = M(G').


Figure 12: Arguments for (non-)optimality

Based on these criteria we devised a 'branching procedure' for approaching the optimisation problem on our network by eliminating links of the fully connected graph. Figure 13 shows the results of this initial optimisation process step for our demonstration experiment. This demonstration comprised 10 European airports and one information type. The weights represent a generalised 'hardness' measure for the information transmission. In particular, the weights were split into three classes (i.e. small – medium – high) according to expert judgement. This resulted in a network with the following weight distribution: small – 10 links; medium – 20 links; high – 60 links. Figure 13 presents the effectiveness of the 'branching procedure' as a connectivity function for two of the weights (i.e. w2: medium and w3: high). Here, connectivity is defined as the ratio of the number of edges in the resulting graph (i.e. incident management network) to the number of edges in the full graph.
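The sketch below illustrates the idea of the branching procedure on a toy version of this experiment: starting from a fully connected graph, every edge that lies on no minimum-weight path is removed, and the connectivity ratio just defined is reported. It is not the authors' implementation; the ten node names and the random weight classes are placeholders for the expert-judgement values.

import heapq, itertools, random

def dijkstra(nodes, w, src):
    """Minimum path weights from src; w maps directed pairs (u, v) to weights."""
    dist = {n: float("inf") for n in nodes}
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v in nodes:
            if (u, v) in w and d + w[(u, v)] < dist[v]:
                dist[v] = d + w[(u, v)]
                heapq.heappush(pq, (dist[v], v))
    return dist

nodes = [f"AP{i}" for i in range(10)]           # 10 hypothetical airports
w = {}
for u, v in itertools.combinations(nodes, 2):   # fully connected graph
    w[(u, v)] = w[(v, u)] = random.choice([1.0, 3.0, 9.0])  # small/medium/high

dist = {n: dijkstra(nodes, w, n) for n in nodes}
# Keep an edge (u, v) only if it lies on some minimum-weight s-t path, i.e.
# dist[s][u] + w(u, v) + dist[v][t] == dist[s][t] for some pair (s, t).
optimal = {(u, v) for (u, v) in w
           if any(abs(dist[s][u] + w[(u, v)] + dist[v][t] - dist[s][t]) < 1e-9
                  for s in nodes for t in nodes)}
ratio = len(optimal) / len(w)                   # connectivity as defined above
print(f"connectivity ratio after pruning: {ratio:.2f}")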

Figure 13: Connectivity dependence



The dependency of the connectivity level on the configurations of w2 (i.e. medium) and w3 (i.e. high) can be derived from Figure 13. It is evident that the lowest level of connectivity is obtained when the ratio w2/w3 is minimal. Connectivity levels are separated by linear border functions. This linear relationship can be used to further specify limits on the number of supported links by a specific node and allows for the assessment of the internal network structure (i.e. channels, quality of connections).

5. Conclusions and future work
In this paper we addressed an AVSIM capability from a situation management perspective. In order to address the challenge of designing such a capability for the future ATM System, we presented a graph-theoretic approach and formulated the information-centric network design task as a feasibility and optimisation problem. This research is part of a preparatory action for the GAMMA project that aims to fill the research void by identifying architecture components and capabilities for a system-wide security management approach. In particular we

identified the conceptual building blocks of a situation management capability and formulated it as a graph‐/network problem;

developed an interpretation of network characteristics to support the system design; and

demonstrated and validated the analytical results in a set of simulation experiments based on the analysis of the European ATM environment.

At present we are concerned with the refinement of the proposed model. The results help to model advanced information fusion and communication strategies between the different stakeholders. This allows for the specification of a data model/dictionary for security information in SWIM. Moreover, we are able to support the identification of security requirements on future communication technologies (e.g. wireless data-link) by assessing different implementation proposals in terms of their weight ranking/impact. In this paper we focused on feasibility and an initial approach to optimisation. Legal and data classification constraints did not permit us to deploy our demonstration capability in the ATS/ATM environment. The conceptual approach developed and demonstrated in this paper needs to be extended to map and to validate the complete spectrum of aviation security incident management operations and information needs as part of future research. Our model forms a principal contribution to the work to be conducted under the umbrella of GAMMA. GAMMA will further allow us to generalise the situation management concept to resilience monitoring and response capabilities of the future ATM System.

Acknowledgements
The authors would like to thank Professors Garik Markarian and Georg Kolev for the conceptual discussions and formulation of the communication network problem. Disclaimer: The views expressed herein are the authors' own and do not reflect a EUROCONTROL or GAMMA consortium position or policy.

References
Alfredson, J. (2007) Differences in Situational Awareness and how to manage them in the development of Complex Systems, PhD Thesis, Linköping University.
GAMMA Consortium (2012) Global ATM Security Management, Project Proposal, Seventh Framework Programme, Call SEC-2012-2.2-2.
International Civil Aviation Organisation (ICAO) (2011) Annex 17 to the Convention on International Civil Aviation – Security – Safeguarding International Civil Aviation against acts of unlawful interference, ICAO, Montreal.
Jakobson, G., Lewis, L., Matheus, C., Kokar, M., and Buford, J. (2005) "Overview of Situation Management at SIMA 2005", Military Communications Conference, 2005. MILCOM 2005. IEEE, vol.3, pp.1630-1636.
Joint Planning and Development Office (JPDO) (2007) Concept of Operations for the Next Generation Air Transportation System, Technical Report version 2.0, Washington D.C.
Joint Planning and Development Office (JPDO) (2011) Targeted NextGen Capabilities for 2025, Technical Report, Washington D.C.
Koelle, R., Markarian, G. and Tarter, A. (2011) Aviation Security Engineering – A Holistic Approach, Artech House, Boston/London.
Koelle, R. (2012) A Study into Situation Management applied to Time-Critical Decision-Making in Aviation Security, PhD thesis, Lancaster University.
Koelle, R. and Tarter, A. (2012) "Towards a Distributed Situation Management Capability for SESAR and NextGen", Integrated Communications, Navigation and Surveillance Conference (ICNS 2012), pp.O6-1-O6-12.
SESAR Consortium (2007) The ATM Target Concept, SESAR Definition Phase - Deliverable 3, SESAR, Brussels.
SESAR Consortium (2008) SESAR Master Plan, SESAR Definition Phase - Deliverable 5, SESAR, Brussels.



Exercising State Sovereignty in Cyberspace: An International Cyber-Order Under Construction?
Andrew Liaropoulos
University of Piraeus, Department of International and European Studies, Piraeus, Greece
andrewliaropoulos@gmail.com

Abstract: Cyberspace is erroneously characterized as a domain that transcends physical space and thereby is immune to state sovereignty and resistant to international regulation. The purpose of this paper is to signify that cyberspace, in common with the other four domains (land, sea, air and outer space) and despite its unique characteristics, is just a reflection of the current international system, and thereby is largely affected by the rules that characterize it. The issue of state sovereignty in cyberspace is critical to any discussion about future regulation of cyberspace. Although cyberspace is borderless and is characterized by anonymity and ubiquity, recent state practices provide sufficient evidence that cyberspace, or at least some components of it, are not immune from sovereignty. The increasing use of Internet filtering techniques by both authoritarian regimes and democracies is just the latest example of attempting to control information flows. Cyberspace is non-territorial, but in sharp contrast to the land, sea, air and outer space, cyberspace is not a part of nature; it is human-made and therefore can be unmade and regulated. States have continuously emphasized their right to exercise control over the cyber-infrastructure located in their respective territory, to exercise their jurisdiction over cyber-activities on their territory, and to protect their cyber-infrastructure against any trans-border interference by other states or by individuals. As a result, states are filtering and monitoring cyber-bytes. Over the past years, a growing number of states have published national cyber-policies and established cyber-centers that aim to protect the national cyber-infrastructure and control their citizens' access to information. The issue of state sovereignty in cyberspace raises critical questions about the need to regulate the cyber domain and gradually reach an international cyber-order.

Keywords: cyberspace, sovereignty, state sovereignty, international law, international cyber-order

1. Introduction
Over the past three decades, cyberspace has expanded and affected many aspects of human life. States, organizations and individuals have extensively exploited the opportunities that cyberspace offers. The cyber domain has challenged the traditional political, social and economic structures of the international society. It has radically increased the speed, volume and range of communications, and thereby largely altered the way states are governed, the way companies deliver services and public goods, the way individuals interact and build social networks on the Internet, and the way citizens participate in civil society (Betz and Stevens, 2011: 9-11). Along with these developments, the emergence of cyberspace has also raised major challenges to individual and collective security. Critical national infrastructure is vulnerable to cyber-attacks, the world economy is threatened by cyber-crime and cyber-espionage, and individuals are terrorized by hackers (Carr, 2011). In the cyber domain, cyber-attacks cross national borders, are hard to trace, and affect both civilian and military networks. Militaries, terrorist groups and even individuals now have the capability to launch cyber-attacks, not only against military networks, but also against critical infrastructures that depend on computer networks (Liaropoulos, 2011b: 541). News reports are replete with cases where private and public communications were disrupted, banking systems were manipulated and even military communication systems were destroyed. A number of questions is inevitably raised. How will states adapt to cyberspace? How does the condition of anarchy affect international politics in cyberspace? Is it possible for states to exercise their authority and control in a borderless world? In order to deal with these questions we need first to define the security challenges that the Westphalian state faces in cyberspace. In a later phase we will examine a key concept in international politics, that of sovereignty, and apply it to cyberspace. By conceptualizing sovereignty, we will be able to address a number of critical issues regarding the establishment of common principles and norms in cyberspace. Paraphrasing Hedley Bull's concept of international order, we could argue that exercising state sovereignty in cyberspace is a necessary step for establishing an international cyber-order. According to Bull, states act in such a way as to preserve international order, because this order is in their own interest (Bull, 1977). The question we need to ask is whether states will act in the same way in cyberspace, in order to preserve an international cyber-order.

136



2. Security in cyberspace
A few recent examples of cyber-conflicts vividly illustrate the challenges that the Westphalian state system faces in cyberspace. In April 2007, the Estonian government's decision to move a Soviet-era war memorial, the Bronze Soldier, triggered a cyber-conflict in the form of a three-week wave of distributed denial-of-service (DDoS) attacks that crippled the country's information technology infrastructure (Blank, 2008: 227-247). The cyber-attacks temporarily disrupted the Estonian communications networks by targeting the government, newspapers, mobile phones, emergency response systems and banks. The targets included the Estonian presidency, its parliament and many government ministries. Although the cyber-attacks cannot be attributed to a specific actor, it is widely believed in Estonia that Moscow was behind these attacks. Russia claimed that the attacks came from cyber-patriots and not on the order of the Russian government (Crosston, 2011: 104-5). Regardless of the true identity of the attacker, the important issue is that the inability to trace the origin of the attack (the attribution problem) hinders any attempt at retaliation (Tsarougias, 2012). Likewise, during the conflict that broke out in August 2008 between Russia and Georgia over South Ossetia, cyber-attacks were launched against Georgian governmental websites, media, and communication services (Korns and Kastenberg, 2009). As with the Estonian case, there is no proof of who was behind these attacks. Georgia accused Russia, claiming that the traffic routing pointed to the Russian Business Network (RBN). The Georgian case clearly shows that cyber-attacks that take place in a borderless world, where the traditional law of armed conflict cannot be applied, might be a very handy strategy when states choose to exercise coercive diplomacy. The barriers to entry to cyberspace are lowering due to the proliferation of low-cost information and communication technology (ICT), and therefore the cyber option seems to be a very attractive and less costly one, compared to the use of traditional military means. Cyber-attacks can take many forms, and the examples of Ghost Net and the Google hacking are indicative of the above. Both incidents have been related to China and raise many questions regarding the way the victims could respond. Ghost Net was a massive cyber-espionage operation that was discovered by the Information Warfare Monitor in March 2009. The operation used malware and attacked non-governmental organizations and embassies working on Tibetan issues in 103 countries (Information Warfare Monitor, 2010). In January 2010, Google announced that a computer attack originating from China had penetrated its corporate infrastructure and stolen information from its computers, most likely source code. The attacks also targeted Gmail accounts of human-rights activists and infiltrated the networks of 33 companies (Thomas, 2010: 101-33 and Morozov, 2011: 1-33). The borderless and complex nature of cyberspace might explain why Beijing regards Google as an element of US power (Klimburg, 2011: 52) and social networks as a threat to national security. The latest and most definite cyber-attack is the Stuxnet worm. Stuxnet is malicious software (malware) that was designed specifically to strike the Iranian nuclear facility at Natanz. It spread via Microsoft Windows and targeted Siemens industrial software.
The value of Stuxnet lies not so much in its technical characteristics as in the political and strategic context within which it operated (Farwell and Rohozinski, 2011: 23-40). The scenario of launching an air strike to stop or slow down Iran's nuclear program has troubled security experts for years. The outcome of such an operation would be doubtful and the risks for regional and international security potentially disastrous. A preventive air strike on Iranian nuclear facilities would most probably start a conflict in the Middle East and would be unlikely to prevent the eventual acquisition of nuclear weapons by Iran. So did Stuxnet offer a better and risk-averse alternative to a conventional attack? Even better, what if it was launched by a criminal organization or by a group of patriot-hackers? What if nations outsource cyber-attacks to third parties, to cyber-mercenaries, thereby bypassing the attribution issue? The above incidents demonstrate that malicious actors, state and non-state, have the ability to compromise and control millions of computers that belong to governments, private enterprises and ordinary citizens. These developments have challenged social scientists to redefine key concepts like politics (Karatzogianni, 2009, Chadwick and Howard, 2009 and Morozov, 2011), power (Dunn Cavelty et al., 2007, Betz and Stevens, 2011 and Nye, 2011), ethics (Dipert, 2010 and Liaropoulos, 2011a), international law (Tikk et al., 2010, Hughes, 2010 and Schmitt, 2013) and security (Dunn Cavelty et al., 2007, Kramer et al., 2009 and Ryan, 2011). There is a growing body of literature that covers in depth many cyber-related issues, but anyone attempting to untangle the complexities of cyberspace cannot afford to ignore the concept of sovereignty. After all, state sovereignty largely defines the current international order. The United Nations is based on the principle of sovereign equality of all its members, and preserving state sovereignty is a top priority for both

137


international organizations and individual states (Franzese, 2009: 7). There are two reasons for concentrating on the concept of sovereignty. First, to explore how the state, being a territorial entity, can exercise sovereignty, and thereby authority and control, in a non-territorial and borderless domain like cyberspace. Second, by framing the debate on sovereignty in cyberspace, we will develop a useful framework to address other cyber-related issues. As a preliminary to this discussion, however, some exegesis of the key terms cyberspace and sovereignty is required. This may seem a semantic exercise, but semantics are important; how words are understood defines expectations, and expectations are important in shaping policy.

3. Cyberspace and sovereignty
The relevant literature offers various definitions of cyberspace, depending on the conceptual understanding of the author. A definition that is widely accepted among cyber-experts is that of Daniel Kuehl. He defines cyberspace as a global domain within the information environment whose distinctive and unique character is framed by the use of electronics and the electromagnetic spectrum to create, store, modify, exchange, and exploit information via interdependent and interconnected networks using information-communication technologies (Kramer, 2009: 28). Cyberspace refers to the fusion of all communication networks, databases and information sources into a global virtual system. Cyberspace is composed of three layers. The first one is the physical layer that consists of electrical energy, integrated circuits, communications infrastructure, fiber optics, transmitters and receivers. The second layer is the software, meaning the computer programs that process information. The last and least concrete layer is that of data (Tabansky, 2011: 77-8). Cyberspace is non-territorial, but in sharp contrast to the land, sea, air and outer space, cyberspace is not a part of nature; it is human-made and therefore can be unmade and regulated (Herrera, 2007). The modern system of communications seems boundless, but it is not. Cyberspace is bounded by the existing physical structures. Much of what actually constitutes cyberspace is located in the sovereign territory of states (Betz and Stevens, 2011: 35). In common with the Westphalian era, states will always try to control the information flow. Cyber-bytes cannot escape this practice (Demchak and Dombrowski, 2011: 41). Recent developments demonstrate that states are trying to overcome the border paradox and delimit borders by asserting sovereignty over cyberspace (Von Heinegg, 2012). Sovereignty is regarded as a fundamental concept in the current international order. Sovereignty signifies authority within a distinct territorial entity, but also asserts membership of the international system. Defining what constitutes sovereignty in international politics can be a puzzling task. A useful typology of sovereignty for the purposes of our analysis is provided by Stephen Krasner. He identifies four ways in which sovereignty can be understood: domestic sovereignty, interdependence sovereignty, international legal sovereignty and Westphalian sovereignty (Krasner, 1999: 3-25). Domestic sovereignty refers to the way public authority is organized within a state and to the level of effective control these authorities can exercise. Political authorities, whether organized in a parliamentary or presidential system, in a monarchical or republican way, or in an authoritarian or democratic way, are responsible for regulating and controlling developments within their own territory. Interdependence sovereignty relates to the ability of public authorities to control trans-border movements, the flows of people, materials and ideas across borders. If a state fails to regulate what passes its borders, it will also fail to control what happens within them. Therefore, loss of interdependence sovereignty can affect domestic sovereignty, in terms of inefficient control. Advocates of globalization argue that there is a number of activities, like environmental pollution, currency crises and terrorism, where the state's control is declining. International legal sovereignty refers to the mutual recognition of states in the international system.
Finally, Westphalian sovereignty highlights the right that states have to determine their political life and makes reference to the exclusion of external actors from influencing or determining domestic authority structures (Krasner, 1999: 11‐25).

4. Exercising sovereignty in cyberspace
Krasner's typology of sovereignty will assist us to debunk the myth that cyberspace is immune from state sovereignty. This myth is based on a widely-held belief that cyberspace is not a physical place and therefore defies the rules that apply to land, sea, air and outer space. Actions in the cyber domain seem to take place outside the state in a virtual manner, but their implications affect the real world, inside states. Getting back to the description of cyberspace and the physical layer, it is obvious that cyberspace requires a physical infrastructure in order to operate. This infrastructure is terrestrially based and therefore not immune from



state sovereignty (Von Heinegg, 2012: 9). Moreover, cyberspace cannot operate in a chaotic manner; it needs regulation and oversight. Companies that operate in cyberspace need the laws of the state to operate their business (Wu, 1997). Finally, states need to be present in cyberspace and exercise control for reasons of national security. National critical infrastructures, like banking and finance, oil, gas and electricity, water and transportation, all depend upon computer networks to operate and therefore cannot escape the control of the state (Franzese, 2009: 13-4). Regarding domestic sovereignty, cyberspace has affected domestic authority and control in both liberal democracies and authoritarian regimes. There is a growing number of states that attempt to control their citizens' access to information, on the basis that certain types of content constitute a threat to the domestic order or national security (Deibert, 2009). According to a report by the OpenNet Initiative, the flow of information in cyberspace in many Muslim countries emulates the flow of information in real space (Noman, 2011: 2). In the West, the threat of terrorism serves in a similar way. Western governments, mainly the US and EU members, have increased their filtering and surveillance techniques and limited anonymity in cyberspace. The unrestricted flow of information in cyberspace has definitely challenged interdependence sovereignty. Terrorist use of the Internet as a means of propaganda is a classic case where the state is unable to control what passes its borders. Governments have attempted to restore control by removing videos with terrorist content from the Web (Betz and Stevens, 2011: 69-70). Cyberspace poses no challenge to international legal sovereignty. The claim that cyberspace, due to its unique nature, should acquire international legal status is not popular among states (Franzese, 2009: 11). The most essential challenge that cyberspace poses is to Westphalian sovereignty. As stated above in the cases of the cyber-conflicts in Estonia and Georgia, as well as in the case of Stuxnet, a cyber-attack on another country's information infrastructure constitutes a violation of Westphalian sovereignty. States have to solve a number of critical issues, both of a technical and a political nature, in order to successfully establish sovereignty, and thereby order, in cyberspace. The lack of attribution is a major obstacle to exercising sovereignty. Unless states gain the ability to identify actors and trace back cyber-attacks, any claim to exercise power in cyberspace will be fragile. Creating an international investigative body, modeled after the International Atomic Energy Agency (IAEA), to review and investigate cyber-attacks might not answer the attribution problem, but definitely points in the right direction (Austin and Gady, 2012: 12). Reaching a consensus about regulating cyberspace is another major issue. The role of great cyber-powers like the USA, Russia and China is critical. Washington, Moscow and Beijing are trying to implement their policies in multilateral and international fora where cyber issues are debated. According to US officials, the United States faces a real danger of cyber-attacks from both state and non-state actors. Such attacks could be as destructive as the terrorist attack of 9/11 and could virtually paralyze the state. US officials stress the need to develop offensive capabilities to defend the nation and its allies.
China and Russia do not share the US position that existing international laws should apply to cyberspace. China and Russia have argued that new rules and laws need to be created. In September 2011, the two countries submitted to the UN General Assembly a proposal for a code of conduct in cyberspace. The proposed code calls for states to respect domestic laws and sovereignty and to settle disputes within the framework of the United Nations. Cyberspace is another example where great power politics are exercised. China views its cyberwarfare capabilities as a powerful asymmetric tool to deter the US. For Russia, the ability of states to have control over the information space is intrinsic. Moscow has been actively proposing international cyber security legislation that constrains the free flow of information (Giles, 2012: 2). Many states in the West do not share the same views as Russia, since they view cyberspace as a means to spread democracy and freedom.

5. Conclusion
To conclude, cyberspace is not immune to state sovereignty, but at the same time states still have a long way to go until they establish an effective mechanism of authority and control. Cyberspace is rather a reflection of the current international system in a new domain. Therefore, international politics in cyberspace will be shaped by state rivalry and geopolitical concerns, as well as by common interests and existing norms. Cyberspace is a domain where national interests clash, but also where states cooperate. Although it is premature to refer to an international cyber-order, it is fair to say that such a process seems to be underway. Echoing Hedley Bull's concept of international order, we need to examine closely the role of the following 'institutions' in cyberspace: the balance of power, international law, diplomacy, war and the role of great powers. Bull's work on international order could serve as a useful guide for international relations scholars to investigate the



Obviously, a short paper like this can only scratch the surface; I urge the reader to delve into the reference list for further information.

Acknowledgements
The author would like to thank his colleagues: Professor P. Ifestos, for insightful discussions regarding the concept of sovereignty in international relations; Associate Professor P. Liacouras, for clarifying the legal aspects of territorial sovereignty; and Dr. I. Konstantopoulos, for his general comments and research support.

References
Austin, G. and Gady, F-S. (2012) Cyber Détente Between the United States and China: Shaping the Agenda, East-West Institute, New York.
Betz, D.J. and Stevens, T. (2011) Cyberspace and the State: Toward a Strategy for Cyber-Power, Routledge / International Institute for Strategic Studies, Oxon.
Blank, S. (2008) "Web War I: Is Europe's First Information War a New Kind of War?", Comparative Strategy, Vol 27, No. 3, pp. 227-47.
Bull, H. (1977) The Anarchical Society, Macmillan, London.
Carr, J. (2011) Inside Cyber Warfare, O'Reilly, Sebastopol.
Chadwick, A. and Howard, P. (eds) (2009) Routledge Handbook of Internet Politics, Routledge, London and New York.
Crosston, M. (2011) "World Gone Cyber MAD: How Mutually Assured Debilitation is the Best Hope for Cyber Deterrence", Strategic Studies Quarterly, Vol 5, No. 1, pp. 100-16.
Deibert, R. (2009) "The Geopolitics of Internet Control: Censorship, Sovereignty and Cyberspace", in Chadwick, A. and Howard, P. (eds) Routledge Handbook of Internet Politics, Routledge, London and New York.
Demchak, C. and Dombrowski, P. (2011) "Rise of a Cybered Westphalian Age", Strategic Studies Quarterly, Vol 5, No. 1, pp. 32-61.
Dipert, R.P. (2010) "The Ethics of Cyberwarfare", Journal of Military Ethics, Vol 9, No. 4, pp. 384-410.
Dunn Cavelty, M. et al. (2007) Power and Security in the Information Age: Investigating the Role of the State in Cyberspace, Ashgate, Burlington.
Farwell, J. and Rohozinski, R. (2011) "Stuxnet and the Future of Cyber War", Survival, Vol 53, No. 1, pp. 23-40.
Franzese, P.W. (2009) "Sovereignty in Cyberspace: Can It Exist?", Air Force Law Review, Vol 64, pp. 1-42.
Giles, K. (2012) Russian Cyber Security: Concepts and Current Activity, Chatham House, London.
Herrera, G.L. (2007) "Cyberspace and Sovereignty: Thoughts on Physical Space and Digital Space", in Dunn Cavelty, M. et al., Power and Security in the Information Age: Investigating the Role of the State in Cyberspace, Ashgate, Burlington.
Hughes, R. (2010) "A Treaty for Cyberspace", International Affairs, Vol 86, No. 2, pp. 523-41.
Information Warfare Monitor (2010) Shadows in the Cloud: Investigating Cyber Espionage 2.0, JR03-2010, Shadowserver Foundation, http://shadows-in-the-cloud.net, last accessed 20.10.2012.
Karatzogianni, A. (ed.) (2009) Cyber Conflict and Global Politics, Routledge, London and New York.
Klimburg, A. (2011) "Mobilizing Cyber Power", Survival, Vol 53, No. 1, pp. 41-60.
Korns, S. and Kastenberg, J. (2009) "Georgia's Cyber Left Hook", Parameters, Vol 38, No. 4, pp. 60-76.
Kramer, F.D. et al. (2009) Cyberpower and National Security, Potomac Books, Washington D.C.
Krasner, S.D. (1999) Sovereignty: Organized Hypocrisy, Princeton University Press, New Jersey.
Liaropoulos, A.N. (2011a) "War and Ethics in Cyberspace: Cyber-Conflict and Just War Theory", in Ryan, J. (ed.) Leading Issues in Information Warfare & Security Research, Vol 1, Academic Publishing International Ltd, Reading.
Liaropoulos, A.N. (2011b) "Power and Security in Cyberspace: Implications for the Westphalian State System", in Panorama of Global Security Environment, Centre for European and North American Affairs, Bratislava.
Morozov, E. (2011) The Net Delusion: The Dark Side of Internet Freedom, Public Affairs, New York.
Noman, H. (2011) In the Name of God: Faith-Based Internet Censorship in Majority Muslim Countries, OpenNet Initiative.
Nye, J.S. (2011) The Future of Power, Public Affairs, New York.
Ryan, J. (ed.) (2011) Leading Issues in Information Warfare & Security Research, Vol 1, Academic Publishing International Ltd, Reading.
Schmitt, M.N. (ed.) (2013) Tallinn Manual on the International Law Applicable to Cyber Warfare, Cambridge University Press, Cambridge.
Tabansky, L. (2011) "Basic Concepts in Cyber Warfare", Military and Strategic Affairs, Vol 3, No. 1, pp. 75-92.
Thomas, T. (2010) "Google Confronts China's Three Warfares", Parameters, Vol 40, No. 2, pp. 101-113.
Tikk, E. et al. (2010) International Cyber Incidents: Legal Considerations, NATO CCD COE Publications, Tallinn.
Tsarougias, N. (2012) "Cyber Attacks, Self-Defense and the Problem of Attribution", Journal of Conflict & Security Law, Vol 17, No. 2, pp. 229-244.
Von Heinegg, W.H. (2012) "Legal Implications of Territorial Sovereignty in Cyberspace", in Czosseck, C. et al. (eds) 2012 4th International Conference on Cyber Conflict, NATO CCD COE Publications, Tallinn.
Wu, T.S. (1997) "Cyberspace Sovereignty? The Internet and the International System", Harvard Journal of Law & Technology, Vol 10, No. 3, pp. 647-66.



SCADA Threats in the Modern Airport
John McCarthy1 and William Mahoney2
1 Cranfield University, UK
2 University of Nebraska at Omaha, USA
John.mccarthy@servicetec.com
wmahoney@unomaha.edu
Abstract: Critical infrastructures are ubiquitous in the modern world and include electrical power systems, water, gas, and other utilities, as well as trains and transportation systems including airports. This work is concerned with the Supervisory Control And Data Acquisition (SCADA) systems that are at the heart of distributed critical infrastructures within airports. Modern airports are highly competitive, cost-driven operations that offer a range of public and private services. Many airport systems, such as car parking and building control systems, are SCADA controlled, with sensors and controllers monitored over a large, geographically dispersed area. To increase efficiency and to achieve cost savings, SCADA systems are now being connected to information technology networks using TCP/IP. The merging of SCADA systems into the main IT network backbone is presenting new security problems for IT security managers. Historically, proprietary solutions, closed systems, ad-hoc design and implementation, and long system life cycles have led to significant challenges in assessing the true security posture of SCADA systems. To address this, this work examines how SCADA systems are being integrated into the IT network within a modern airport. From this standpoint we are able to identify ways in which SCADA may be vulnerable to malicious attack via the IT network. The results of this work could offer solutions to increase security within airports.
Keywords: distributed security, airport terminals, control systems, SCADA

1. Introduction
Supervisory Control And Data Acquisition (SCADA) systems act as the hidden computer equipment behind large infrastructures that are essential to maintaining our quality of life. These infrastructures include electrical power grids, water purification and delivery, gas, and other utilities, as well as trains and transportation systems. Legacy SCADA systems, planned and implemented possibly decades ago, were either not designed to be secure or were designed with "security through obscurity": in the design and analysis of these systems, features such as physical isolation and technical uniqueness greatly reduced the possibility of cyber attacks. But this is no longer true of newly designed SCADA systems, and it is no longer as true of legacy systems that might now be connected to corporate networks. With long product lifecycles, SCADA systems often become a patchwork of different hardware, operating systems, applications, and software. Meanwhile, due to the continuous availability requirements of such arrangements, operating system and software updates are often not applied. Over time the system components may no longer even be supported for updates, leading to potential vulnerabilities that can be exploited at the component level. At the network level, vulnerabilities arise inadvertently, through the usual misconfigured firewall or router, but also deliberately, via interconnections between a SCADA network and the company or utility IT structure. And while the replacement of older devices with new ones solves the problem of missing software updates, newer devices allow low cost Internet Protocol (IP) based communications, nullifying the uniqueness that once provided some of the security. Due to the nature of the computing equipment – often legacy software and hardware – as well as the criticality of the services which these systems control, some SCADA systems are coming under increasing regulatory oversight. The International Electrotechnical Commission (IEC) standard 61850 (IEC 2012) is one example; in the United States, NERC (the North American Electric Reliability Corporation) also produces standards in this area (NERC 2012). Nearly 1,700 of the 3,200 power utilities in the United States have some type of SCADA system in place, and it is estimated that one quarter of these utilities have no separation between the corporate network and the system control network (Lemos 2009). NIST Special Publication 800-82 (NIST 2008) is designed for securing SCADA systems, and the NIST 800-53 guidelines (NIST 2007) have been extended to include SCADA-related improvements in the specification of controls, along with a wide discussion of control system vulnerabilities; the 800-53 standards now include a description of their applicability to control systems. Other SCADA implementations are coming under greater regulation as well: for example, the petroleum and gas refining industries are subject to regulatory issues and are now asked to secure pipelines and other infrastructure under the API 1164 standard (API 2009).
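To make the point about IP-based communications concrete, the following minimal sketch (ours, not taken from any particular deployment) hand-builds a Modbus/TCP "read holding registers" request, one of the most common SCADA polling operations. The device address, unit id and register range are hypothetical; the point is that classic Modbus/TCP carries no authentication or encryption, so any host that can route packets to TCP port 502 can query the controller.

```python
import socket
import struct

def read_holding_registers(host, unit=1, start=0, count=10, port=502):
    """Send a raw Modbus/TCP 'read holding registers' (function 0x03) request."""
    # PDU: function code, starting register, number of registers.
    pdu = struct.pack('>BHH', 0x03, start, count)
    # MBAP header: transaction id, protocol id (0), length (unit id + PDU), unit id.
    mbap = struct.pack('>HHHB', 1, 0, len(pdu) + 1, unit)
    with socket.create_connection((host, port), timeout=3) as sock:
        sock.sendall(mbap + pdu)
        return sock.recv(256)   # raw response; register values follow the header

if __name__ == '__main__':
    # Hypothetical PLC address; no credentials of any kind are required.
    print(read_holding_registers('192.168.1.10'))
```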



But these regulations have not entered much of the transportation sector. An obvious distributed system using SCADA would be the railroad industry. However, an equally, if not more, important transportation system which relies on SCADA is the world's airports. In particular, airport SCADA systems control a wide variety of terminal facilities and are not currently scrutinized with respect to regulatory standards. This paper is concerned with an examination of the deployment of SCADA systems in a major airport. The paper is organized as follows: the next section provides a brief set of examples of SCADA security issues to demonstrate the severity of the problem. Section three relates the SCADA threat to the modern airport industry. The authors recently visited a major North American airport facility and comment on the visit in section four. Suggestions for increased security are offered in section five, and our conclusions follow.

2. SCADA threats and breaches
There are many examples of SCADA systems gone awry that illustrate the severity of the problem, as well as the wide variety of critical infrastructure devices controlled by these types of systems. Often-cited SCADA critical infrastructure failures include the following:
 A water treatment plant near Harrisburg, PA was attacked in 2006. The hacker planted malicious software in the control systems and could potentially have altered or stopped the operation of the treatment plant (ABC 2006).
 The water treatment facility in Queensland's Maroochy Shire was accessed by a disgruntled former employee named Vitek Boden, who used a wireless connection into the pumping and valve system to route millions of gallons of untreated sewage into a creek adjacent to a hotel (Wyld 2004).
 Another often cited example is the tram system in Poland. Four vehicles were derailed when a teenage boy hacked into the SCADA equipment controlling the track switches, using a modified television remote control (Leyden 2008).
 Similarly, a disruption of freight and commuter train traffic near Washington D.C. in August of 2003 was determined to have been caused by the signaling systems being infected with the Sobig virus (Krutz 2006).
 US investigators reportedly found evidence in computer logs discovered at Al Qaeda camps in Afghanistan showing that members spent time on websites offering software and programming instructions for the digital switches that run power grids (Kramarenko 2004).
 In March of 2007 the US Department of Homeland Security, in a demonstration widely disseminated on various video sharing sites, showed the remote destruction of a power generator. The generator was sent commands telling it to operate beyond the capabilities of its design (Meserve 2007).
 Telvent Canada Ltd., whose software and services are used to remotely administer and monitor large sections of the energy industry, warned customers that it was investigating a sophisticated hacker attack spanning its operations in the United States, Canada and Spain. The company said that on September 10, 2012 it learned that attackers had installed malicious software in "OASyS SCADA", a product that helps energy firms connect to "smart grid" technologies (Krebs 2012).
 Faulty software caused the gates holding back Torrens Lake, in South Australia, to open when not commanded to do so (Hale 2012). The gates remained open for two hours, completely draining the lake. Officials had purchased the software from Ottoway System Integration, a firm that went out of business only days after the incident. While this was not caused by foul play, outsiders with remote access to these types of control systems could cause similar damage.
SCADA networks are fundamental to our society and lifestyle, yet are infamously difficult to secure due to the complexity of their architectures. We next turn to the transportation industry, specifically to SCADA systems used at airports for terminal operations.

3. Modern aviation and cyber security
In the post-9/11 environment in the USA, there is a constant variety of threats, even though many of the security recommendations provided by the National Commission on Terrorist Attacks Upon the United States have been put into place. Leon Panetta, at the time Director of the Central Intelligence Agency, testified to the U.S. Senate on June 9, 2011. In his statement he acknowledged emerging technology threats:
There is no question that the whole arena of cyber attacks, developing technologies in the information area represent potential battlefronts for the future. I have often said that there is a strong likelihood that the next Pearl Harbor that we confront could very well be a cyber attack that cripples our power systems, our grid, our security systems, our financial systems, our governmental systems (Panetta 2011).



To emphasize this, Bob Cheong, Chief Information Security Officer of the Los Angeles Airport, reported that a variety of cyber-attacks have occurred in Los Angeles in the last several years: there were over 6,400 attempts to hack into a new file server two days after it was deployed; in a one-year period, nearly 59,000 Internet misuse and abuse attempts were blocked; and, in that same one-year period, 2.9 million hacking attempts were blocked (Cheong, 2011, p. 5). In relation to post-9/11 aviation security, the Transportation Security Administration (TSA) has focused on security checkpoints and on finding potential threats through bomb-sniffing technology, terrorist watch lists, increased use of in-flight security officers, full-body scanners, positive baggage matching, and hardened cockpit doors (Mann, 2011; Poole, Jr., 2008, p. 4). Interestingly, according to the security expert Bruce Schneier, these activities are not meant to actually secure travelers from would-be attackers, but are put in place to instill confidence: the more visible the perceived obstruction, the more confident the public can feel about flying (Mann, 2011). This opinion is shared by the authors. Beyond physical security at airports, and with respect to Secretary of Defense Panetta's views on the emergence of cyber attacks as a primary concern, many are turning their eyes to securing the technology that is utilized in the day-to-day operations of airports. Dominic Nessi, the Deputy Executive Director and Chief Information Officer of Los Angeles World Airports, acknowledges the challenges facing the information technology (IT) expert trying to secure an international airport (Nessi 2011):
Organizations, including airports, are rapidly trying to balance the desire for users to have mobile applications and mobile hardware with the new security risks that they bring. The bottom line is that the hardware and new application evolves faster than the preventative measures that an organization needs to take can be developed. … The makeup of an airport's system and the network total make airports a target. Because of the types of systems that we have in an airport, we're going to have a lot of exposure just by virtue of the system itself. We can mitigate most of our vulnerabilities through good cyber security measures (McAllister, 2011, p. 18).
In October of 2011, Mr. Nessi delivered an address to the Airports Council International of North America outlining the cyber security threats facing airports, the potential vectors that might be used in an attack, and tactics for securing known vulnerabilities. Among Nessi's threats were several focused on external airport operations, such as external airport or airline websites, concession point-of-sale, credit card transaction information, and passengers' wireless devices. However, the overall impact of cyber-attacks on systems external to airport operations is small when compared to attacks on systems required to perform internal airport operations. Nessi points out several potential targets within this realm, including access control and perimeter intrusion systems, eEnabled aircraft systems, radar systems, wireless and wired network systems, and network-enabled baggage systems. Obviously, a variety of vulnerabilities arise within cyberspace because of the humans, hardware, software, and connection points that provide access to such systems.
Nessi's system of assessing threats is similar to the cyber vulnerability assessment guidance of the United States Computer Emergency Readiness Team (US-CERT) and the National Institute of Standards and Technology (NIST). US-CERT has provided a "high level overview" of cyber vulnerabilities for control systems (US-CERT 2012). Within this overview, US-CERT includes the following vulnerabilities: wireless access points, network access points, unsecured SQL databases, poorly configured firewalls, interconnected peer networks with weak security, and several others.

4. An examination of a major hub airport in North America
Since the publication of Nessi's work there has been much discussion within the airport sector in relation to security measures. Action has been taken, and one of the authors of this paper sits on a panel commissioned by the Federal Aviation Administration to determine best cyber practice in airports. However, when examining a major hub airport in North America the authors found that the critical driver for increased security has been the implementation of Payment Card Industry (PCI) compliance regulations for secure credit card transactions. PCI has forced many airports to upgrade and improve security measures or face the loss of revenue from credit card transaction processing. Without this driver the increase in security measures would have been considerably slower. There was also a widely held belief that the SCADA systems in the airport were isolated from the main IT backbone. Often the car parking and baggage control systems were separated from the main IT network by



hardware firewalls. These firewalls were "assumed" secure by IT staff, and it was often unclear who had responsibility for managing and configuring them. Additional services could be added to the network without all relevant IT staff being aware of the changes. There appeared to be no overarching group or committee with a direct focus on cyber security measures, despite the considerable size of the airport. Security measures were left in multiple hands, and ad hoc systems were assumed isolated on the basis of previous hardware and software configurations, without ongoing checks and testing. One key element of PCI compliance is the use of penetration testing. This helps secure systems, at least from the casual port scanner. Regarding penetration testing and internal airport operations, there are examples of airport cyber security lapses and known weaknesses. AirTight Networks tested wireless security at fourteen airports in the United States, Canada and Asia (AirTight 2008). One of the study's findings was that "77 percent were non-hotspot (i.e. private) networks and of those, 80 percent were unsecured or using legacy WEP encryption, a fatally flawed protocol." These Wi-Fi networks encompassed ticketing systems, baggage systems, shops, and restaurants. The implication of such weaknesses is that a person could infiltrate these poorly secured systems and wreak logistical havoc, potentially bringing an airport to a standstill. Sri Sundaralingham of AirTight was quoted as saying:
Imagine the ripple effect at an airport like Heathrow or O'Hare if someone could work their way into the baggage transiting system and reroute luggage all over the world. It could bring the system to a grinding halt with both economic and security consequences.
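To make concrete what even a "casual port scanner" sees, the sketch below attempts plain TCP connections to a handful of well-known IT and control-system ports across an address range. The addresses and the port list are illustrative assumptions on our part, and such a sweep should of course only ever be run against networks one is authorised to test.

```python
import socket

# Ports commonly associated with control systems (102, 502, 47808) alongside
# ordinary IT services; the list is illustrative, not exhaustive.
PORTS = [21, 22, 23, 80, 102, 443, 502, 1433, 3389, 47808]

def scan(host, ports=PORTS, timeout=0.5):
    """Return the subset of ports on host that accept a TCP connection."""
    open_ports = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered, or host unreachable
    return open_ports

if __name__ == '__main__':
    for n in range(1, 255):
        host = f'10.0.5.{n}'          # hypothetical airport subnet
        found = scan(host)
        if found:
            print(host, found)
```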

5. Securing critical information systems
Nessi's assessment settles on four components within an airport that are vulnerable to cyber attack, each of which "require a different approach to security: the network, the device, the application, and the back-end system". His resolutions for securing such systems focus primarily on process, culture, staffing, and training. Specifically, he recommends continuous configuration management for software and hardware, following established updating protocols (a minimal sketch of one such check appears at the end of this section); "social engineering awareness" campaigns educating staff on the proper use of software, hardware and access points, and on potential exploits that rely on human error to provide access to unauthorized persons; and penetration testing both by those with internal access and by external third parties, such as audits by Department of Homeland Security employees or approved vendors. Finally, Nessi is a supporter of recruiting the right security personnel and continuing their training, opting for Certified Information Systems Security Professional (CISSP) certification. Cheong suggests it is essential that airports have a cyber security team (Cheong, 2011, p. 4). Additionally, Mirko Montanari, Roy H. Campbell, Krishna Sampigethaya and Mingyan Li have published a paper, "A Security Policy Framework for eEnabled Fleets and Airports", which was updated in 2011. Their premise is that future airports will be "highly net-centric system-of-systems with advanced networking and wireless technology to accommodate the 'eEnabled aircraft,' enhanced surface area operations, as well as growing business and societal demands" (p. 1). The eEnabled concept essentially allows airports and airplanes to remain interconnected through a variety of key pieces of network infrastructure: check-in systems, transaction devices, baggage handling and the like all become part of the eEnabled airport. It is therefore clear that the problem will become more complex as these systems integrate and evolve.
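As one possible reading of the configuration-management recommendation above, the sketch below compares a host's recorded software inventory against an approved baseline and flags drift; the file names and the package-to-version inventory format are assumptions made for illustration, not a prescription.

```python
import json
import sys

def load(path):
    """Load a JSON file assumed to map package name -> installed version."""
    with open(path) as f:
        return json.load(f)

def drift(baseline, current):
    """Report packages that were added, removed, or changed version."""
    added   = {p: v for p, v in current.items() if p not in baseline}
    removed = {p: v for p, v in baseline.items() if p not in current}
    changed = {p: (baseline[p], current[p])
               for p in baseline.keys() & current.keys()
               if baseline[p] != current[p]}
    return added, removed, changed

if __name__ == '__main__':
    # 'baseline.json' and 'inventory.json' are hypothetical file names.
    added, removed, changed = drift(load('baseline.json'), load('inventory.json'))
    if added or removed or changed:
        print('configuration drift detected:', added, removed, changed)
        sys.exit(1)   # non-zero exit lets a scheduler raise an alert
    print('inventory matches approved baseline')
```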

6. Further research / conclusions
Since the problem is destined to become more complex, we feel that additional research into airport cyber security is certainly warranted. There are several reasons for this. First, few researchers focus on this specific area of IT security: although the transportation sector is often listed as one of the common SCADA domains, the "sexy" part of SCADA always seems to end up being the power grid. Second, "if you have seen one airport, you have seen one airport". Each is different, with particular quirks; there is no standardization in the IT infrastructures available or used at a particular location. Third, airports continue to integrate more and more functionality into their infrastructures, including Electronic Flight Bags and back-office systems, and this integration sometimes treats security as an after-the-fact nuisance. Clearly the increased integration of IT infrastructure requires an increased cyber security infrastructure as well. The challenge, as always, is to find "a balance between the protection capability and cost, performance, and operations considerations" (National Security Agency). This creates a challenge of balancing needs against operational functionality. If security requirements hinder operations, then there is no true value in



implementing security. However, if operational functionality is not secured and vulnerabilities are exploited, operations can cease, which is the other extreme. Balancing both requires wisdom and the correct security strategy. With regard to securing airport networks, stopping all attacks is nearly impossible; therefore a plan should be put into place to recover from exploitations. Experts recommend having a Computer Incident Response Team, "a carefully selected and well-trained group of people whose purpose is to promptly and correctly handle an incident so that it can be quickly contained, investigated, and recovered from" (SANS Institute, 2001, p. 2). Modern airport security has been shaped by the horrific events of September 11, 2001. After this tragedy, aviation physical security received much of the attention of government, media and the public, while critics would suggest that these actions do not make us more secure, they merely make us feel safer based on our perceptions. Experts would argue that aviation security should focus holistically on real threats to civilian aviation, including cyber security. Because of the ubiquity of network-enabled airports connected both internally and externally, network security is paramount to ensuring international transportation safety. A variety of strategic frameworks can be used by airport information security managers to ensure that vulnerabilities are minimized. Both CERT and NIST provide guidance on establishing the importance of key assets and utilizing resources to preserve them. Both Dominic Nessi and Bob Cheong, aviation cyber security experts for LAWA, lay out the security required and how to balance that security within a specific strategic framework. Nessi focuses on securing access control and perimeter intrusion systems, eEnabled aircraft systems, radar systems, wireless and wired network systems, and network-enabled baggage systems; Cheong points out that confidentiality, integrity, availability, and non-repudiation must all be considered when securing an airport's network. The basic principles of security have not changed; they simply must be integrated into wider networks and working practices.

References
ABC (2006) "Hackers Penetrate Water System Computers", http://blogs.abcnews.com/theblotter/2006/10/hackers_penetra.html
AirTight Networks (2008, March 3) "AirTight study at worldwide airports reveals wireless security risks for travelers and airport operations", http://www.airtightnetworks.com/home/news/press-releases/pr/browse/4/select_category/8/article/123/airtight-study-at-worldwide-airports-reveals-wireless-security-risks-for-travelers-and-airport-opera.html
API (2009) American Petroleum Institute, "Pipeline SCADA Security", 2nd edition, 06/01/09.
Cheong, B. (2011, October 28) Cyber security at airports, Airports Council International – North America, http://aci-na.org/sites/default/files/cheong-cybersecurity-bit.pdf
Hale, G. (2011) "A New Report Details Trends That May Lead To Improvements That Will Help To Protect A System From Attack", Plant Engineering, http://www.plantengineering.com/single-article/report-scada-systems-under-siege/c6b4a830db67d2fb9d329b6fd1d04d99.html
IEC (2012) IEC standards, http://www.iec.ch/
Kramarenko, D. (2004) "Al Qaeda in cyber space: threats of cyberterrorism", Computer Crime Resource Center, July 27, 2004, http://www.crime-research.org/news/27.07.2004/515/
Krebs, B. (2012) "Chinese Hackers Blamed for Intrusion at Energy Industry Giant Telvent", http://krebsonsecurity.com/2012/09/chinese-hackers-blamed-for-intrusion-at-energy-industry-giant-telvent/
Krutz, R. (2004) Securing SCADA Systems, Wiley, p. 146.
Lemos, R. (2009) "U.S. makes securing SCADA systems a priority", http://www.securityfocus.com/news/11351/1
Leyden, J. (2008) "Polish teen derails tram after hacking train network", http://www.theregister.co.uk/2008/01/11/tram_hack/
Mann, C.C. (2011, December 20) Smoke screening, Vanity Fair, http://www.vanityfair.com/culture/features/2011/12/tsa-insanity-201112
McAllister, B. (2011) "How To Be Cyber Secure", Airport Business, 26(12), 18.
Meserve, J. (2007) "Sources: Staged cyber attack reveals vulnerability in power grid", CNN, September 26, 2007, http://www.cnn.com/2007/US/09/26/power.at.risk/index.html
NERC (2012) Standard Processes Manual, http://www.nerc.com/files/Appendix_3A_StandardsProcessesManual_20120131.pdf
Nessi, D. (2011) Are you exposed? The perils of a connected world, Airports Council International – North America, http://www.aci-na.org/sites/default/files/nessi-areyouexposed-bit.pdf
NIST (2007) NIST SP 800-53, "Recommended Security Controls for Federal Information Systems", http://csrc.nist.gov/publications/nistpubs/800-53-Rev2/sp800-53-rev2-final.pdf



NIST (2008) NIST SP 800-82, "Guide to Industrial Control Systems (ICS) Security", draft for public comment, September 29, 2008, http://csrc.nist.gov/publications/drafts/800-82/draft_sp800-82-fpd.pdf
Panetta, L., Hon. (2011) Hearing to consider the nomination of Hon. Leon E. Panetta to be Secretary of Defense, U.S. Senate, Committee on Armed Services, http://armed-services.senate.gov/Transcripts/2011/06%20June/11-47%20-%206-9-11.pdf
Poole, R.W., Jr. (2008, December 11) Toward risk-based aviation security policy, International Transport Forum, http://www.internationaltransportforum.org/jtrc/discussionpapers/DP200823.pdf
SANS Institute (2001) Computer incident response team, http://www.sans.org/reading_room/whitepapers/incident/computer-incident-response-team_641
US-CERT (2012) Overview of cyber vulnerabilities, US-CERT (United States Computer Emergency Readiness Team), http://www.us-cert.gov/control_systems/csvuls.html
Wyld, B. (2004) "Cyberterrorism: fear factor", http://www.crime-research.org/analytics/501/



Improving Public-Private Sector Cooperation on Cyber Event Reporting
Julie McNally
Bellevue University, Bellevue, USA
jamcnally@bellevue.edu
Abstract: A critical threat to US economic as well as national security lies in the inability of the private and public sectors to collaborate on cyber defence. Their competing interests, the profit motive and national security, have historically impeded the sharing of cyber attack information and of defensive tools and strategies. As most critical infrastructure in the US is owned and managed by private companies, lacking access to corporate networks and being unable to compel companies to report cyber events prevents the government from collecting sufficient data on attacks to analyse and develop better defences. The cost of this inability is the continued loss of monies from hacked financial data; the loss of work product representing billions of dollars of research and development; the loss of future economic competitiveness as a result of lost future earnings on that work product; and threats to future military dominance and national security from the theft of intellectual property. To overcome the competing drivers of the public and private sectors and achieve a workable partnership on cyber defence, there must be better incentives for companies to share cyber event information. Lack of data is the leading impediment to meaningful analysis of trends and anomalies in cyber events. While industry-specific voluntary reporting associations have attempted to attract companies to report breaches in exchange for analytical products derived from that data, competition concerns lead companies to underreport, not report, and/or free-ride the system, resulting in a narrow pool of data. Market tools like insurance have been posed as a possible solution, but insurance's purpose is primarily risk redistribution and the indemnification of losses. Companies are only self-interested in reporting events for which there is coverage, and they resist full access to their networks by insurance auditors for data breach assessment out of privacy and security concerns. Neither solution accounts for the desire of businesses to protect shareholder value and brand reputation by concealing data breaches. A potential solution would be a national cyber event database to which companies could anonymously submit relevant cyber event information for analysis, without revealing identifying information that might compromise corporate interests. By decreasing the risk of information sharing through addressing privacy concerns, while offering the benefits of information sharing and analysis, such a system could vastly increase the size and scope of data collection.
Keywords: incident management; cyber security cooperation; data breach reporting

1. Introduction
A critical threat to the economic security of the US lies in the inability of the private and public sectors to collaborate on cyber defence. The competing interests of these two sectors, the profit motive and national security, have historically impeded the sharing of cyber attack information and of defensive tools and strategies. As the information technology revolution arose from the private sector, and was adopted across all industries without initial concern for security, the US has reached a point where most of its critical infrastructure is owned and managed by private companies. Lacking access to corporate networks and unable to compel companies to report cyber events, the US currently faces great difficulties securing critical infrastructure from the onslaught of cyber attacks from foreign states, non-state actors and companies. The costs associated with these attacks include the continued loss of monies from hacked financial data; the loss of work product from billions of dollars of research and development; the loss of future economic competitiveness as a result of lost future earnings on that work product; and future threats to military dominance and national security from the theft of military and technical defence industry documents. To overcome the competing drivers of the public and private sectors and achieve a workable partnership on cyber defence, there must be better incentives for companies to share cyber event information. Lack of data hinders meaningful analysis of trends and anomalies; incentivizing the sharing of cyber event data, via ensured anonymity and access to the resulting intelligence products, holds promise for improving cooperation. A method for defining and quantifying total losses is lacking, as is evident from the wide range of reported estimates, from $2 billion to $400 billion (Office of the National Counterintelligence Executive 2011). Despite the difficulty of estimating the loss of future sales from stolen research and development work product, and of quantifying intangibles such as the cost of data theft to national security, it remains certain that such thefts directly impact both economic security and national security to a large degree. For example, it has been noted that Russia, though second to China in cyber theft, has saved billions of dollars in research and development for its energy, technology and defence industries by stealing from other countries, mostly the US (Office of the National Counterintelligence Executive 2011). Such intellectual property exfiltration allows competitors to bring their



domestic industries up to speed, improving their global economic competitiveness at the expense of the US, and to narrow the gap in military strategy and technical capabilities. While economic espionage among states has a long history, the advent of computer networking and the internet has significantly lowered the cost of engaging in this practice, while significantly raising the quantity of data that can be extracted. A study by McAfee and SAIC reported that one quarter of companies have suffered the halting or delay of a new product debut or of a merger or acquisition as the result of a data breach or the credible threat of one (Aiken et al 2011). This global study also revealed that over one quarter of the companies surveyed perform risk assessments for the security of their intellectual property less than twice per year. Not only are data breaches costing companies significant sums of money and the loss of intellectual property to competitor states and/or companies, but there is also a lack of attention and funding by companies for securing the valuable proprietary data and intellectual capital that drive economic development and growth. While for-profit criminal cyber attacks are costly for companies, intellectual property theft not only incurs immediate financial losses for companies and states but also extends losses into the future by way of lost competitiveness in the global marketplace. Research and development is usually the most expensive part of the production process. In the case of defence technology data, the cost also includes compromised national security, as competitor companies and states can use the data to catch up in military hardware more quickly. Financially motivated cyber attacks are costly to the private sectors of states with advanced economies and can negatively impact GDP, eroding economic competitiveness and security. Further, as the increase in attacks shows, intellectual property theft erodes the technological and military edge of the US, as seen in the aforementioned example of Russia, and gives less advanced or developing countries and/or their domestic companies a competitive edge at very low cost to them. Whether perpetrated by state, non-state or corporate actors, continued cyber attacks and data breaches will create further negative impacts on the competitiveness of the private sector as well as on national security.

2. Policy approaches
In the short history of US policies on cyber security, the historical precedent of non-intervention in the private sector has led to a dilemma on the road to securing cyberspace for both national security and private sector concerns. While cyber attacks target both the public and private sectors, collaboration between the two in defence against such attacks is fraught with issues, ranging from the sharing of sensitive information about incursions and responses to the protection of corporate reputations and the ability of firms to continue operating profitably. The US government has in recent years begun the process of strategizing on how to address the threat of cyber attacks and increase security and confidence in internet-based commerce and governance. Additionally, there are policies attempting to address international responses to attacks originating from other states against both public and private sector assets. The 2008 Comprehensive National Cybersecurity Initiative included establishing a defensive front using situational cyber threat awareness, achieved through partnerships, to prevent cyber intrusions and reduce vulnerabilities. It also sought to improve counterintelligence and research coordination and to create deterrence strategies against cyber attacks. Its successor, the 2011 Cyberspace Policy Review (CPR), included the importance of engaging in public-private partnerships while protecting internet freedom and privacy. It stressed that initiatives should "develop mechanisms for cyber security-related information sharing that address concerns about privacy and proprietary information and make information sharing mutually beneficial" and "initiate a dialog to enhance public-private partnerships with an eye toward streamlining, aligning, and providing resources to optimize their contribution and engagement" (CPR 2011). The problem of devising and implementing a method for the private sector to provide the information necessary to improve critical infrastructure security is a key issue in cyberspace policy. A look at the impediments to information sharing can lead to improved incentives that align private sector concerns with public sector needs. Where this intersects with private sector concerns is in the extrapolation of critical infrastructure to include the financial system and technology sectors and, by extension, the theft of the intellectual property that drives the US economy. As data breaches bleed intellectual property to rivals abroad, reducing the economic security of the US, the private sector seeks to shield such leaks from public disclosure, to ensure that its brands retain integrity and are not discredited, and to prevent the loss of customers. Due to cost-benefit analyses not favouring expenditures higher than the value of the data being secured, combined with their propensity to



push forward to make up for any losses, companies lack adequate incentives to secure data. Companies can attempt to make up the loss by developing new products and processes, or by using the intellectual property in question in accelerated production to capitalize on it before a foreign rival, now in possession of it, can do so. This private sector view of intellectual property theft does not mesh well with national security interests, because it is predicated on the profit motive trumping concerns of protecting critical infrastructure and securing intellectual property and economic competitiveness. There is a lack of motivation for private sector companies to cooperate and share information with government agencies, because protecting the reputation of their brand and focusing on profitability drive decision-making; national security does not enter into the equation. The Cyberspace Policy Review suggests the pursuit of incentives to persuade companies to take part in collective action toward cyber security, but there are no clearly delineated methods for such carrot-and-stick inducements beyond voluntary industry organizations. There is no way for the government to verify company compliance, due to the nature of the threat and the inability of the government to monitor private sector networks. In an attempt to persuade corporate disclosure of cyber risks that could affect shareholders, the Securities and Exchange Commission issued a document outlining what it termed "guidance" on the parameters within which a company should report cyber security risks and intrusions in its quarterly and annual filings (SEC 2011). However, the SEC lacks the power to compel companies to disclose such information: most companies have avoided detailing the nature and target of any network breaches by inserting vague language about general risks into their quarterly filings (Menn 2012). There are many bills before the United States Congress dealing in part with the issue of cyber security and the private sector, though none has yet passed. The nature of these various policies, whether from the SEC or the White House, and how they look in practice, is further evidence that government policies designed to bring private companies into collaboration with government agencies on cyber security lack the power to compel and, currently, lack the power to attract cooperation and disclosure of cyber attacks. The initial steps toward cementing policies and norms are being taken in the chaos of a new and rapidly changing system, with evolving technologies that will likely continually change the parameters of attack and defence. It is in the lag between innovation and policy that risk is high and defence is tenuous. Establishing a culture of corporate cooperation with the government on these critical national security issues depends upon corporations willingly adopting information sharing policies, because the government cannot compel companies to share information on network breaches. Releasing such details increases the probability of their reaching a wider than intended audience, possibly damaging corporate reputations and negatively impacting share prices. Encouraging trust in the information sharing process is a long-term project. Creating a climate in which the private sector can trend toward such collaboration safely must involve ensuring no leakage of shared information by government agencies, especially not to competitor companies.
Because the benefits and risks of such a relationship are new and untested, companies will likely remain reticent to reveal details of network weaknesses and data breaches, to protect their ability to compete and operate profitably. Adoption of an information sharing culture will be slow in light of the voluntary nature of the current policies and the desire to protect corporate reputations and customer and investor confidence. The strategizing and implementation of cyber security policies that speak specifically to the intersection of public and private sector coordination is vital to any state's security. It is at this point, however, that the competing interests of these two sectors throw roadblocks in the way of national security. When the profit motives of private companies collide with the high cost of necessary information security, successful policy formation and implementation becomes difficult. There must be effective incentives to promote the value of information sharing on cyber threats and intrusions by the companies that control most of the national critical infrastructure. Companies are not incentivized to invest in network security above a certain threshold – usually determined by the value of the information or system being secured, combined with the upper limits of consumer pricing for their products. Additionally, there is a strong disincentive to provide information on data breaches, for fear that such data will leak, causing loss of consumer and investor confidence and/or competitive advantage in the marketplace. This can create large financial losses for the company. Without significant amounts of information collected from private companies, however, public sector security analysts will continue to lack the data with which to identify patterns or anomalies in cyber intrusions and attacks and on which to base responses and counterstrategies.

149


A sound cyber security policy must realign incentives and create common interests that promote confidence in the information sharing process, as well as tangible rewards for private sector cooperation and coordination. Government agencies are currently left with only the carrot of access to classified government threat and response information as the incentive for companies to provide data breach and network intrusion details. The hope is that the value of obtaining this information overrides private sector concerns about protecting reputations and profit margins, a calculation left to each company to make. Ultimately, the lack of means to ensure compliance with the goal of information exchange results in companies picking and choosing which incursions to report, if any, and what details to allow into the government databases that analysts use. Insufficient information populates the databases, skewing analysts' results.

3. Market tools for information sharing
Most academics working on private-public sector cooperation on cyber security agree that it is an evolving arena lacking in historical data. They also concur that the issue is best managed with a light touch from government, due to the majority of critical infrastructure being in the hands of the private sector and the perpetual lag between technology advancements and policy decision-making and implementation. In other words, regulation is not feasible. The flexibility and resilience required to address cyber security is often agreed to be something better provided by the free market, though disagreements lie in the how and who of incentivizing security. Some academics look to voluntary industry-specific organizations to compile cyber event data for analysis and strategizing. These organizations suffer from free riding and from the withholding of all or some information, as the companies involved seek to protect their reputations and competitiveness. Others focus on competition and insurance as risk redistribution tools of the market and instruments for improving the reporting of losses. However, there is a fundamental difference between indemnifying a firm for a loss and protecting infrastructure and data from attacks; insurance risk management, lacking access to policyholders' networks as well as expertise in the cyber security field, cannot speak to the latter. Some academics consider the market tool of insurance a solution, but while some studies conclude that insurance reduces investment in and the effectiveness of security, others show that it increases security. Proponents of cyber insurance as a market solution to IT security risks cite the increase by private companies in cyber security investment, leading to a higher level of safety for their IT infrastructure; the incentive to adopt higher standards or best practices to comply with policy contract language regarding benchmark security levels; and the correction of a market failure by this risk mitigation tool (Kesan, Majuca and Yurcik 2004). This position seems to assume that insurers would have the knowledge, ability and access to their policyholders' networks to ensure compliance via audit, and that such standards would be practical and effective means of managing the risk. However, because network and information security is continually changing to handle new and evolving attack methods, it would be difficult for non-technical, actuarial risk managers to keep abreast of such changes and incorporate them into the standards required by policies that are usually renewed each fiscal year. Further, familiarity with the methods by which businesses seek to reduce expenses (premiums) through pro forma compliance reveals the weakness of standards that are carried out not because they have a real-world impact on improving loss reporting and security, but because they are required by contract. Other academics focus on how market forces other than insurance, such as competition and liability, can improve corporate management of cyber risks. The argument is that the market rewards companies that are successful in cyber risk management (Cashell et al 2004). Considering that insurance firms specialize in the actuarial science of risk assessment and management, such a position proposes that the solution lies in the traditional market correction afforded by insurance policies.
However, without transparency of corporate cyber risks or attacks, the market can only reward the perception of successful risk management. This viewpoint also argues that opening up the potential for lawsuits regarding the failure of software and/or hardware, i.e. imposing liability, would offer additional market tools to address the security risk. Considering the overlaps and interdependencies in hardware and software products, liability would not so much address weaknesses and vulnerabilities as tie up companies in lengthy legal battles. There are weaknesses in the market tools of competition, liability and insurance as solutions for improving private sector information sharing for the public good. Most of the sources reviewed agree that a lack of historical data on losses and risks, as well as unexpected future risks, including the quantification of intangible losses (e.g. the loss of future income resulting from intellectual property theft), is a main impediment to cyber insurance



successfully managing risks. Most agree that there is an incentive for a private company not to share information about attacks on its networks, to protect its proprietary information and business reputation. There is also a general acknowledgment of the incentive for companies to free-ride and not share information on data breaches, which erodes the benefits to other companies that do share information. The cyber risk is ever evolving in tandem with advances in technology, software and cyber attack tools, which precludes the accurate quantification of risk. This, combined with a lack of historical data, prevents insurers from adequately addressing or managing risk. However, it is possible that even if insurers could manage that risk, doing so would undermine the private sector's ability or incentive to address problems toward the end goal of national security. There could be a change in behaviour in which knowing one is covered for the risk leads to less stringent efforts to secure the company's information and networks from attack, above and beyond that which is required by insurance auditors and policy language. One study found that policyholders did not invest as much in improvements to cyber security because they had insurance policies covering it (Shetty et al 2010). While insurance redistributes risk and indemnifies losses, which was its original purpose, it does not improve cyber security. However, another study proposes that with increased information sharing among companies with regard to cyber threats, and despite the resultant decrease in spending on cyber security, the level of security can nonetheless increase as a result of pooling data (Gordon, Loeb, and Lucyshyn 2003). While insurance companies can manage and absorb those financial losses from cyber attacks that are quantifiable (e.g. damage to networks), they can do little to address the larger national security concern of intellectual property (e.g. weapons schematics, research and development work product, etc.) stolen by competitor nations and/or foreign companies. Such concerns are more intangible, more a social cost, and less likely to have a firm cash value. Moreover, there is a national security value inherent in some intellectual property for which financial compensation is beside the point. While free market tools like insurance can address the business perspective on loss, they fail to address the national security perspective on data breaches and the reporting thereof. After all, the customer of the insurance company is not the strategic interests of the US but the business interests of the insured company: these interests are not aligned. In general, current academic investigations into the problem of public-private cooperation on cyber security have been unable to arrive at effective incentives for the private sector to engage in information sharing to improve the security of the public good, i.e. critical infrastructure owned by private companies. Even were an insurance company to enjoy full access to policyholders' networks to audit compliance with reporting losses and improving security, the inability to quantify intangible losses or to prevent them would prevent national security interests from being met by this market tool. Further, the standards of compliance designed by actuaries would not keep up with technology innovations, nor would they be able to determine what cyber attacks had occurred between system audits.
Fundamentally, insurance is a process of redistributing risk and indemnifying financial losses. It is not an effective tool for ensuring the reporting of cyber events or attacks and the associated losses, especially if those losses are not covered by the policy.

4. Voluntary information sharing organizations
Investigations into private sector information sharing organizations, such as the Industry Consortium for Advancement of Security on the Internet (ICASI), suggest that information pooling on cyber vulnerabilities could usefully be administered by a public-private partnership corporation: a non-profit entity, analogous to the American Red Cross, that would administer public and private cyber threat information. By its hybrid nature, such an entity would avoid the appearance of government overreach while ensuring that classified information was handled appropriately, and would assuage private sector fears of losing competitive advantage by sharing information (Rosenzweig 2011). Information Sharing and Analysis Centers (ISACs) are voluntary industry-specific organizations, facilitated by the US government, that collect information on threats and vulnerabilities; experts in each industry analyse the pooled data and distribute assessments and alerts pertaining to it. There are many separate ISACs for various industries, and they are loosely linked. Some are closed, sharing data only with members, while others report to law enforcement agencies. What communication about risks exists among ISACs is unclear, and likely varies: once formed, they have wide latitude and autonomy in arranging the way they operate. Business competition can lead to free riding, and even under direct government regulation, information provided by corporations may be incomplete or even falsified (Wagenaar 2009). ISACs are compromised by the profit and competition incentives of participating firms,

Because of their voluntary nature, ISACs contain only a narrow field of cyber event information from which to analyse and strategize. Further, communication channels among the industry-specific ISACs vary in the type and amount of information exchanged. There remains a lack of adequate incentives to share, as well as of means to discourage free riding. The databases are incomplete and unrepresentative of cyber threats across industries, and analysts would be unlikely to draw reliable conclusions from their contents. As cyber attackers do not limit themselves to targets in one industry, collecting data by industry limits the ability to assemble a large pool of threat data and to recognize attack patterns and methods. An approach that encompasses all industries and increases the quantity and quality of data for analysis is needed.

5. Creating incentives to share

There is a solution to the lack of data that academics cite as the main impediment to managing cyber risks. Neither the market tools of competition and insurance nor voluntary organizations like ISACs encourage the level of information sharing necessary for improving public and private sector cooperation on cyber security. However, there is a combination of incentives and inducements that government can implement to increase cyber event reporting and subsequently improve cyber defences for critical infrastructure.

The incentive to share is greater if companies are assured anonymity in their reporting, which lowers or eliminates the risk to corporate share value and investor confidence; implementing an anonymous reporting database across all industries would therefore improve data collection. Much like the Centers for Disease Control and Prevention (CDC) in its tracking of epidemics, such a database would address privacy concerns while increasing participation. Data breach details from all industries, stripped of identifying markers, would assist industry and government in assessing and analysing threats in a timely manner. A unique secure login identification process could prove that an entity did provide information to the database, yet conceal which specific data was provided. By limiting the collection to technical information on the network and the method and nature of the attack, while excluding the name of the company and the precise contents of exfiltrated or altered data, participants would be assured of confidentiality and their market competitiveness would be preserved. Ultimately, by minimizing the risks of information sharing, this method would attract greater participation and collect a greater quantity of cyber event data.

By creating an incentive to share via assured privacy and anonymity, a cyber event reporting structure would facilitate timely collection and collaboration between the public and private sectors. Being able to update the database easily, quickly and anonymously with details of cyber events occurring on private corporate networks would increase the pool of data for analysis. This increase in information sharing would facilitate better analyses and subsequent strategies for cyber defence of critical infrastructure, which could be shared with companies contributing to the database. The benefit of receiving analytical products derived from the data would feed back into the reporting structure as a further incentive for private sector participation and sharing: as benefits are received, trust in the process and a willingness to cooperate should increase. Providing the incentives of assured privacy and access to analytical products that protect corporate intellectual capital would lead to greater participation and data collection.

To further motivate companies to contribute data on cyber events, government could exploit the private sector profit motive. Providing research and development funding only to labs or companies participating in the information sharing database would increase cooperation. Additionally, procurement and contracts could go to companies audited and found to be contributing to the cyber event information database via secure IDs. By denying companies lucrative government contracts, or suspending payment on existing ones, until or unless they provide data on cyber events, government can induce participation to increase data collection.
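The following minimal Python sketch (hypothetical field names and functions; not drawn from any deployed system) illustrates the two properties argued for above: reports stripped of identifying markers, and a keyed receipt that can later prove participation without linking a company to a specific record.

import hashlib
import hmac
import json
import uuid

# Fields describing the attack itself are retained; identifying fields are dropped.
TECHNICAL_FIELDS = {"industry_sector", "attack_vector", "malware_family",
                    "ports_targeted", "duration_hours", "data_altered"}

def anonymize_report(raw_report):
    # Keep only technical details of the cyber event; drop company identifiers.
    return {k: v for k, v in raw_report.items() if k in TECHNICAL_FIELDS}

def participation_token(member_secret, report_id):
    # A keyed hash the company can later reveal to auditors to prove it
    # submitted report_id, without the database linking reports to names.
    return hmac.new(member_secret, report_id.encode(), hashlib.sha256).hexdigest()

# Example: a company files a breach report anonymously.
raw = {"company_name": "Acme Corp", "industry_sector": "energy",
       "attack_vector": "spear phishing", "malware_family": "unknown RAT",
       "ports_targeted": [443], "duration_hours": 36, "data_altered": False}
report_id = str(uuid.uuid4())
database_entry = {"id": report_id, "event": anonymize_report(raw)}
receipt = participation_token(b"company-held secret key", report_id)
print(json.dumps(database_entry, indent=2))
print("receipt kept by company:", receipt)

Here the company alone holds the secret; an auditor shown the secret and a report id can recompute the token, which supports the contracting and procurement audits proposed above without the database ever storing company names.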
A cyber event anonymous reporting database would provide the privacy assurances needed to encourage corporate participation, addressing the lack of data that impedes analysis and strategy. Taken together, these policies would incentivize information sharing by producing more benefit with less risk.

References

Aiken, S., George, J., van den Berg, M., Hunt, S., Kellerman, T., Pillai, D. et al (2011) Underground Economies: Intellectual Capital and Sensitive Corporate Data now the Latest Cybercrime Currency, [online], McAfee, http://www.mcafee.com/us/resources/reports/rp-underground-economies.pdf
Cashell, B., Jackson, W. D., Jickling, M. and Webel, B. (2004) The Economic Impact of Cyber Attacks (Rep. No. RL32331), [online], Congressional Research Service, http://www.fas.org/sgp/crs/misc/RL32331.pdf

CF Disclosure Guidance: Topic No. 2, Cybersecurity (2011) [online], Securities and Exchange Commission, http://www.sec.gov/divisions/corpfin/guidance/cfguidance-topic2.htm
Executive Office of the President (2011) International Strategy for Cyberspace: Prosperity, Security, and Openness in a Networked World, [online], White House, http://www.whitehouse.gov/sites/default/files/rss_viewer/internationalstrategy_cyberspace.pdf
Executive Office of the President of the US (2010) Comprehensive National Cybersecurity Initiative, [online], White House, http://www.whitehouse.gov/cybersecurity/comprehensive-national-cybersecurity-initiative
Foreign Spies Stealing US Economic Secrets in Cyberspace (2011) [online], Office of the National Counterintelligence Executive, http://www.ncix.gov/publications/reports/fecie_all/Foreign_Economic_Collection_2011.pdf
Gordon, L. A., Loeb, M. P. and Lucyshyn, W. (2003) "Sharing Information on Computer Systems Security: An Economic Analysis", Journal of Accounting and Public Policy, Vol 22, No. 6, pp 461-485, [online], http://dx.doi.org/10.1016/j.jaccpubpol.2003.09.001
Information Security 2011 (2011) [online], Information Security Policy Council, http://www.nisc.go.jp/eng/pdf/is2011_eng.pdf
Kesan, J.P., Majuca, R.P. and Yurcik, W.J. (2004) "The Economic Case for Cyberinsurance", University of Illinois Law and Economics Working Papers, [online], University of Illinois, http://law.bepress.com/uiuclwps/art2
Menn, J. (2012) "Major Companies Keeping Cyber Attacks Secret from SEC, Investors: Report", Insurance Journal, [online], http://www.insurancejournal.com/news/national/2012/02/02/233863.htm?print
Rosenzweig, P. (2011) "Cybersecurity and Public Goods: The Public/Private 'Partnership'", in P. Berkowitz (Ed.), Emerging Threats in National Security and Law, [online], Hoover Institution, http://media.hoover.org/sites/default/files/documents/EmergingThreats_Rosenzweig.pdf
Shetty, N., Schwartz, G., Felegyhazi, M. and Walrand, J. (2010) "Competitive Cyber-Insurance and Internet Security", in D. Pym and C. Ioannidis (Eds.), Economics of Information Security and Privacy, Springer, New York, pp 229-247.
Wagenaar, P. (2009) Review of Cavelty, M., Mauer, V. and Krishna-Hensel, S. (eds.) Power and Security in the Information Age: Investigating the Role of the State in Cyberspace, Journal of Contingencies and Crisis Management, Vol 17, No. 2.
White House (2009) Cyberspace Policy Review, [online], http://www.whitehouse.gov/assets/documents/Cyberspace_Policy_Review_final.pdf

Copyright Protection Based on Contextual Web Watermarking

Nighat Mir
Computer Science Department, College of Engineering, Effat University, Jeddah, Kingdom of Saudi Arabia
nmir@effatuniversity.edu.sa
nighatmir@gmail.com

Abstract: The interdependency of information and security and the advent of internet technologies bring new challenges in protecting online data against threats such as illegal copying, redistribution, tampering, reuse and forgery. The web page is one of the main vehicles for trading online information and therefore requires stronger protection. In this research a novel tamper-proof web watermarking technique based on the textual content of a web page is proposed. Watermarks are generated from the context, and the structural elements of HTML (Hyper Text Markup Language) are utilized to embed the watermarks into a webpage. The watermarks are further secured by a cryptographic technique before the embedding process to add security. Experiments identify tampered information without revealing any evidence of the encrypted watermarks. The proposed system has been tested against different attacks to confirm its robustness and integrity.

Keywords: security, web watermarking, copyrights protection, cryptography, hash, HTML

1. Introduction and background

Information security is a mechanism for protecting information in terms of the basic security parameters: keeping it confidential, reliable and available. Tremendous growth in internet use and development has attracted millions of users and writers over time, owing to its easy access and low cost. Massive growth in electronic publishing has also had an impact on print media. With electronic text being more elusive, however, there comes a great responsibility to protect copyright for electronic web-based information and to prevent unauthorized copying of web content. Various methods have been studied for other data types, i.e. audio, video and pictures, but there are very few methods for hiding information in text without altering its integrity. Web-based attacks have become very common in recent years, so secure communication needs strong security mechanisms. Text is the leading part of web content besides other data types and requires a strong protective mechanism. A copyright protection solution is urgently required that stays close to the text even when it is replicated, edited and tailored. It is comparatively difficult to produce protective methods for text, as text contains little redundant information, which makes the problem more challenging.

All basic security concerns can be identified and addressed by using digital watermarking techniques. Philip (1990) proposed the COPYCAT system to secure information, certifying that the novel work of an author is safe against theft and protected against attacks. Various laws are available for copyright protection, such as the World Intellectual Property Organization (WIPO), the Anti-Counterfeiting Trade Agreement (ACTG), European Copyright Law (ECL), and the Protection of Literary and Artistic Works (PLAW), to ensure intellectual property. Nevertheless, widespread violation on the internet has been noted across the world in the special report of the International Intellectual Property Alliance (IIPA-2009). Digital watermarking can be extended to web watermarking to offer security for web content, as online copyright protection has not grown as fast as internet technologies and standards.

Digital watermarking and cryptography are common methods of providing information security and are adequate to ensure the basic security principles; however, the two work under different mechanisms. Digital watermarking is imperceptible, whereas cryptography is perceptible: a plain text is converted into a cipher text. Information is watermarked by an identification code in digital format which remains present in the cover file, unlike the decryption process of cryptography, as presented by (Brassil et al. 1995). The watermarking process can be visible or invisible. Cryptography stands for hidden or secret writing; it is used to protect information from third parties and offers security features such as data confidentiality, integrity and authentication. It can generally be achieved by using diverse symmetric (single key) and asymmetric (different keys) algorithms, as discussed by (Lee and Seoul, 2008). Web content based watermarking has been introduced in the literature using different aspects of markup languages like HTML (Hyper Text Markup Language) and XML (eXtensible Markup Language). This has opened a new line of research, but strong and robust mechanisms are still required for web watermarking.
Atallah et al. (2002) applied semantic rules to text watermarking; rules based on syntactic textual content were presented by Atallah et al. (2001).

Brassil et al. (1999) present the use of structural features of text to secure content. Line shifting and word shifting algorithms shift the text to certain positions in horizontal and vertical directions to offer text watermarking, as stated by Chen (2011). Web watermarking techniques using the structural features of HTML have been proposed by (Mir and Hussain, 2011). Keeping their individual strengths, digital watermarking can be combined with conventional cryptographic techniques to strengthen security and to offer robust web watermarking based on the content of the web itself. (Tian and Li, 2009) have combined watermarking and cryptography to offer web watermarking by computing the counts of each visit to a web page and the fingerprints in digital format. Different languages have been utilized for watermarking using their syntax, structure, grammar and semantics: English has been considered by (Topkara and Atallah, 2006), Korean by Kim (2008) and Turkish by (Zhu and Sang, 2008). These languages appear often in the literature on copyright protection techniques.

Web pages are designed using different programs or languages and are translated into markup code by a web browser; all web browsers have the built-in ability to transform scripting code into a markup language. Due to the construction and physical appearance of a web page, a large bandwidth is available for developing web watermarking techniques. White space is mainly used to give visible readability to viewers, yet has no visual appearance of its own. It can be added to different frames and parts of a web page, and between words, lines and sentences, to provide readability. White space is usually collapsed in HTML, and a sequence of white space characters is treated as one string during parsing; however, if desired, it can be preserved by using the relevant HTML tag, <pre>.

2. Proposed methodology

Watermarking is categorized as visible or invisible; in this research, invisible watermarking is extended to disguised watermarking to make it more imperceptible. Text, being a major part of the web, is parsed according to pre-defined rules to generate watermarks. To work out the practical details, semantic rules have been considered based on the frequency of verbs and prepositions, which are mandatory constituents of the English language. Occurrences of high-frequency verbs (is 15%, are 34%) and prepositions (to 23%, for 16%) are scanned from the web text to construct a watermark. Their counts are taken as an integer value and encrypted using a cryptographic hash algorithm to generate a fixed-length watermark of 8 digits. The 8-digit hash value is made invisible using existing Unicode characters known as no-face control characters (i.e. u200a, u202f, u205f). The no-face, or invisible, watermark is then embedded in a disguised way into a webpage using the HTML description tag <meta>. Figure 1 shows the process of watermark generation, encryption and embedding into a webpage in sequence.

A URL is submitted to the developed application, which scans the source code of the webpage and considers the text contained in the <body> tag. Text from the <body> tag is parsed according to the pre-defined semantic rules, and only the given attributes are extracted to construct the initial watermark. These are encrypted using HASH-MD5, which takes an input of variable length and produces a value of fixed length; in this case the length of the output is defined as 8 digits only. Further, to achieve invisibility, the encrypted hash value is converted into white space characters utilizing the available memory control characters and embedded in the HTML <meta> tag, which is ordinarily used to provide descriptive information about the author, version, release date etc. and does not appear on a webpage in any browser. Hence it is normally available only on visiting the source code of a page; to achieve stronger security, however, the value is converted into no-face characters and is not visible even when visiting the source code of an HTML page.
Figure 1: Watermark generation, encryption and embedding process

Figure 2 shows the reverse process of extracting and verifying a watermark from a webpage. The white spaces are first converted back into the 8-digit hash value. A comparison is then made between the original and the regenerated hash values; the watermark is accepted upon a match, which validates the process, or else a breach of copyright is noted. (A code sketch of this round trip follows the pseudocode in Section 2.1.) Watermarks can be registered with a CA (Certifying Authority), an agency that registers watermarks for documents to protect the intellectual copyright of an author.
Figure 2: Extraction and validation process

2.1 Proposed algorithm

start
  take url
  parse source code
  repeat {
    read and parse <body> text
    watermarks = count and store verbs/prepositions
  } while != </body>
  hash(watermarks)
  hide(watermarks)
  embed(watermarks, <meta>)
end
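Read together with the pseudocode above, the following Python sketch is one possible rendering of the whole round trip. It is an illustration under stated assumptions, not the author's implementation: the digit-to-invisible-character table is assumed (the paper names u200a, u202f and u205f but gives no full mapping), the counted tokens follow Table 1 (is, are, to, of), and the reduction of the MD5 digest to 8 digits is assumed to be modular, since the paper does not state the truncation rule.

import hashlib
import re

# Digit-to-character table: the paper names no-face characters u200a, u202f
# and u205f but not a full mapping, so this ten-entry table is an assumption.
INVISIBLE = ["\u200a", "\u202f", "\u205f", "\u2000", "\u2001",
             "\u2002", "\u2003", "\u2004", "\u2005", "\u2006"]
TOKENS = ["is", "are", "to", "of"]  # constituents counted in Table 1

def count_tokens(body_text):
    # Apply the pre-defined semantic rules: count each token in the <body> text.
    words = re.findall(r"[a-z']+", body_text.lower())
    return "".join(str(words.count(t)) for t in TOKENS)

def watermark(body_text):
    # 8-digit MD5-derived value, rendered as invisible characters.
    digest = hashlib.md5(count_tokens(body_text).encode()).hexdigest()
    eight_digits = "%08d" % (int(digest, 16) % 10**8)
    return "".join(INVISIBLE[int(d)] for d in eight_digits)

def embed(html, body_text):
    # Hide the watermark in a <meta> description tag inside <head>.
    tag = '<meta name="description" content="%s">' % watermark(body_text)
    return html.replace("<head>", "<head>" + tag, 1)

def verify(html, body_text):
    # Extraction/validation: pull the hidden characters back out and compare
    # them with a watermark freshly generated from the current text.
    found = re.search(r'<meta name="description" content="([^"]*)">', html)
    return bool(found) and found.group(1) == watermark(body_text)

body = "This page is about hashing; it is easy to refer to examples of use."
page = "<html><head></head><body>%s</body></html>" % body
stamped = embed(page, body)
print(verify(stamped, body))                # True: watermark intact
print(verify(stamped, body + " tampered"))  # False: text was modified

Because the watermark lives entirely in one tag, the sketch also makes visible the scheme's stated weakness against deletion: stripping the <meta> tag removes the watermark altogether.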

3. Testing and results

A few websites are presented in Table 1 to show the results of the application. Three web pages from Wikipedia are listed with the number of occurrences of each verb and preposition defined for the research, along with their hash values. (In the printed table, the invisible watermark whitespaces were highlighted in grey for visibility in the last column.)

Table 1: Experimental results

URL | Watermark occurrences | Encrypted hash value | Invisible watermark
kipedia.org/wiki/Digital_watermarking | is=54, are=12, to=58, of=45 | 27041469 | (whitespace characters)
kipedia.org/wiki/Hash_function | is=89, are=31, to=107, of=141 | 47112568 | (whitespace characters)
kipedia.org/wiki/Cryptography | is=95, are=58, to=161, of=258 | 22322815 | (whitespace characters)

The proposed idea has been tested against strength parameters such as robustness, imperceptibility and capacity, and against attacks such as tampering and deletion. The application was found to be robust and imperceptible, with sufficient capacity; it is strong against tampering but weak against deletion attacks, and performs moderately in certain situations where deleted watermarks cannot be recovered.


4. Conclusion

A novel web watermarking idea to protect the intellectual copyright of an author is proposed in this research. Besides securing the text itself, the carrier of online text can be secured to offer web watermarking based on the textual constituents of a language. Invisible watermarking has been combined with a cryptographic hash algorithm to add security and to make the system robust, imperceptible and strong. Control Unicode characters have been utilized to convert the encrypted watermarks into invisible white spaces, which are then embedded into the source code of an HTML page. The textual constituents used in this research are the high-frequency English verbs and prepositions, which are an imperative part of writing. The proposed idea has been implemented and tested on different websites and verified for different features of digital watermarking: robustness, imperceptibility, bandwidth, and resistance to modification and deletion attacks.

5. Future recommendations

The proposed idea can be extended to different languages, and to different semantic, syntactic and structural features of any language. Different structural constituents of HTML can also be utilized to hide or embed the watermarks. Watermarks can further be broken down into bits and hidden in simple tags such as random character tags, line break tags and empty tags. The idea can also be extended to other markup and scripting languages.

Acknowledgements

The research undertaken has been supported by the RCI (Research Consultancy Institute) at Effat University, Jeddah, Kingdom of Saudi Arabia, under 2011-2012 research grants. Dr. Nighat Mir is an assistant professor in the Computer Science Department of the College of Engineering, and also serves as institutional research coordinator in the quality assurance department.

References

(ACTG) Anti-Counterfeiting Trade Agreement, [online], http://trade.ec.europa.eu/doclib/html/142039.htm
(ECL) European Copyright Law, [online], http://eurlex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:32001L0029:EN:HTML
(IIPA) International Intellectual Property Alliance (2009) Special 301 Report, [online], http://www.iipa.com/special301.html
(PLAW) Berne Convention for the Protection of Literary and Artistic Works, [online], http://www.wipo.int/treaties/en/ip/berne/index.html
(WIPO) The World Intellectual Property Organization, [online], http://www.wipo.int/portal/index.html.en
Atallah, M., Hempelmann, C. F., Karahan, M., Sion, R., Raskin, V., Topkara, U. and Triezenberg, K. E. (2002) "Natural Language Watermarking and Tamperproofing", 5th Information Hiding Workshop, IHW, LNCS 2578, Springer Verlag.
Atallah, M., Crogan, M. J. and Raskin, V. (2001) "Natural Language Watermarking: Design, Analysis, and a Proof-of-Concept Implementation", Information Hiding, Springer, Berlin.
Atallah, M. J., Topkara, M. and Topkara, U. (2006) "The Hiding Virtues of Ambiguity: Quantifiably Resilient Watermarking of Natural Language Text through Synonym Substitutions", Proceedings of the ACM Multimedia and Security Conference, Geneva, Switzerland, pp 164-174.
Brassil, J.T., Low, S. and Maxemchuk, N.F. (1999) "Copyright Protection for the Electronic Distribution of Text Documents", IEEE, pp 1181-1196.
Brassil, J. T., Gorman, L. O., Low, S. and Maxemchuk, N. F. (1995) "Electronic Marking and Identification Techniques to Discourage Document Copying", IEEE Journal on Selected Areas in Communications, Vol 13, No. 8, pp 1495-1504.
Chen, Lily (2011) Recommendation for Key Derivation through Extraction-then-Expansion, NIST Special Publication 800-56C.
Gengming, Zhu and Nong, Sang (2008) "Watermarking Algorithm Research and Implementation Based on DCT Block", World Academy of Science, Engineering and Technology 45.
Kim, M. (2008) "Natural Language Watermarking for Korean Using Adverbial Displacement", Proceedings of the 2008 International Conference on Multimedia and Ubiquitous Engineering (MUE), pp 576-581.
Mir, Nighat and Afaq, Hussain (2011) "Secure Web-Based Communication", Elsevier: Procedia Computer Science, Vol 3, pp 556-562.
Philip, A., Turner (1990) "COPYCAT: A System for the Distribution of Copyright Cataloging Information", IEEE.
Sang, Hoon, Lee and Seoul (2008) "Accelerating Symmetric and Asymmetric Ciphers with Register File Extension for Multi-word and Long-word Operation", Information Science and Security, ICISS.
Tian, Zhou and Li, Li (2009) "A Secure Web-based Watermarking Scheme for Copyright Protection", Sixth Web Information Systems and Applications Conference, IEEE.

Towards a South African Crowd Control Model

Mapule Modise, Zama Dlamini, Sifiso Simelane, Linda Malinga, Thami Mnisi and Sipho Ngobeni
Council for Scientific and Industrial Research (CSIR), Pretoria, South Africa
mmodise@csir.co.za
idlamini@csir.co.za
ssimelne@csir.co.za
lmalinga@csir.co.za
tmnisi1@csir.co.za
sngobeni@csir.co.za

Abstract: With the escalating number of service delivery and labour-related protests and the increasingly violent nature of protests, crowd control is one of the major challenges facing South Africa today. Often these protests are characterized by violence stemming largely from clashes between protesters and law enforcement agencies, which result in property vandalism and even death. For this reason, there is a demand for greater understanding, modelling and simulation of crowd control. In response, this project aims to develop a crowd control model that will be used to understand the interactions between different variables during a protest and subsequently to develop a better crowd control approach. However, modelling a multidimensional social problem as complex as crowd control requires time, knowledge and experience from a wide range of disciplines. This is therefore a long-term project consisting of three main phases. Phase 1 identifies the most important variables concerning crowd control and how they relate to each other using general morphological analysis. Phase 2 will be the verification and validation of the model by experts in the field, followed by the identification of relevant tools and techniques. Phase 3 will be the development of a decision support system for crowd control. This paper discusses Phase 1 of the project, which includes the identification of various crowd control variables and their relationships. During the Arab Spring uprisings, social media was identified as one of the factors significant for the mobilization of crowds. This phase will determine whether social media is one of the major factors to consider in a South African context and the extent to which it affects the crowd. The role of social media, or lack thereof, has implications for cyber defence in South Africa. The identification of variables and the relationships between them was carried out in a facilitated workshop. The result of this phase is a South African general morphological analysis crowd control model.

Keywords: crowd control, general morphological analysis, crowd control variables

1. Introduction

The demand to understand and model crowd control continues to increase as people across the globe take to the streets to protest over many issues, including but not limited to unemployment, food and petrol price hikes, economic and political reforms, and corruption within governments. Violence resulting in physical damage to individuals and property is also escalating. In Egypt in January 2011, three protesters died in Suez after being shot with rubber bullets and beaten, and in Cairo a policeman died after a stone hit his head (BBC News Middle East, 2011). In London in August 2011, police officers were pelted with bottles and fireworks as groups of young people rampaged, setting buildings, vehicles and garbage dumps alight and looting stores (Stringer and Satter, 2011). South Africa has seen various failed crowd control scenes: in the apartheid era, the Sharpeville massacre in 1960, the Soweto youth riots in 1976 and Bisho in 1992; more recently, the beating and subsequent death of Mr Andries Tatane by police during a service delivery protest in April 2011 (Sosibo, 2011); and in 2012 about 42 striking Lonmin miners at Marikana, in the North West province, were killed in a clash with the police. These incidents, and many like them, have increased the demand to understand and model a social phenomenon as complex as crowd control.

This paper asserts that a successful crowd control model is one where law enforcement agencies have methods and skills that allow them to apply force in the right measure to defuse an event while maintaining a balance that will not create new problems. The main goal of this project is to develop a decision support system for crowd control. This is a long-term project made up of three phases, as illustrated in Figure 1.

Figure 1: Crowd control modelling and simulation research plan

This paper focuses on the first phase of the project, the goal of which is to use general morphological analysis to identify and describe the most important factors regarding crowd control and how they relate to each other. The paper is structured as follows: the next section discusses the fundamentals of the morphological analysis approach; this is followed by the development of the general morphological analysis crowd control model; a discussion of the relevance of crowd control to cyber defence concludes the study.

2. General morphological analysis

Ackoff (1974) makes the following distinctions between messes, problems and puzzles. A mess is a complex issue which is not well formulated or defined; a typical mess is illustrated by region 3 of Figure 2. A problem is a well-formulated or defined issue with no single solution (different solutions arise depending on various factors); region 2 of Figure 2 best captures the essence of a problem. Lastly, a puzzle is a well-defined problem with a specific solution that can be worked out, as illustrated by region 1 of Figure 2. In this paper, crowd control is classified as an unstructured, messy, complex problem. According to Mingers and Rosenhead (2004), unstructured problems are characterized by multiple actors with multiple and often conflicting perspectives. Ritchey (2006) elaborates further that socio-technical systems have a large number of elements with many interactions that are not always predictable. The interactions between the elements are generally loosely organized and tend to behave probabilistically. Such systems evolve over time and are open to the environment. In addition, the systems are full of contradictions and circular causality, stakeholder-oriented, and associated with strong political, moral and professional issues (Ritchey, 2006). Due to the complexity inherent in these types of problem spaces, traditional quantitative methods, mathematical (functional) modelling and simulation will simply not suffice (Ritchey, 1998 & 2006).

According to Ritchey (2006), a number of non-quantified Problem Structuring Methods (PSMs) have been developed during the past 30 years as an alternative to mathematical modelling. Some of the PSMs applied today include Checkland's Soft Systems Methodology (SSM), Strategic Options Development and Analysis (SODA), the Strategic Choice Approach (SCA), the Viable Systems Model (VSM) and Forrester's System Dynamics (SD). These methods were developed mainly for structuring and analysing what were previously termed wicked problems and social messes (Rittel and Webber, 1973 and Ackoff, 1974; as cited in Ritchey, 2006). General morphological analysis (GMA) belongs to this group of methods. GMA was developed by Fritz Zwicky, the Swiss astrophysicist and aerospace scientist based at the California Institute of Technology (Caltech), as a method for structuring and investigating the total set of relationships contained in multidimensional, non-quantifiable problem complexes (Zwicky 1966, 1969, cited by Ritchey, 2011). During the past 15 years, GMA has been extended, computerized and applied to long-term strategy management and organizational structuring (Ritchey, 2005). It is especially useful for developing models of alternative scenarios and strategies. This method is employed in Phase 1 of the project to identify important factors regarding crowd control and how they relate to one another.

Figure 2: Illustration of problem, puzzle and mess

2.1 GMA process

The task of any GMA process is to develop a morphological model that describes the total problem complex, which can then be used as a laboratory to test various initial conditions (inputs) against possible outcomes (outputs). The models are developed using the Computer Aided Resource for Morphological Analysis (CARMA) modelling platform. The first step involves the identification of the most important factors (or parameters) concerning the problem context. The second step identifies and defines a range of values or conditions for each variable. The variable and variable-condition matrix is the morphological field, which implicitly contains the solution space for the problem context (see Figure 3). The solution space contains all of the theoretically possible scenarios, which can amount to hundreds or even thousands.

Figure 3: The crowd control morphological field

The third step examines the internal relationships between the field parameters and reduces the field by weeding out all mutually contradictory conditions. This process is called Cross-Consistency Assessment (CCA) and is the most cumbersome and time-consuming phase of the morphological analysis process, but at the same time the most valuable step. The cross-consistency assessment is done to weed out inconsistent configurations. The two types of inconsistency are purely logical contradictions (based on the nature of the concepts involved) and empirical constraints (relationships judged to be highly improbable or implausible on empirical grounds) (Ritchey, 2006). CCA greatly reduces the number of configurations: a morphological field involving as many as 100,000 formal configurations requires no more than a few hundred pair-wise evaluations to create a solution space, although this can be tedious. The result of the work is a computerised model, or laboratory, in which alternative scenarios can be formulated, developed and evaluated.
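To illustrate the mechanics, the Python sketch below encodes a three-parameter fragment of the crowd control field and filters the configuration space with a toy cross-consistency set. The parameter values echo the paper, but the inconsistency judgments are illustrative placeholders, not the workshop's actual CCA matrix.

from itertools import product

# A fragment of the morphological field: parameter -> range of values.
field = {
    "Reasons for gathering": ["flash mob", "service delivery protest", "strike"],
    "Types of weapons in crowds": ["no weapons", "improvised", "firearms"],
    "Size of the crowd": ["<20", "20-1000", ">1000"],
}

# CCA: pairs of values judged mutually contradictory ("X" cells).
inconsistent = {
    ("flash mob", "firearms"),
    ("flash mob", ">1000"),
}

def consistent(config):
    # A configuration survives if no pair of its values is contradictory.
    return all(
        (a, b) not in inconsistent and (b, a) not in inconsistent
        for i, a in enumerate(config) for b in config[i + 1:]
    )

solution_space = [c for c in product(*field.values()) if consistent(c)]
print(len(solution_space), "of 27 configurations remain after the CCA")

# Use the field as a laboratory: fix one driver, read off compatible values.
flash_mob_runs = [c for c in solution_space if c[0] == "flash mob"]
for name, column in zip(field, zip(*flash_mob_runs)):
    print(name, "->", sorted(set(column)))

This filtering-and-projection step is, in miniature, what a platform such as CARMA supports at scale across all 11 parameters of the model and their pair-wise CCA judgments.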

3. Crowd control

Riots have occurred in every century and every region of the world. They have been attributed to poverty, unemployment, industrial disputes, political, religious and ethnic differences, sport, alcohol and even the weather (Kenny et al, 2001). Crowd management and crowd control are terms that are used interchangeably; however, "these are two distinct but interrelated concepts" (Abbott and Geddie, 2001). The former includes the facilitation, employment, and movement of crowds, while the latter comprises steps taken once a crowd (or sections of it) has begun to behave in a disorderly or dangerous manner (Abbott and Geddie, 2001). The main task of crowd control is to prevent riots. The control of a violent crowd is referred to as riot control, but for the purposes of this paper riot control is implied in crowd control. Various factors should be considered when controlling a crowd. These include:

Reasons for gathering.

The cultural, situational, psychological and social factors that contribute to violent behaviour of the crowd.

The threats or risks associated with a crowd.

Leaders of the crowd.

Law enforcers/ crowd controllers.

Any public assembly or gathering, whether lawful or unlawful, may require the response of law enforcers. The response can range from observation to engaging in various crowd management strategies (Cappitelli, 2012). Crowd controllers must at times take measures to protect themselves from agitated, fearful or angry elements in a crowd. At the same time, crowd control interventions can be experienced as provocative and threatening. Depending on how crowd members and controllers view each other, and to the extent that forces are present which actively encourage violent behaviour, events involving crowd control can escalate from a peaceful manifestation to chaos and violence. For the purposes of this study, crowd controllers refers to police officers and soldiers as law enforcement agencies. It is of paramount importance to understand that a gathering has three important phases: assembling, gathering and dispersing. The assembling phase refers to the movement of people from different locations to a common location within a given period of time (Kenny et al, 2001). The gathering phase refers to the collection of individuals and small groups in a common location, and the dispersal phase involves the movement of people away from where they have gathered (see Figure 4).

Figure 4: The phases of gathering (Kenny et al, 2001)

A large number of the articles reviewed on crowd control focus only on the gathering phase, and logically this is the starting point of the CCGMA model. The model focuses on analysing the relationships inherent between crowd control variables in order to assist law enforcers with their planning when they have to control a crowd. It is an extension and modification of a model initially developed in a workshop facilitated by Dr. Tom Ritchey for the European Defence Agency in 2010. Six researchers from the Command, Control and Information Warfare (CCIW) Competency Area at the Council for Scientific and Industrial Research (CSIR) in South Africa used Dr. Ritchey's model and literature reviews as groundwork to develop a crowd control model relevant to the South African context.

3.1 Crowd control general morphological analysis structure

This section provides detailed information regarding the Crowd Control General Morphological Analysis Model (CCGMAM). The model was developed in a two-day workshop at the CSIR. Following the GMA process, the first step identified the variables of the model. The focus question for the workshop was: "What are the most important factors/parameters/variables/dimensions regarding crowd control, and how do these relate to one another?" On the basis of this focus question, the group initially identified a set of 20 factors:

Reasons for the gathering
The common link (social identity)
Types of weapons in the crowds
Access to weapons
Size of the crowd
Collective memory of violence
Endurance of crowd
How the crowd perceives controllers
Types of intervention
Local rules (e.g. curfews)
Probability of propagating violence
Crowd's sensitivity to violence
Geographic-environmental constraints
Level of associated risks to controllers
Mandate for the use of force
Effects of adverse weather
How controllers perceive the crowd
General political situation
Crowd's respect of authority AND crowd's perception of own power
Crowd's mood

Through a series of discussions the factors were reduced to a manageable set of 11. These are categorized into three groups, as described below:

3.1.1 Situational variables (predictive inputs/diagnostic outputs):

Crowd’s reason for the gathering – what are the reasons for gathering? In South Africa, some people gather because they have been intimidated and sometimes they have been hired to “gather”. This phenomenon is unique to the South African context.

Types of weapons in the crowd – what weapons does the crowd have?

Size of the crowd – how many people have gathered?

Endurance of crowd – what is the duration or the time frame of the gathering?

Geographic environmental constraints – what constraints are inherent to the crowd?

Information dissemination – how is information distributed amongst the crowd?

Controller’s perception of the crowd – how do controllers view the crowd?

Crowd’s respect of authority and crowd’s perception of own power – what is the crowd’s perception of its own power and what is the crowd’s level of respect for authority?

3.1.2 Decision variables

Types of interventions – for controllers to use

Mandate for the use of force – by controllers

3.1.3 Consequential variables (predictive outputs/diagnostic inputs)

Probability of propagating violence – how violent can the crowd become?

The prototype variables and their associated values are shown in the morphological field in Figure 5. Once the morphological field was developed, the next step was to perform a cross-consistency assessment (CCA) in order to determine which variables directly affect other variables.

Figure 5: The main prototype modelling crowd control

Some of the pair-wise relationships assessed include:

Reasons for gathering ↔ Types of weapons in crowds,
Size of the crowd ↔ Reasons for gathering,
Flow of information/media coverage ↔ Reasons for gathering,
Endurance ↔ Reasons for gathering,
Flow of Information ↔ Size of crowd,
Types of intervention ↔ Reasons for gathering, and
Types of intervention ↔ Types of intervention.

The cross-consistency matrix in Figure 6 shows the whole list of pair-wise relationships. An "X" indicates incompatibility between the variables, and "-" means that the variables can co-exist. The "S" entries mark possible scenarios.

Figure 6: Cross consistency matrix with assessment for the crowd control model

3.2 Analysis

Figure 7 shows a completed and compiled prototype of the CCGMA model. The model has attained full coverage, showing that nearly all conditions are in some way linked to other conditions. According to the model, 'Isolate' and 'No access or exit' are not related to any of the variables.

Figure 7: A compiled prototype of the CCGMA model

When the solution space (or outcome space) is synthesized, the resultant morphological field becomes a flexible model in which anything can be an "input" and anything else an "output". Thus, with computational support, the field can be turned into a laboratory in which one can designate one or more variables as inputs in order to examine outputs or solution alternatives. Figure 8 displays the morphological field with one crowd control scenario.

Figure 8: Morphological field for crowd control with one scenario displayed Single or multiple drivers can also be selected in order to investigate more detailed conditions and outcome clusters (Figure 9 and Figure 10). In this case, if the reason for gathering is smart/flash mob (in red) as an input, what other parameters of the crowd control coexist with it? The (blue) outputs in the remaining parameters of the model point out the conditions that are most relevant to the designated input. Which variables co‐exist with “Reasons for gathering when the reason is flash mob”? Figure 9 reveals flash mob as the reason for gathering, which coexists with:

“no weapons” in the type of weapons in crowds;

crowd size that is greater than 20 but less than 1000;

internet based media as a typical information dissemination method;

the crowd can be there for hours;

the probability of propagation of violence ranges from low to moderate;

limited movement in geo‐environmental constraints;

no special constraints for mandate for use of force;

the controllers view the flash mob as friendly; and

high respect for authority and low perception of own power.

This type of scenario is especially relevant in the social media era.

Figure 9: Input flash mob (red) and output (blue)

Figure 10 displays the ability of the model to test the scenario where three inputs are selected.

Figure 10: Three factors selected (red) examine which other factors are compatible (blue)

3.3 Evaluation of general morphological analysis

3.3.1 Strengths and limitations of GMA

Compatibility: GMA is compatible with other modelling procedures, and can be employed as a test‐bed or first step in the development of other types of models.

Foster dialogue: The GMA process requires a diverse group of subject matter experts; the cross consistency matrix fosters dialogue among the participants which leads to a better understanding of the problem from different perspectives.

Audit trail: The method leaves an audit trail, which means there are no black boxes; all the steps and decisions taken are recorded in the model.

Facilitation: The success of a GMA depends largely on the ability and experience of the facilitator.

Participants: GMA cannot be effectively carried out in groups larger than 7-8 participants, since the whole point is to foster dialogue between subject specialists.

Time: GMA takes time. Depending on the complexity of the problem and the level of ambition, developing a morphological model can take between 2 and 10 full group‐workshop days.

Computer software: Doing group work with the type of problems described in this article is virtually impossible without the support of computer software.

3.3.2 Evaluation of the general morphological analysis crowd control model

The CCGMA model is based on existing and tested variables. This study is unique in its definition of the relationships between the variables. The model has identified the factors most important to crowd control and has shown how these relate to each other. However, the model focuses only on the gathering phase (Kenny et al, 2001). It is the view of the researchers that the inclusion of processes and variables pertaining to activities prior to the gathering requires a separate model that can be integrated with the existing one. Information dissemination is another variable that requires special attention; the focus would be to identify other specific information dissemination variables that are not currently captured in the model.

In general, it can be said that the CCGMA model confirms that crowd control is not simply a loose grouping of a number of concepts and technical areas. If the various aspects of crowd control are not integrated and analysed, then a complete picture of crowd control and training cannot be achieved. Furthermore, it is inappropriate to focus only on limited aspects of crowd control in the hope that, if these are in place, crowd control will transform automatically. A great deal of attention must be given to all the variables identified and the inter-relationships uncovered. It is through these inter-relationships that an understanding of possible strategies for achieving a decision support system for crowd control is developed.

4. Relevance of the crowd control model to cyber defence

The control, manipulation and dissemination of information have always been a staple of conflict, but the ability to use information in war is no longer a monopoly of the nation state (Rawley, 2012). Advances in information and communication technology, which offer the ability to speedily process, organize and disseminate information, have undoubtedly transformed social and economic practices, organizational structures and military operations. The use of electronic communications and social media continues to grow. People of all ages and backgrounds, international and local, are using these internet-based technologies for information dissemination. Crowds as well as law enforcement can use these tools effectively to achieve their different objectives. For law enforcement, the tools can be used for control and management purposes, such as building relationships with the public and protestors, and for communicating with populations by providing relevant information prior to an event and timely early warnings (Coronel, 2004). Equally, protestors can use the same tools to discredit the police and to recruit, organize and mobilize gatherings. As illustrated during the Arab Spring uprisings, the role of social media in the spread of protests cannot be ignored. Present discourse on crowd behaviour and crowd control downplays the role of social media. In South Africa, however, there is currently little evidence of instances where social media was extensively used for organizing, planning and mobilizing protests.

5. Conclusion

It is possible to model the disparate dimensions of crowd control in order to facilitate the simulation of crowds for training and awareness building. It is also important to note that the process of creating this type of model is as important as the product, that is, the model itself. The workshop allowed the participants to communicate their respective viewpoints on the issues at hand as well as to model these issues collectively. This is important for multi-stakeholder groups in understanding each other's positions and building smart groups or teams.

The model developed in this study presents a better understanding of the connections between institutions, actors and issues. Furthermore, it highlights the complexity associated with crowds and controlling them. A wealth of information and insight is locked up in the crowd control GMA model; without it, it would have been difficult to develop the insights necessary to identify and define the variables that are important. The power of the model lies in the inter-relationships that are clearly shown when interacting with it. The current model does not delve deeply into the role of social media or of intimidated or hired crowds. These issues will be raised with experts in the field, whose purpose will be to validate and verify the current model and to add variables that were not included.

Acknowledgements

We would like to express our sincere gratitude to Dr. Tom Ritchey. A large portion of this work is derived from the workshop he facilitated for the European Defence Agency (EDA) in 2010.

References

Abbott, J.L. and Geddie, M.W. (2001) "Event and Venue Management: Minimizing Liability through Effective Crowd Management Techniques", Event Management, Vol 6, pp 259-270.
Ackoff, R.L. (1974) Redesigning the Future: A Systems Approach to Societal Problems, John Wiley & Sons, Inc, New York.
BBC News Middle East (2011) "Egypt Protests Escalate in Cairo, Suez and Other Cities", [online], BBC Website, http://www.bbc.co.uk/news/world-middle-east-12303564.
Cappitelli, P. (2012) "Crowd Management, Intervention and Control", POST Guidelines, [online], California Commission on Peace Officer Standards and Training, http://lib.post.ca.gov/Publications/CrowdMgtGuidelines.pdf.
Coronel, S.S. (2004) "The Role of the Media in Deepening Democracy", [online], United Nations Online Network in Public Administration and Finance, http://unpan1.un.org/intradoc/groups/public/documents/un/unpan010194.pdf.
Kenny, M.J., McPhail, C., Waddington, P., Heal, S., Ijames, S., Farrer, D.N., Taylor, J. and Odenthal, D. (2001) "Crowd Behavior, Crowd Control, and the Use of Non-Lethal Weapons", Technical report, Institute for Non-Lethal Defense Technologies, Pennsylvania State Applied Research Laboratory, 1 January.
Mingers, J. and Rosenhead, J. (2004) "Problem Structuring Methods in Action", European Journal of Operational Research, Vol 152, No. 3, pp 530-554.
Rawley, C. (2012) "Liberated Information and the Future of Irregular Warfare", [online], Information Dissemination blog, http://www.informationdissemination.net/2012/05/liberated-information-and-future-of.html, 1 May.
Ritchey, T. (1998) "General Morphological Analysis - A General Method for Non-Quantified Modelling", 16th European Conference on Operational Analysis, Brussels.
Ritchey, T. (2006) "Problem Structuring Using Computer-Aided Morphological Analysis", Journal of the Operational Research Society, Vol 57, No. 7, pp 792-801.
Ritchey, T. (2006) "Modelling Multi-Hazard Disaster Reduction Strategies with Computer Aided Morphological Analysis", Reprint from the Proceedings of the 3rd International ISCRAM Conference, Newark.
Ritchey, T. (2011) Wicked Problems - Social Messes: Decision Support Modelling with Morphological Analysis, Springer, New York.
Ritchey, T. (2012) "Advanced Computer Support for General Morphological Analysis", [online], Swedish Morphological Society, http://www.swemorph.com/macarma.html.
Sosibo, K. (2011) "Who was Andries Tatane?", [online], Mail & Guardian, http://mg.co.za/article/2011-04-21-who-was-andries-tatane, 21 April.
Stringer, D. and Satter, R.G. (2011) "Britain Burns: Riots Spread through UK Cities", [online], Yahoo News, http://news.yahoo.com/britain-burns-riots-spread-uk-cities-013736610.html, 09 August.

A Vulnerability Model for a Bit-Induced Reality

Erik Moore
Academic Computing Services, Adams 12 Five Star School District, Thornton, Colorado, USA
eriklmoore@gmail.com

Abstract: The increasing proliferation and the psychological and physical embeddedness of the global digital infrastructure call us to reconsider traditional models of vulnerability, attack trees, and security auditing. The easy coordination of disparate digital means of attack suggests we should move to tighter coordination between digital information assurance, psychological operations, and physical security. Examples at the physical end of the spectrum include embedded Field Programmable Gate Array (FPGA) computer chips that can be configured on the fly to function as completely different chips, 3D printers that can be used to bypass traditional physical security, hypervisors that virtualize complexity previously instantiated in hardware, and immersive communications environments that are replacing traditional physical facilities. Digital technology also has a profound ability to monitor and induce behavior, opinion, and identity in ways that were not possible in previous eras. These include vivid multimedia production resources capable of inducing assumptions about events and facts in large populations, artificially intelligent systems capable of filtering large volumes of communications for population behavior and for patterns or persons of interest, and opportunities for highly engaging insertions of pseudonym contact and identity imprinting with low risk to operatives. Analyzing the behavior of a population and inducing behavior seem separate at first, but from the perspective of bit-induced reality and bit-monitored reality the two are ever closer. The author proposes that we re-assess vulnerability models to ensure that they capture the natural integration occurring across a new Psy-BIR-Phys spectrum, which tracks the level at which systems are a bit-induced reality (BIR) across a psychological-physical spectrum. The examples presented refer particularly to persistent threats and long-term security requirements. Traditional resources and functions are drawn by the author in a way that reflects changing vulnerabilities as they migrate within the author's Psy-BIR-Phys Matrix model. Scenarios presented in this work, like an attack on a device with a speaker, are formulated by the author based on an analysis of recent incidents and technology trends that include mobile devices, cloud infrastructure, programmable logic controllers, internet-based surveillance, and social media.

Keywords: vulnerability, threat, attack, virtual, 3D

1. Introduction

Traditional security models have evolved significantly with the introduction of digital data storage and transport. This has had a significant impact on how reality is formed in a chain of causation from moment to moment. As we computerize, the vulnerabilities that we traditionally expect often migrate to new areas. To create better vulnerability models, we need to compensate effectively for this change as we look to defend intellectual property and operate successfully against those with mal-intent in the new landscape of bit-induced reality.

To define the term "induced reality" precisely, a little background is in order. Reality in a chain of causation is generally induced by the prior interaction of objects with a given set of characteristics. A crystal in a supersaturated solution of related atoms, owing to the physics of its surface as a substrate, will induce the replication of additional crystals. The DNA molecule in an appropriate environment will induce the replication of itself in a way that evolves over time, aggregating variance in surviving lines of reproduction. As organisms evolve methods of communication, from the dance of a bee to written human language, those communications induce behavior and awareness in the organisms they affect, along with the secondary effects of their behaviors. Atop this web of tangible linguistic interaction, memes, or mental constructs that tend to replicate, induce memes in other organisms or media through live or codified communications networks. Because of these communications networks, technologies, like a magnetic core to store a bit of information, are created based on aggregating sets of knowledge models such as electromagnetic theory. And that magnetic core, given a certain magnetic state and interacting environment, can induce a particular Boolean response repeatedly, allowing the particular Boolean set state, or "bit", to induce effects in the chain of causation. We live in such a world.

Bit-induced reality is the most non-intuitive part of the human-made world because bit induction of reality can happen counter to our everyday experience. A simple example is the capability of infected computers to perform cyber-attacks while appearing to the user merely to be running slow. It is non-intuitive also because bit-induction of reality is one of the most rapidly evolving aspects of our human experience.
Erik Moore experience. What was normal about digital technology to the mental map of a person operating in 1983 is vastly different from someone in 2013 understands as the potential of using binary code. The later implicitly understands that it can induce events in regards to issues like privacy, personal safety, finance, propaganda, national borders, and various aspects of intellectual property.

2. Bit induction levels

As part of the model presented herein, bit induction levels need to be defined as variance from the other natural induction processes described in the introduction. The level of induction primarily reflects how much systems are created, guided, and sustained by digital means. An increase in bit induction may not be expressed in every case as "virtualization" in the contemporary sense of the word; it may be a move from analog television to digital television. Moving from an analog platform to a digital one induces new artifacts into the system, like digital compression, digital encryption, and address-specific reception.

Table 1: Bit induction levels

Level 5: An induced system that could not exist and is sustained completely by a digital cause. Example: Immersive virtual world or hypervisor-based web server.
Level 4: An induced system that, while existing independent of binary sustainment, exists in great specificity primarily because of a digital cause. Example: 3D-printed object that would be inordinately hard to create without computers.
Level 3: A system whose characteristics were heavily influenced by a digital cause. Example: Space shuttle designed with Catia software, many parts of which were manufactured using numerical tools.
Level 2: A system that, while created independently of digital means, finds its state influenced by a digital cause. Example: A gas centrifuge made from hand-drawn blueprints that was digitally controlled.
Level 1: A type of system that has susceptibility to, or a history of, being influenced by related digital causes. Example: Before starting a morning patrol, a person finds out by browsing the Internet that digital sensors in the area detect high levels of environmental UV radiation, suggesting that suntan lotion is in order.
Level 0: A system with no function induced by a digital cause. Example: Pre-1940s warfare.
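For readers who wish to apply the scale in an audit, Table 1 can be captured directly as a small lookup structure. The Python sketch below is illustrative only and is not part of the author's model; the asset names passed to it are hypothetical.

# Minimal encoding of Table 1 for tagging assets with a bit induction (BIR) level.
# Level descriptions are paraphrased from Table 1; the assets are hypothetical.
BIR_LEVELS = {
    5: "Could not exist without, and is sustained completely by, a digital cause",
    4: "Exists independently, but in great specificity because of a digital cause",
    3: "Characteristics heavily influenced by a digital cause",
    2: "Created independently of digital means; state influenced by a digital cause",
    1: "Susceptible to, or a history of, influence by related digital causes",
    0: "No function induced by a digital cause",
}

def tag_asset(name: str, level: int) -> str:
    """Return an audit-style label for an asset at a given BIR level."""
    if level not in BIR_LEVELS:
        raise ValueError(f"BIR level must be 0-5, got {level}")
    return f"{name}: BIR {level} ({BIR_LEVELS[level]})"

print(tag_asset("hypervisor-based web server", 5))
print(tag_asset("digitally controlled gas centrifuge", 2))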

3. Psy-Phy spectrum shifting

As experienced by the author while virtualizing various systems, an increase in the bit induction of a system often leads to a shift in how objects are both perceived and how they physically exist, along a spectrum ranging from a psychological existence to a physical existence. The shift along this spectrum occurs because systems with a higher level of bit induction introduce new digital artifacts not inherent in the earlier systems, and they leave behind physical or psychological components of the earlier systems that they supersede. For example, when we move from a physical key to a digital key code, we no longer need a physical "key" object, but we must hold in our minds a sequence to unlock a door. Clearly defining the spectrum along which this shift takes place can give those interested in the digitization of systems a common vocabulary.

Before proceeding, clear definitions will help the user of the Psy-BIR-Phys model understand the implications of shifting between the psychological end of the spectrum and the physical end of the spectrum. The category of all things physical is represented by the Greek character Φ, or "Phy." The term φυσική, or physics, refers to the natural world, as opposed to metaphysics, which refers to imagined influences on the world that are not confirmable in the natural world or its measurable processes. Likewise, psychology, represented by the Greek character Ψ or "Psy," derives from ψυχή or "psyche," which in ancient Greece was thought of as a metaphysical soul apart from the natural world. In contrast to those ancient notions, it is important when using the Psy-BIR-Phys model to understand that, as informed by the modern science of psychopharmacology, neuroanatomical experiments, behavioral genetics, and controlled psychological experiments, all things psychological are brain-based. Psychology is the study of the functions of physical brains and the resultant behaviors of animals; in the case of this model, the scope is primarily human animals. Therefore, when using the Psy-BIR-Phys model, shifting systems on the Psy-Phy spectrum means shifting where the objects appear to be interacting: either extant in the physical world, or instantiated as a state of physical neurons that allows us to perceive mental constructs and induce behaviors.

Table 2: The Psy-Phy spectrum

Ψ 5 / Φ 0: A perceived system that impacts the psychological state but has no map to external physical-world systems. Example: Misconception, subconsciously embedded thought, or delusion that drives behavior.
Ψ 4 / Φ 1: A system that does not map as perceived to the physical world, but has connections and impacts on the physical world. Example: Virtual-world command center actually impacting the delivery of disaster relief supplies.
Ψ 3 / Φ 2: A system whose psychological characteristics are perceived as functioning more than its physical characteristics. Example: Pamphlets dropped from an airplane as part of a Psychological Operations (PSYOP) mission.
Ψ 2 / Φ 3: A system whose physical characteristics are perceived as functioning more than its psychological characteristics. Example: Wall around a city with a particular architecture suggesting ownership.
Ψ 1 / Φ 4: A physical system that is perceived as functioning primarily in that capacity. Example: A pipe that people can see.
Ψ 0 / Φ 5: A physical system of which there is no current awareness. Example: Forgotten landmines or an undiscovered vulnerability.

One important aspect of the Psy‐Phy spectrum is that perception of the system can shift where a physical object actually sits on that spectrum. One might ask, “When is a pipe not a pipe?” The answer might be, “When it is a flagpole.” Thus, unintended consequences can arise when perceptual variance is not taken into consideration.

4. The Psy-BIR-Phys matrix

For the Psy-BIR-Phys model to be usable in vulnerability analysis, it must map changes in the level of bit induction in systems and indicate whether those changes create psychological or physical artifacts that expose vulnerabilities. A matrix combining the Psy-Phy spectrum and the bit induction level will be used to test for new psychological or physical situations where a change in bit induction level occurs, with each example mapped to changes in how participants relate to the system on the Psy-Phy spectrum. The case studies in Figures 1 and 4 should reveal patterns that can be used to better inform traditional vulnerability analysis.

Figure 1: The Psy‐BIR‐Phys matrix



To understand the matrix in Figure 1, consider the example of a command post. It must be a secure location where restricted data flows in and authoritative commands flow out, and it must be resilient in the face of human attack or natural disasters.

Figure 2: A public domain image of NORAD, a US military command center.

Figure 3: A virtual world command post constructed by the author

NORAD is a military command center, illustrated in Figure 2 in an open-source image from commons.wikimedia.org, with its physical location inside Cheyenne Mountain near Colorado Springs, CO. Being in the mountain is an inherent part of its security. As experienced by the author on a tour of the facility in the late 1990s, limited physical access, the surrounding mountain, and barbed-wire gauntlets created a space where operations could continue while the post was under extensive physical attack. These physical characteristics are a significant advantage in terms of resilience, and the facility sits, as graphed above, at a physical level of 4: significantly a physical system, but one also having psychological value. The embedded communications and data feeds suggest it has a bit induction level of 2; while analog devices might power the facility, operations are substantially influenced by binary communications and data feeds flowing in and out.

To remap operations to a virtual command post, one moves up the bit induction level to 5, in that all operations internal to the post are sustained by a bit state, as generally outlined over ten years ago (Filo et al 1998). Indeed, there is no physical command post; a set of binary states controls flickering screens, microphones, and speakers that induce potentially geographically distributed participants to act as if they are in a command post. This suggests that the system has moved on the Psy-Phy scale to Psy 4/Phy 1. The minds of participants construct a room that does not physically exist, based on the stream of bit-induced sensory data they receive. The barbed wire, the granite, and the guards are no longer outside the walls as the visual cues might suggest, but command operations can continue with much the same procedure set. Bit-induced artifacts include a new interface for accessing the room (client software on a PC), the resilience of running on geographically distributed infrastructure, and the vulnerability that the entire system can be brought down or infiltrated by digital attack.

When the author was touring NORAD in the 1990s, it was stated that significant portions of the operation were being moved out of the mountain for convenience. Global threats had not diminished, but some things had changed. The most likely and persistent threat to command and control communications links was no longer bombs, but attack through digital means. This implied that the value of vast granite walls was greatly reduced in its ability to protect against the most likely threat. The greatest vulnerability to the operation had become bit induced.
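The command-post transition can also be expressed compactly in code. The following Python sketch is not part of the author's model; it simply encodes the two case-study points above (the mountain post at Psy 1/Phy 4 and BIR 2, the virtual post at Psy 4/Phy 1 and BIR 5) to show how a security team might track a system's movement through the matrix, under the assumption from Table 2 that the Psy and Phy levels always sum to 5.

from dataclasses import dataclass

@dataclass
class MatrixPoint:
    """A position in the Psy-BIR-Phys matrix, using the paper's 0-5 scales."""
    name: str
    psy: int  # psychological level (Table 2)
    bir: int  # bit induction level (Table 1)

    @property
    def phy(self) -> int:
        # Table 2 pairs each Psy level with its complementary Phy level.
        return 5 - self.psy

def describe_shift(before: MatrixPoint, after: MatrixPoint) -> str:
    """Summarize how a system moved within the matrix for a vulnerability review."""
    return (f"{before.name} -> {after.name}: "
            f"BIR {before.bir} -> {after.bir}, "
            f"Psy {before.psy}/Phy {before.phy} -> Psy {after.psy}/Phy {after.phy}")

physical_post = MatrixPoint("Mountain command post", psy=1, bir=2)
virtual_post = MatrixPoint("Virtual command post", psy=4, bir=5)
print(describe_shift(physical_post, virtual_post))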




Figure 4: Psy‐BIR‐Phys matrix of technology transitions

5. Pervasive increase in bit induction levels

As illustrated in Figure 4, several functions of human capability, like designing, commanding, and controlling, have moved within the Psy-BIR-Phys matrix as users employ new technologies. This section contains an overview of some of these transitions, along with an illustration of where they are moving within the Psy-BIR-Phys matrix. The central question to ask is, "How have the attack surfaces changed?"

5.1 The ASIC to FPGA shift

Application-specific integrated circuits (ASICs) are chips that have long been embedded in devices like TV remote controls, elevator telephones, and alarm clocks. These devices have fulfilled primarily static roles like keeping time, enabling buttons, or routing a call. But ASICs (Moradi et al 2011) have been replaced over the last decade by a new generation of more versatile chips called Field Programmable Gate Arrays (FPGAs), whose function can be changed by software updates. While this is an advantage for manufacturers and indeed has value for end users, analysis of FPGAs is revealing new vulnerabilities. The FPGA is at a higher level of bit induction than an ASIC in that its very function and structure may be bit-induced repeatedly. What a user perceives as a static chip may have been maliciously induced to do something quite different. A device with a speaker connected to an FPGA might incorporate voice recognition to conduct targeted surveillance as a microphone, and the sensors now embedded in televisions might be redeployed by reprogrammed FPGAs to provide large amounts of data about their location. This increase in bit induction level calls for a new level of confirmation and auditing of the state of the digital devices we use, and a change in our perceptions of these devices. Unlike a personal computer, where we assume diverse functions, consumers assume embedded devices have been designed with specific functions. Moradi's (2011) work using power analysis to uncover encryption keys in FPGAs suggests that FPGA-enabled devices like weapons systems, communications gear, and network infrastructure are potentially subject to reprogramming for alternate purposes during firmware updates once their encryption keys have been discovered. The pervasive use of FPGAs in portable consumer electronics, Supervisory Control and Data Acquisition (SCADA) systems in industrial infrastructure, military applications, and communications technologies suggests that many core aspects of society are increasing in their level of bit induction. To discuss the changes in vulnerability, we can plot bit induction changes on the scale and analyze any movement on the Psy-Phy scale so that we can adapt our security practices. If we consider a speaker connected to a signal processor that has been redesigned from ASIC to FPGA, as in Figure 4, the function of the device moves away from a static physical description and becomes more interactive with the expectations of both the user and the programmer. Resetting the security analysis to understand that the device is moving in that direction helps in tuning the security model.

5.2 Computer hardware virtualization

The virtualization of computer hardware using a hypervisor causes "apparent" computer hardware to be instantiated with agility on fast underlying hardware. This represents a significant increase in the level of bit induction of the hardware infrastructure we think of as data centers and networks. Because there is little change in the way we access remote computers like servers, the psychological/physical experience does not significantly shift. The actual bit induction therefore increases, creating new attack surfaces, often without user awareness. Because vulnerabilities in software as pervasive as modern hypervisors have such high value, adjusting the security posture to accommodate the increased bit induction and the move away from the physical is an important part of understanding the vulnerability. Szefer and his team worked to address this vulnerability by actually reducing the level of bit induction (Szefer et al 2011). They offer a model that moves the provisioning of services back to the hardware layer. While this might create a more vulnerable target in the temporal domain, Szefer explains how it eliminates significant aspects of the hypervisor attack surface. Awareness of the level of bit induction offers insight into the motivation for Szefer's novel solution.

5.3 3D printers and security

The automated creation of increasingly complex physical objects from digital designs is likely to radically change the landscape of vulnerability models. As we move towards a world where inventory and transportation costs can be radically reduced by having a 3D printer onsite, we will see such devices become pervasive and increasingly capable. This means that physical security models will need to accommodate digital intrusion in new ways. 3D printers that can create strong parts like nuts and fans will naturally be attractive for use in battlespace because of the ability to supply a vast array of maintenance parts on the fly in lieu of large warehouses and extensive supply chains (Dortmans et al 2002). In contrast to many other technologies, this type of increase in bit induction level moves systems toward the physical end of the Psy-Phy spectrum, turning digital ideas into physical objects. Intentionally faulty spare parts become the least of a soldier's concerns if a 3D printer were coopted through digital means while physically unattended by personnel: weapons, keys, and other devices could be printed, ready and waiting for an infiltrator upon arrival.

5.4 Manipulation of situational perceptions

As Edward Bernays asked in the book Propaganda, "Is it not possible to control and regiment the masses according to our will without their knowing about it?" (Bernays 1928). Before bit-induced reality, and indeed before Bernays published these words, leaders throughout the ages used a variety of schemes to control public opinion covertly. Fabricating news stories, manipulating photographs, and starting rumors have all contributed to well-documented manipulations of the masses throughout history. As we move towards higher levels of bit-induced media, these methods have been greatly facilitated by PhotoshopTM and by digital distribution channels that can be targeted to particular populations. Timothy L. Thomas describes a new level of psychological operations (PSYOP) enabled by cyber technologies, labeled CYOP. He discusses a full range of e-leaflets, gray-press broadcasts, audio-frequency neural disruption, and ring-tone propaganda (Thomas 2007). While the evolution of PSYOP to leverage digital tools was inevitable, Thomas points out the vast new range of unintended consequences occurring for two reasons: digital CYOP knows no discrete boundaries, and it is difficult to defend against. It can be used by religions, political parties, nation states, and terrorist groups. One of the most striking pairs of examples that Thomas refers to is Hezbollah's creation of scenarios against Israel using the US MicroProseTM game Special ForcesTM to glorify assassinations and the like. The counter-example that Thomas describes is "Left Behind: Eternal Forces", where the player is a first-person shooter who wins by converting non-believers to Christianity or killing them. These products are both marketed through groups whose mission is to support identity building in their respective communities. Reflecting on Thomas' findings, the new CYOP and counter-CYOP battles might induce a stronger polarization of factional, national, religious, and political identities unless we develop ways to deflate their impact. "Left Behind: Eternal ForcesTM", published by Inspired Media EntertainmentTM, might have remained only an issue of American culture wars intent on game-induced identity manipulation, except that it was slated for inclusion in military "Freedom Packets" with "Operation Straight Up" as part of the "America Supports You" program through the U.S. Department of Defense (Weinstein 2007; Wills 2010). It was only after strong criticism in the U.S. media that the game was removed from packages going directly to troops in Iraq. The "Left BehindTM" and "Special ForcesTM" examples suggest that actors across a wide spectrum are intent on using computer games to weaponize cultural and religious identity among both citizens and troops as a function of proselytizing. The converse may also be equally true and somewhat symbiotic, as suggested by Weinstein's review of related activity at the Pentagon. What is clear is that the means for inducing identity modification has moved markedly to digital technologies.

Another method of perceptual manipulation mapped in the Psy-BIR-Phys grid in Figure 4 is manipulated web surfing. Unlike previous efforts in mass social engineering, it is possible to influence perceptions by controlling what an individual user sees when doing an apparently limitless search of the Internet. This can be done by controlling search engines, or by filtering, tracking, and inserting content in Internet activity using web filters. The most commonly publicized example of this is the Great Firewall of China (Clayton 2006). As we move towards a world where we have greater dependency on the Internet for knowledge discovery, knowledge confirmation, and information processing, we will become more susceptible to unperceived influence on our preferences, allegiances, and situational triggers. The reason that perceptual manipulation is placed at psychological level 5 in Figure 4 is that the user's psychological state is being manipulated without their awareness of the manipulation; the effect therefore appears to have no external connection.

Moving from PSYOP to counterintelligence, social media have also been moving social connections to a higher level of bit induction in many ways (Phillips et al 2011). In an experiment they describe, fictitious Facebook accounts (personas at bit induction level 5) garnered real U.S. Department of Defense friends quite readily. At the same time, friends and family post mission-sensitive data and personal relationships, making them Internet-accessible and trackable by the public at large, and particularly by hostile forces. This type of information could then be fed back into PSYOP activity as described above, spearphishing attacks, or tangible activity.

6. Vulnerability model analysis

Vulnerability can be assessed using a scoring system or a vulnerability identification framework, and the Psy-BIR-Phys model can shed light on additional implications within these systems. The example explored below is a scoring system. In the Common Vulnerability Scoring System (CVSS) as published through First.org (Mell 2007), there are three major metric groups used to score a vulnerability: the base metric group, the temporal metric group, and the environmental metric group. These track, respectively, the vulnerability characteristics that are relatively stable over time, those that vary over time, and the environment in which users of digital systems work. Most relevant to the use of the Psy-BIR-Phys model is the base metric group. It is composed of several elements, including the attack vector, which defines whether the attack is local, remote, or from an adjacent network. Access complexity is another element within the base metric group that could be informed by the Psy-BIR-Phys model; it inventories hurdles to gaining access, such as being able to physically insert a USB drive. Some increases in bit induction become harder to score in these terms, particularly when what is bit-induced is a socialized identity. Social engineering attacks are described in the CVSS model, but the Psy-BIR-Phys model supports a more systematic understanding of them. While there are other elements within the base metric group, attack vector and access complexity provide an opportunity to see where the Psy-BIR-Phys model has potential to add value.

As Mell et al describe the CVSS base metric "attack vector," there are three possible types: local to the device, on an adjacent network, and on any network with access to the Internet. When moving a command post from a physical facility to a virtual world, the meanings of these terms change, and we should begin looking at the attack vector types in a new way. As we saw in Figure 1, when we move a command post from a bit induction level of 2, where it is bit-enabled, to a level of 5, where it is sustained primarily by digital means, the dominance of physical characteristics (at Psy 1/Phy 4) shifts to a predominantly psychological interaction (at Psy 4/Phy 1) as defined in the preceding level tables. This suggests that we are moving away from tracking physical adjacency in terms of the facility. Instead of attacking the facility by entering the room with a USB drive, one must gain access and take data through other means. If the underlying network is secure, then the bias will be for social engineering attacks to become more attractive. If one is operating in a virtual world, then infiltrating it becomes more about psychological acceptance of activities than physical access by individuals and technologies. Thinking broadly about how the parties interact in a virtual-world space leads to consideration of the endpoints of the communications system, where humans actually exist as they interact within the virtual command center. While the local computer of the end user does not exist as a command console in the virtual command center, it does physically enable the team member to send commands to those devices, to interact with others, and to affect virtual interfaces within that system. Much as underlying hardware enables the hypervisor, the end users' computers, telecommunications connections, and servers provide a tangible infrastructure to facilitate command functions. This suggests that although the command functions have become more of a psychological interaction, and need to be defended accordingly, access is still quite physical, and the access points, the computers of the actual users, need to be secured with the level of assurance that each participant requires.
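To make the attack-vector shift concrete, the exploitability subscore of CVSS version 2 can be computed for the two command-post configurations. The sketch below uses the constant values published in the CVSS v2 guide (Mell et al 2007); the metric choices for each scenario are illustrative assumptions, not values given in this paper.

# CVSS v2 exploitability subscore:
# Exploitability = 20 * AccessVector * AccessComplexity * Authentication
ACCESS_VECTOR = {"local": 0.395, "adjacent": 0.646, "network": 1.0}
ACCESS_COMPLEXITY = {"high": 0.35, "medium": 0.61, "low": 0.71}
AUTHENTICATION = {"multiple": 0.45, "single": 0.56, "none": 0.704}

def exploitability(av: str, ac: str, au: str) -> float:
    return round(20 * ACCESS_VECTOR[av] * ACCESS_COMPLEXITY[ac] * AUTHENTICATION[au], 1)

# Assumed metrics: a physical post requires local access and is hard to reach;
# a virtual post is reachable from any network, here behind single authentication.
print(exploitability("local", "high", "single"))     # physical command post: 1.5
print(exploitability("network", "medium", "single")) # virtual command post: 6.8

The jump in the subscore mirrors the paper's argument: raising the bit induction level replaces physical adjacency with network reachability, and the granite walls no longer constrain the most likely attack vector.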

7. Conclusion

While the examples introduced in this paper are by no means exhaustive, they provide an introduction to the Psy-BIR-Phys matrix intended to suggest both the value of the model and methods of application. As the level of bit induction in human society increases, models like the one presented here will become increasingly necessary to cogently describe battlespace as it shifts between psychological and physical arenas through localized and distributed bit induction. Reflecting more broadly, human society itself is changing as we move to higher levels of bit induction; competing human priorities, accountability, enforcement, and conflict are all undergoing significant transformations that affect the destiny of humanity and the nature of our individual experiences.

References

Bernays, E. (1928) Propaganda, Liveright Publishing Corp.
Clayton, R., Murdoch, S. and Watson, R. (2006) Ignoring the Great Firewall of China, Lecture Notes in Computer Science, Volume 4258/2006.
Dortmans, P. and Curtis, N. (2002) Linking Scientific and Technological Innovation with Warfighting Concepts: How to Identify and Develop the Right Technologies to Win Future Land Battle, Land Warfare Conference, eds. Puri, B., Filippidis, D. and Quinn, S., Brisbane, Australia.
Filo, A.S., Morgenthaller, M.P. and Steiner, G.C., Virtual Command Post, Lionhearth Technologies, Inc., CA, US Patent US006215498B1.
Mell, P., Scarfone, K. and Romanosky, S. (2007) CVSS: A Complete Guide to the Common Vulnerability Scoring System 2.0, Forum of Incident Response and Security Teams (FIRST), http://www.first.org/cvss/cvss-guide.pdf
Moradi, A., Barenghi, A. and Kasper, T. (2011) On the Vulnerability of FPGA Bitstream Encryption Against Power Analysis Attacks: Extracting Keys from Xilinx Virtex-II FPGAs, CCS'11: Proceedings of the 18th ACM Conference on Computer and Communications Security, ACM, New York, NY, ISBN 978-1-4503-0948-6.
Phillips, K. and Pickett, A. (2011) Embedded with Facebook: DoD Faces Risk from Social Media, CrossTalk, The Journal of Defense Software Engineering, Vol 25, No 6, May/June 2011, http://www.dtic.mil/cgi-bin/GetTRDoc?Location=U2&doc=GetTRDoc.pdf&AD=ADA542587
Szefer, J., Keller, E., Rexford, J. and Lee, R. (October 2011) Eliminating the Hypervisor Attack Surface for a More Secure Cloud, ACM Conference on Computer and Communications Security.
Thomas, T. (2007) Hezballah, Israel, and Cyber PSYOP, IO Sphere Journal, Joint Information Operations Warfare Command (JIOWC), San Antonio, Texas, http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA465336
Weinstein, M. and Aslan, R. (August 27, 2007) Not so Fast, Christian Soldiers, Los Angeles Times.
Wills, D. and Steuter, E. (2010) Gaming at the End of the World: Coercion, Conversion and the Apocalyptic Self in Left Behind: Eternal Forces Digital Play, Reconstruction: Studies in Contemporary Culture, Volume 10, Number 1.



Results From a SCADA-Based Cyber Security Competition

Heath Novak and Dan Likarish
Regis University, Denver, CO, USA
Novak667@regis.edu
Dlikaris@regis.edu


Abstract: On April 1, 2011, Regis University hosted the 7th Computer and Network Vulnerability Assessment Simulation (CANVAS) competition with a turnout of 68 event competitors and at least two dozen faculty and spectators. The event was a major success and provided Regis University with valuable recognition in the academic community focused on information assurance. The prevailing trends at the end of 2010, the interestingly named Stuxnet malware, Critical Infrastructure Protection (CIP), and Smart Grid technology deployments, inspired the scenario for this cyber competition. Many government and industry-specific organizations have been stepping up efforts to heighten awareness amongst national organizations managing critical infrastructure, as well as authoring guidelines and policies for making progress on secure infrastructure. In recent times, CIP has received much greater attention from the United States Congress and other governmental agencies, such as the Government Accountability Office (GAO), due to the trend towards "connectedness", with distribution and communications systems being increasingly connected over TCP/IP networks. CIP is especially important due to the far-reaching damage that can be suffered by businesses, industrial and government facilities, and the general populace in the event of a successful cyber attack. Simulating a true utility environment for the purposes of a cyber competition scenario is next to impossible due to resource constraints and the unavailability of specialized equipment. However, the essence can be captured, and this is exactly what we strove for in the 2011 CANVAS cyber competition. Our primary goal was to introduce a CIP theme to a cyber competition in order to raise awareness of these types of attacks, especially since many power utilities across the nation are pushing Smart Grid infrastructure in order to offer value-added services to customers and increase efficiencies in power generation and distribution, which will inevitably increase the complexity and connectedness of power utility operations and customer home area networks that can be exploited by motivated actors. This paper will discuss these goals as well as some of the intricacies of developing the CANVAS cyber competition, including technical details, the extensibility of CIP-focused cyber competitions, and the continued development and value of CIP simulation infrastructure.

Keywords: CANVAS, CCDC, critical infrastructure protection, cyber competition, ICS, SCADA, virtualization

1. Introduction

In 2011, Regis University hosted the 7th Computer and Network Vulnerability Assessment Simulation (CANVAS) competition, a defense-oriented cyber security competition in which teams compete to analyze a given information-system-specific scenario and write an executive summary. The final report is evaluated and scored by faculty to identify the competition winner. CANVAS was developed as a collegiate defense-oriented competition by Michael Collins and Dino Schweitzer in 2004 (Collins, Schweitzer, Massey 2008). The event attracted 68 competitors broken up into 23 teams (3 participants per team), each with a single workstation and a BackTrack4 Linux LiveCD fully equipped with a plethora of security tools. The scenario had to be based on a vulnerable business, which allowed for a wide range of possibilities. Therefore, the host of the event needed to select a feasible scenario that could be implemented in an information systems environment. The Regis University principals brainstormed ideas for the event and decided that the best approach was to select a current high-profile attack being publicized in the mainstream news. Scenario requirements included an event that could be assessed by the participants in a single day, along with the use of visual aids, such as overhead projectors or TVs, to relay hints or cycle scenario details. The prevailing news item towards the end of 2010 was the Stuxnet worm, which was described as disrupting the Iranian nuclear program through modification of the configuration of programmable logic controllers (PLCs) controlled by Siemens Step 7 software (Falliere, Murchu, Chien 2011). The Regis University team decided that an attack based on Stuxnet, which included a critical infrastructure component, should be incorporated into the event. Since Smart Grid technologies have gained greater steam in the power utility space, we felt that this topic was ripe for representation. There were three primary requirements for the scenario: we needed to implement a solid representation of an information systems environment that could plausibly be used by a power utility; the desired "Stuxnet" attack theme had to be integrated; and the assessment had to be performable in the required timeframe (roughly six hours).




2. Critical infrastructure protection

Critical Infrastructure Protection (CIP) is getting greater attention from government and industry authorities due to recent incidents that highlight the dangers of vulnerable critical infrastructure, ranging from water treatment plants to nuclear power generation facilities. These concerns cover a wide range of possible threat agents (e.g. nation state actors, terrorists, hacktivists, hackers, etc.), with causes being either intentional or unintentional. Smart Grid technology adds another layer to the risk, as various industry leaders recognize, since many Smart Grid products are still in the early phases of integrating security. There are also concerns that current utility infrastructures contain insecure deployments that weaken the infrastructure used to manage power distribution, privacy, and safety. The complexity of new Smart Grid networks, coupled with the increase in data propagation through a network, leads to a larger attack surface and a weaker security posture. Federal departments such as the Government Accountability Office (GAO) and the Federal Energy Regulatory Commission (FERC) have pushed forward initiatives to drive stronger security measures through the critical infrastructure industries (Wilhusen, 2012). The National Institute of Standards and Technology (NIST) has spent the past several years developing publications (SGIP, 2010) that highlight the levels of risk in various segments of a power utility information system infrastructure. Many areas are identical to the information system infrastructure in other industries, but there are obvious differences specific to the critical infrastructure of power grids, such as the substations, interconnections, and end-user equipment such as smart meters (i.e. home area networks). The move to extend the TCP/IP network to home area networks (HAN) increases the attack surface dramatically. The Electric Power Research Institute (EPRI) has emphasized cyber security initiatives, such as SCADA systems reviews, in order to address these risks earlier in the planning phase. EPRI has even dedicated a portion of its website to discussing how scenario planning and modelling can facilitate decision-making and "guide strategic investments", which lends credence to the idea that scenario development and modelling in a cyber competition is a valid undertaking (EPRI, 2012). Additionally, the North American Electric Reliability Corporation (NERC) has been an influential leader in the development of programs focused on ensuring consistent improvement of security controls in power utility infrastructure (Wilhusen, 2012). Due to the specialized nature of power utility information systems, it was near impossible to integrate all facets of the infrastructure in our virtual environment for the competition. Instead, we simulated as much of the expected information systems as possible. This included the typical networking gear (e.g. switches, routers, and firewalls), server environments used for communications and managing critical equipment, and workstations used by fictitious power utility employees who manage the environment. We even included a PLC controlling lights that were used to simulate power failures in a map view of the Denver Metro area.

3. CANVAS scenario and infrastructure

Our scenario represented a fictitious power utility company based in the Denver Metro area (leveraging the fact that most of the competition participants are local residents) that is in the process of implementing a Smart Grid infrastructure and has requested the assistance of contractors to implement new technology to satisfy Smart Grid implementation requirements. The fictitious contractors do a poor job of integrating the necessary infrastructure during this transition period and leave the utility vulnerable to a cyber attack. Naturally, the power utility is attacked, and critical SCADA (Supervisory Control and Data Acquisition) systems are manipulated to disrupt power in the Denver Metro area. Customers lose power and the company needs a crack security team to evaluate the situation. Contrary to most utility companies, this fictitious utility company has far too many security vulnerabilities, providing ample avenues of investigation from which to generate the required report.

The PLC we acquired was used to control lights on a map display and act as a visual representation of the power grid of the Denver Metro area. Regis faculty wrote code to manipulate the PLC to turn off the lights at predefined intervals, representing a cascading outage effect. Our visual representation of metropolitan areas losing power added context and emphasized, as a simulation, the dangers of what could happen in the event of a successful cyber attack against the power infrastructure.

The network and information systems environment was developed to emulate a power utility operations center and included several physical Cisco routers, switches, and PIX firewalls in a redundant and load-balanced architecture, which worked well in maintaining performance requirements for the competition. The network was designed to load-balance the traffic from the 23 teams taking part in the event. The teams were split across three rooms at the Denver Tech Center campus, two rooms housing eight teams each and one room housing seven teams. Each team workstation had a 100 Mbps Ethernet connection to an aggregation switch, which in turn connected to three Cisco PIX firewalls with three Cisco 2600 routers further upstream. Cisco's proprietary Hot Standby Routing Protocol (HSRP) was used to maintain link and CPU redundancy in case of a hardware failure. Internet access was offered to the competitors so they could do research or download tools to aid them in their assessment.

Server virtualization technology factored heavily into the design of the CANVAS information systems infrastructure because it offered efficient resource utilization, allowing us to virtualize the guest operating systems of the servers that the competitors would have to evaluate. With a single "bare metal" server we were able to deploy all of the virtual servers we needed for the event. We were able to develop a reasonably sized information systems infrastructure on limited hardware, exemplified by our leveraging of existing infrastructure currently used to provide student labs in various technology-focused courses at Regis University. The CANVAS virtual machines (VMs) were added as members of the assigned vSwitches in order to complete network connectivity to the Cisco routers used for the event. Additionally, virtualization provides a snapshot mechanism that saves the state of a guest operating system (i.e. virtual machine). This is powerful because we were able to save the pre-event state of each guest operating system and revert to it when the event was completed. We can also save images for forensic investigation at a later date as part of a course lab or other academic project. Hence, the VMs themselves became assets from which value can be drawn beyond the timeframe of the competition. We updated the Regis University virtualization infrastructure to use the VMware ESXi 4.1 hypervisor and installed a server with VMware vCenter Server 4.1 for managing the entire virtualization infrastructure.

The internal network was set up to assign addresses using Dynamic Host Configuration Protocol (DHCP), with an ample number of IP address leases for all competitors, while statically assigned addresses were used for the servers of the fictitious power utility operations center. The BackTrack4 LiveCD distribution was provided to the competitors so they had a ready-made environment from which to run port scans, penetration tests and system exploits. DNS was implemented to make the environment more realistic as well; CANVAS VMs were given names corroborating that they were indeed part of the fictional Colorado Energy Company domain (coloenergy.com). The demilitarized zone (DMZ) contained two virtualized servers that acted as customer portal web servers: a Windows 2000 server with Microsoft IIS 5.0 and SQL Server 2000, and a Windows 2003 server with XAMPP installed. XAMPP is a bundle of web application software including Apache, MySQL, PHP, Perl, an FTP server, and phpMyAdmin, and it allows quick installation and configuration of web services. Older versions of Windows were used to simulate the use of older systems supporting legacy applications in power utility operation environments.
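Returning to the PLC-driven map display described above: the paper notes that faculty wrote code to switch the lights off at predefined intervals, but does not reproduce it. The Python sketch below is a hypothetical reconstruction, assuming a PLC reachable over Modbus/TCP and the third-party pymodbus library (3.x import path shown); the IP address, coil numbering, and timing are invented for illustration, and the actual PLC and protocol used at the event may have differed.

import time
from pymodbus.client import ModbusTcpClient  # third-party: pip install pymodbus

PLC_HOST = "192.168.10.50"   # hypothetical PLC address
OUTAGE_COILS = [0, 1, 2, 3]  # hypothetical coils wired to the map-display lights
INTERVAL_SECONDS = 60        # hypothetical predefined interval between outages

def cascade_outage() -> None:
    """Turn the map lights off one by one to simulate a cascading grid failure."""
    client = ModbusTcpClient(PLC_HOST)
    if not client.connect():
        raise ConnectionError(f"Cannot reach PLC at {PLC_HOST}")
    try:
        for coil in OUTAGE_COILS:
            client.write_coil(coil, False)  # de-energize one region's light
            time.sleep(INTERVAL_SECONDS)
    finally:
        client.close()

if __name__ == "__main__":
    cascade_outage()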
Additionally, Damn Vulnerable Web Application (DVWA, http://www.dvwa.co.uk) was downloaded and installed in order to quickly build a vulnerable website. It is important to provide an environment that is not secured so that the competitors actually have something to write about in their report. Regis University did not have faculty or students available to build an insecure website, so DVWA was leveraged to build one fairly quickly. The servers in the DMZ embodied the efforts of our fictitious power utility to offer customers their own web portal for access to account and utilization information related to their electricity services. Both servers were fully patched; however, the web applications themselves were not adequately secured. The web servers were naturally public-facing, so they were the primary means of infiltration by the fictitious attackers. The intent was to show that a SQL injection attack was used to gain privilege escalation on the web servers, and from there the attackers gained entry into the private section of the power company network. Once inside the private network, the attackers were able to leverage unpatched management servers to gain access to the PLCs of various infrastructure equipment, where they modified configuration details or simply disabled components to cause power outages in the grid. Unpatched systems (e.g. Windows servers and workstations) are a commonly recognized vulnerability in critical infrastructure environments due to the need to keep them highly available; applying patches often incurs downtime, so this activity is often neglected. We felt this was an important point to bring across, so we represented this vulnerability in the competition environment. The emphasis was placed on the type of attack used, not on how sophisticated the attack itself needed to be, so no malware code was developed for the event and placed in the environment. Instead, we placed artifacts within the environment that mimicked some of the side effects of malware, such as open ports and unexpected communications on the network, to mimic command and control activity invoked by the attackers. This competition was about the final report, which needed to be written in a way that an unknowledgeable executive can understand, so it was not necessary to introduce advanced artifacts that might be common in the wild.

Figure 1: General attack path for CIP scenario

The attack path is as follows: 1) a successful SQL injection attack allowed the attackers to gain privileged access to the web servers in the DMZ; 2) poor network security controls existed between the DMZ and the private network (e.g. weak or non-existent packet filtering); 3) unpatched servers in the management network (including the server used to interface with PLC controls) were compromised; 4) the PLC was exposed and manipulated, which caused havoc in the power distribution systems. The final written report did not have to reference the exact technical detail of the attack. However, it was expected that the final report specify the attack path and the general details of how the PLC was manipulated to cause the fictitious power outage. The report should also have pointed out all of the other discovered vulnerabilities in the environment, recommendations for mitigating the current risks, and ways of minimizing further exposure in the future.
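Step 1 of the attack path is exactly the kind of flaw DVWA is built to demonstrate. The snippet below is a generic illustration of how a competitor might probe the fictitious customer portal for SQL injection; the host name, session cookie, and response-matching heuristics are hypothetical, and DVWA's actual page layout may differ between versions.

import requests  # third-party: pip install requests

# Hypothetical DVWA-style endpoint on the fictitious customer portal.
URL = "http://portal.coloenergy.com/vulnerabilities/sqli/"
PAYLOAD = "' OR '1'='1"  # classic tautology probe
COOKIES = {"PHPSESSID": "example-session-id", "security": "low"}  # hypothetical

resp = requests.get(URL, params={"id": PAYLOAD, "Submit": "Submit"},
                    cookies=COOKIES, timeout=10)

# Either a database error string or a tautology that returns every customer row
# indicates that user input reaches the SQL query unsanitized.
if "error in your SQL syntax" in resp.text or resp.text.count("Surname") > 1:
    print("Endpoint appears injectable; record it for the final report")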

4. Pushing the boundaries

The experience we had developing the CANVAS cyber competition and basing it on a CIP problem highlights areas that can be augmented and extended for use outside of cyber competitions. By leveraging the experience from this event and the existing virtualization infrastructure, we should be able to expand on the concept and integrate more advanced aspects of a CIP problem. With additional development and a focus on simulations of power distribution and actual smart meter technology, we can build an environment that would allow industry peers to augment their own organizational security by leveraging the simulations for testing attack and protection methods. Smart Grid software currently exists to help fill this void. For instance, the U.S. Department of Energy (DOE) at Pacific Northwest National Laboratory (PNNL) has developed an open-source software framework called GridLAB-D (http://www.gridlabd.org/) that can be used to perform simultaneous simulation of the electric grid, including power flow, end-use loads, and market functions and interactions within a power grid (Hass, White, 2012). By integrating GridLAB-D into the information systems used in a competition environment, we should theoretically be able to develop models of how certain attacks can be initiated in various domain areas of a power distribution system (e.g. smart meters, protocol activity over a TCP/IP network, management traffic, etc.), the subsequent characteristics of the environment following the attack, and possible mitigation techniques that can prevent serious outages or loss of privacy.



We can use GridLAB-D to expand on the common server technologies (simulating management systems) used in CANVAS by simulating the back-office distribution lines and endpoint smart meters, further fleshing out a realistic power utility management and distribution infrastructure. Adding this extra level of realism can greatly enhance the experience of the participants of cyber competitions, as well as offer an additional tool for evaluating security in Smart Grid environments. At the time of this writing, GridLAB-D lacks a communications network component (slated for release in Q3 of 2013) that could augment the realism of the simulation. However, the most recent release includes a C++ API, which offers an opportunity for motivated academic institutions to contribute to the work and introduce the desired features. For instance, the introduction of a communications network model is imperative for emulating the information systems infrastructure used for management and data aggregation, which can be leveraged for real-time interaction by both red and blue teams in a competition. Additionally, it is important to include industry-standard industrial control system protocols such as Modbus and DNP3 to add realism to network forensics activities. The value of this can be substantial, as it can serve both as a training tool for employees of power utility companies and as a platform that industry leaders can use to improve policy. Further research should be conducted in this area to answer questions about the value proposition and feasibility. We encourage stronger collaboration with federal regulators, NIST (specifically the Smart Grid Interoperability Panel Cyber Security Working Group), public utilities, and certified academic institutions in order to improve transparency between stakeholders of critical infrastructure protection. There is great value in continuing to augment scenario development and threat modelling by leveraging cyber competitions, since we can integrate red and blue team activities and introduce a malicious/intentional aspect to a simulation. GridLAB-D is built primarily to simulate forecasting and reliability models, with some consideration of unintentional disruptions (i.e. weather conditions that impact voltage levels). However, it should be possible to introduce malicious intent by leveraging the existing API and adding additional modules to include a distribution management system, PLC, and customer portal, which can be attacked by red teams to simulate the desired unauthorized access and modification of critical infrastructure components. Blue teams will gain greater insight into how these systems work as they put effort into incident response and disaster recovery operations during the competition.
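Until GridLAB-D exposes a network component, one low-effort integration path is to drive it from a competition scoring engine as an external process. The sketch below is a hypothetical wrapper, assuming a model file feeder.glm whose recorder object (from GridLAB-D's tape module) writes meter readings to meter_out.csv; the file names, column name, and threshold are invented for illustration and would need to match the actual model.

import csv
import subprocess

MODEL = "feeder.glm"      # hypothetical GridLAB-D model containing a tape recorder
OUTPUT = "meter_out.csv"  # CSV file the model's recorder object is assumed to write
VOLTAGE_FLOOR = 110.0     # hypothetical scoring threshold, in volts

def run_simulation() -> None:
    """Run one GridLAB-D pass; assumes the gridlabd CLI is on PATH."""
    subprocess.run(["gridlabd", MODEL], check=True)

def count_undervoltage_rows() -> int:
    """Score red-team impact by counting recorded readings below the floor."""
    with open(OUTPUT) as f:
        # GridLAB-D tape output typically begins with '#' comment lines.
        lines = [ln for ln in f if not ln.startswith("#")]
    return sum(1 for row in csv.DictReader(lines)
               if float(row["voltage"]) < VOLTAGE_FLOOR)  # assumed column name

if __name__ == "__main__":
    run_simulation()
    print(f"Under-voltage intervals: {count_undervoltage_rows()}")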

5. Conclusion

The choice of scenario is important when developing any kind of cyber competition, but especially with competitions that must result in a written evaluation, such as the deliverable in CANVAS or the Collegiate Cyber Defense Competition (CCDC). The reason is that context is important in writing a report. The judges act as the management of the fictional company and score the competitors based on the content and on how well they convey the messages the fictional manager would need to hear in order to make the decisions necessary to improve the company's security posture. The final report itself acts as a good barometer for how the competitors would write a similar report in the real world. A winning report conveys to the judges that the authors understand the various security vulnerabilities they discovered and can come up with an actionable plan for mitigation. Thus, the scenario itself factors significantly into the message being conveyed in the final report. Does the competitor understand the business? Do they understand what the primary assets are? What priority should be placed on the security controls to be implemented in the environment to protect the assets? In contrast, competitions in the Capture-the-Flag vein are more tactical in nature and rely less on scenario development.

The CANVAS competition that Regis hosted can be used as a framework for future events with respect to infrastructure development, testing, benchmarking, and feasibility analysis. A competition based on more advanced topics within a CIP problem is greatly encouraged. The complexities being introduced into Smart Grid infrastructure are generating greater risks that need to be understood and managed. Introducing advanced elements in a cyber competition is valuable in that it raises awareness amongst non-stakeholders (e.g. competitors, judges, bystanders, etc.), develops a citizenry (i.e. potential workforce) with greater insight into the threats, and builds a framework that can be extended to stakeholders in associated domains (i.e. government, academia, industry, etc.). Competition infrastructure (including information systems, network topologies, system logs, software, etc.) can and should be reused and organically extended to reasonably match industry standards so the participants get a realistic representation from which to learn and enhance skills. Integrating third-party software like GridLAB-D to simulate the backend (i.e. power generation, transmission, distribution, data aggregation, etc.) can greatly enhance a cyber competition by introducing the realism and complexity that exist in the critical infrastructure industry.

Finally, cyber competitions are used by government, industry, and academia as a useful method for evaluating talent, raising awareness of current threats, and providing training in a safe environment. Team-building and interpersonal communications are key skills that are strengthened through participation in a cyber competition. The stress experienced by competitors is real, but is without consequence to a critical environment. Thus, competitors can feel some of the stress one would experience handling incidents affecting critical infrastructure, and through this experience can learn how to manage it in order to be more productive in similar environments in society. It is therefore imperative to leverage cyber competitions as a valuable tool for building a stronger workforce, and by extension a stronger industry, through the integration of CIP themes in cyber competition scenarios.

References

Carlin, A., Manson, D. and Zhu, J. (2010) Developing the Cyber Defenders of Tomorrow With Regional Collegiate Cyber Defense Competitions (CCDC), Information Systems Education Journal, Vol. 8, No. 14, ISSN 1545-679X, http://isedj.org/8/14/ISEDJ.8(14).Carlin.pdf
Collins, M., Schweitzer, D. and Massey, D. (2008) CANVAS: A Regional Assessment Exercise for Teaching Security Concepts, Proceedings of the 12th Colloquium for Information Systems Security Education, University of Texas, Dallas, ISBN 1-933510-96-7, http://www.usafa.edu/df/dfe/dfer/centers/accr/docs/collins2008a.pdf
Conklin, A. (2006) Cyber Defense Competitions and Information Security Education: an Active Learning Solution for a Capstone Course, Proceedings of the 39th Hawaii International Conference on Systems Sciences, Center for Infrastructure Assurance and Security, The University of Texas at San Antonio, http://www.tech.uh.edu/caedc/documents/Conklin_HICSS39.pdf
Ericsson, G. (2010) Cyber Security and Power System Communication: Essential Parts of a Smart Grid Infrastructure, IEEE Transactions on Power Delivery, Vol. 25, No. 3, July 2010, http://www.csit.qub.ac.uk/media/pdf/Filetoupload,286696,en.pdf
Falliere, N., Murchu, L. and Chien, E. (2011) W32.Stuxnet Dossier, Symantec Security Response, http://www.symantec.com/content/en/us/enterprise/media/security_response/whitepapers/w32_stuxnet_dossier.pdf
Hass, A. and White, F. (2012) GridLAB-D: A One-of-a-Kind Energy Grid Simulator, Pacific Northwest National Laboratory (PNNL) News Center, http://www.pnnl.gov/news/release.aspx?id=948
Stouffer, K., Falco, J. and Scarfone, K. (2010) Guide to Industrial Control Systems (ICS) Security, National Institute of Standards and Technology (NIST) Special Publication 800-82, csrc.nist.gov/publications/nistpubs/800-82/SP800-82-final.pdf
The Smart Grid Interoperability Panel (SGIP), Cyber Security Working Group (2010) National Institute of Standards and Technology Internal Report (NISTIR) 7628, Volumes 1-3, http://csrc.nist.gov/publications/PubsNISTIRs.html
White, G. and Williams, D. (2005) Collegiate Cyber Defense Competitions, The ISSA Journal, October 2005, https://www.issa.org/Library/Journals/2005/October/White,%20Williams%20-%20Collegiate%20Cyber%20Defense%20Competitions.pdf
Wilhusen, G. (2012) Cyber Security: Challenges in Securing the Electricity Grid, United States Government Accountability Office (GAO), Testimony Before the Committee on Energy and Natural Resources, U.S. Senate, GAO-12-926T, http://www.gao.gov/assets/600/592508.pdf



Design of a Hybrid Command and Control Mobile Botnet

Heloise Pieterse (1) and Martin Olivier (2)
(1) Council for Scientific and Industrial Research, Pretoria, South Africa
(2) University of Pretoria, Pretoria, South Africa
hpieterse@csir.co.za
molivier@cs.up.ac.za

Abstract: The increasing popularity and improvement in capabilities offered by smartphones caught the attention of botnet developers. Now the threat of botnets is moving towards the mobile environment. A mobile botnet is defined as a collection of compromised smartphones controlled by a botmaster through a command and control network to serve a malicious purpose. This study presents the design of a hybrid command and control mobile botnet. It describes the propagation vectors, command and control channels, and topology of the design. The hybrid design explores the efficiency of multiple command and control channels against the following objectives: no single point of failure must exist in the topology, low cost for command dissemination, limited network activities and low battery consumption per bot. The objectives are measured with a prototype that is deployed on a small collection of Android-based smartphones. In addition, the prototype is evaluated against mobile security software and anti-virus software. The results indicate that current mobile technology exhibits all the capabilities needed to create a mobile botnet.

Keywords: mobile, botnet, command and control, hybrid

1. Introduction

The last few years have seen a revolution in the development of cellular phones, transforming the devices from basic voice and text phones into all-in-one portable devices known as smartphones. Demonstrating functionality similar to that of a traditional computer, smartphones today provide interconnectivity capabilities such as Internet access, device-to-device communication and a wide variety of software applications. The improvement in smartphone capabilities and the popularity of mobile devices have caused malware developers to shift their focus towards them: during the first quarter of 2012 mobile malware increased by 1200% (Lardnios 2012). This sudden rise of malware, coupled with the popularity of smartphones, creates possibilities for new threats to emerge, such as mobile botnets. Botnets are a well-known threat to users of the Internet and personal computers. They are responsible for the delivery of spam, the collection of information, the processing of large quantities of data and distributed denial of service (DDoS) attacks (Grizzard et al. 2007). With the constant improvement of smartphone computing power and communication capabilities, malware developers are starting to introduce the concept of botnets to mobile devices such as smartphones. A mobile botnet is a network consisting of a collection of compromised smartphones, controlled by a botmaster through a command and control (C&C) network. The C&C network is the core of any botnet, as it allows for the efficient dissemination of commands from the botmaster to all the bots. Traditional C&C technologies, such as those based on HTTP, are also useful in mobile botnets. However, certain smartphone capabilities, such as SMS and Bluetooth, can provide the botmaster with additional C&C channels to support command dissemination. Given the popularity of smartphones and the continuous rise in mobile malware, it is only a matter of time before mobile botnets become a dominant force in the development of mobile malware. This paper presents the design of a new mobile botnet, called the Hybrid Mobile Botnet, which exploits multiple C&C channels to disseminate commands. The objective of this study is to explore the efficiency of multiple C&C channels and to raise awareness of the threats posed by mobile botnets. We also analyse the behaviour of the newly designed botnet by building a prototype and deploying it on a small collection of Android-based smartphones. The remainder of this paper is structured as follows. We discuss the history of mobile botnets in Section 2, while Section 3 describes the model of the Hybrid Mobile Botnet. The design of the prototype, its execution and the results are presented in Section 4. In Section 5 we discuss whether the objectives of the Hybrid Mobile Botnet are achieved, and Section 6 concludes the paper.




2. The history of mobile botnets

The history of mobile botnets does not date as far back as that of traditional botnets, since mobile malware only started appearing in 2004. Still, it took nearly five years before mobile malware displayed functionality that closely resembled that of botnets. The first was the Symbian worm Yxes (Apvrille 2010), which targeted Symbian phones running the OS9 operating system (OS). The malware was responsible for sending out SMS messages, retrieving the International Mobile Equipment Identity (IMEI) and International Mobile Subscriber Identity (IMSI) numbers of the phone and communicating with remote servers. Its ability to connect to the Internet was the key characteristic that led many to believe it was part of a mobile botnet. The malware had no C&C network, however, and although it had the ability to contact remote servers, its processing of commands was limited (Apvrille 2010). Near the end of 2009 new malware appeared that targeted Apple's iPhones. The malware, later named ikee.B (Porras et al. 2010), included C&C logic and allowed the botmaster complete control over the infected iPhone. To propagate, ikee.B searched Internet IP addresses for SSH services and then attempted to connect to the responding service as root by using the default password, "alpine". The malware archived SMS messages and then forwarded the messages, along with other information collected from the phone, to a server located in Lithuania. Even though ikee.B had limited growth potential, it provided a foundation for the future development of mobile botnets (Porras et al. 2010). During 2010 security analysts discovered a new Trojan horse, Geinimi, targeting smartphones running the Android OS. Geinimi (Wyatt 2012) was the first Android malware to display functionality closely resembling that of botnets. The malware opened a backdoor on the infected device and transmitted the collected information to a remote location. It also had the potential to receive commands from a remote server. Beyond this basic botnet functionality, Geinimi raised the sophistication of mobile botnet technology significantly: the malware deployed an off-the-shelf byte code obfuscator to hide botnet activities and encrypted chunks of the C&C traffic (Wyatt 2012). The three pieces of mobile malware described above established the platform for the development of future mobile botnets. They revealed that it is possible to take concepts from botnets running on PCs and apply them to mobile devices. Indeed, it is possible to establish C&C for mobile botnets, and the next section discusses a hybrid approach.

3. Proposed hybrid mobile botnet

The purpose of the proposed Hybrid Mobile Botnet is to explore the efficiency of multiple C&C channels against the following objectives: no single point of failure within the topology, low (monetary) cost for command dissemination, limited network activities and low battery consumption per bot. The design of the Hybrid Mobile Botnet consists of three main components: the propagation vector, the C&C channels and the mobile botnet topology. Although multiple mobile botnet designs exist in the literature (Geng et al. 2012; Singh et al. 2010; Xiang et al. 2011; Faghani & Nguyen 2012), our proposed mobile botnet is, to our knowledge, the first to use multiple C&C channels to disseminate commands. The use of multiple C&C channels makes this mobile botnet harder to detect, more cost-effective and more reliable.

3.1 Propagation vector

The propagation vector is responsible for disseminating the malicious bot code to smartphones. Common techniques for spreading such code include social engineering and vulnerability exploits. The Hybrid Mobile Botnet uses social engineering, tricking users into downloading a popular application that is infected with the malicious bot code. Such an application is well-known and legitimate, but the original code has been re-engineered and repackaged with additional bot code. A user installs the application but is unaware of the additional configuration taking place in the background of the smartphone. For this propagation vector to succeed, the botmaster selects an application that is currently popular among smartphone users. The botmaster then reverse engineers the selected application and includes the malicious bot code without affecting the original code modules or their functionality. When the



botmaster completes the repackaging of the application, it is returned to the Application Market, where it awaits downloading. The motivation behind this propagation vector is two-fold. Firstly, returning the malicious application to the Application Market gives the mobile botnet the ability to reach a wide audience. Secondly, choosing a popular application allows for the possibility that the malicious application will spread by word of mouth within social circles. It is for these reasons that the Hybrid Mobile Botnet deploys via the propagation vector described above.

3.2 Command and control channels

The C&C channels are the most important component of a mobile botnet, as they are responsible for disseminating the commands from the botmaster to the mobile bots. Because of this critical role, the C&C channels form an attractive target for a defender trying to bring the mobile botnet down. To improve the robustness of the Hybrid Mobile Botnet, multiple C&C channels are utilized: SMS, Bluetooth and HTTP.

3.2.1 SMS C&C channel

SMS is a popular service offered by the mobile phone network and is supported by most smartphones available today. SMS provides multiple advantages that make it a suitable channel for C&C. These advantages include (Zeng et al. 2012):

Ubiquity: Most smartphones can handle SMS messages.

Offline accommodation: A Service Centre stores the SMS messages if the recipient’s smartphone is turned off.

Hiding malicious content: An SMS message can hide malicious content.

Multiple send and receive channel options: for example, sending SMS messages via online websites.

To design a stealthy unidirectional SMS C&C channel, both the cost of sending the SMS messages and the need to prevent the smartphone user from detecting the received SMS messages must be taken into consideration. Services are currently available that offer free SMS texting via web interfaces (for example Text4Free). Such websites offer the botmaster the opportunity to send multiple SMS messages without incurring any costs, while possibly keeping his or her identity hidden. To prevent the smartphone user from detecting the commands sent as SMS messages, every mobile bot intercepts all incoming SMS messages before they reach the inbox. SMS messages containing the specific passcode are aborted, while all other SMS messages pass through to the inbox to avoid detection by the smartphone user.

3.2.2 Bluetooth C&C channel

Bluetooth is the second C&C channel of the Hybrid Mobile Botnet. Unidirectional Bluetooth was selected as a C&C channel because of its availability on most smartphones and because it provides a stealthy mechanism for command dissemination. There is, however, an important aspect of Bluetooth that must be taken into consideration: like any other electronic component, it consumes battery power. If Bluetooth is left on indefinitely, it will quickly drain the battery of the smartphone, which can lead to the discovery of the mobile bot. To minimize battery consumption, Bluetooth will only be active during specific periods of the day and only for a limited time. These periods are known as periods of mobility and are defined according to stability and availability. For the purpose of this paper we define three periods of mobility:

No Mobility:

Stability: Stability is high with no changes in geographical positioning.

Availability: Active for long periods, during nightfall and early morning hours, when people are sleeping.

Low Mobility:

Stability: Stability is moderate with infrequent changes in geographical positioning.




Availability: Active for moderate periods, during day time, when people are actively working.

High Mobility:

Stability: Stability is low with frequent changes in geographical positioning.

Availability: Active for short periods, during morning hours and late afternoons as people travel to their destinations.

From the three periods of mobility mentioned above, Low Mobility provides the most stable period for the longest available time, and therefore the Bluetooth C&C channel will only be active during this period.

3.2.3 HTTP C&C channel

The botmaster requires knowledge about the mobile botnet and all of the mobile bots actively participating in it. Both the SMS and Bluetooth C&C channels are inadequate for retrieving this information, so the mobile botnet requires an additional channel to transport it. The additional channel utilizes HTTP and allows a mobile bot to transfer information to the Control Server. The purpose of the bidirectional HTTP C&C channel is to forward information between a mobile bot and the Control Server. The information includes the mobile phone number, Bluetooth MAC address, geographical data, IMEI number and IMSI number. The HTTP C&C channel thus exists purely to support the construction of the ever-changing mobile botnet.

3.3 Topology of the hybrid mobile botnet

A mobile botnet consists of a collection of compromised smartphones that are organized into a structure, often referred to as the topology. The topology of the mobile botnet allows the botmaster to efficiently disseminate the commands to all the mobile bots currently participating in the mobile botnet. In order to describe the topology of the Hybrid Mobile Botnet, certain terminology must be clarified:

Mobile bot: a compromised smartphone that can assume one of two distinct roles: cluster head bot or receiver bot.

Cluster head bot: a mobile bot within a cluster botnet that directly receives the commands via SMS messages from the botmaster. It is also responsible for forwarding the received commands to the receiver bots in its assigned cluster.

Receiver bot: a mobile bot that receives commands from a cluster head bot.

Botmaster: the entity responsible for originating commands via SMS messages to a selection of cluster head bots.

Control server: a server managed by the botmaster that stores information about the mobile bots actively participating in the mobile botnet.

Active period: the time period that allows the cluster head bot to exchange commands with the receiver bots within its assigned cluster. Formed at a specific time and location.

Bot ID: the Bluetooth MAC address of a smartphone.

Bot list: each mobile bot contains a file that lists the Bot IDs of the other mobile bots within a specific cluster botnet.

The topology of the Hybrid Mobile Botnet is shown in Figure 1. For simplicity, the HTTP C&C channels to the Control Server are not shown. The global botnet consists of a collection of cluster botnets, and its structure is dynamic due to the constantly changing cluster botnets. A cluster botnet, which also forms a dynamic structure, consists of a collection of mobile bots in close proximity. The dynamic property of both the global botnet and the cluster botnets is due to the mobility of the infected smartphones.



To allow communication within a cluster botnet, the mobile bots utilize the Bluetooth C&C channel. The Bluetooth C&C channel requires the mobile bots to be within close range (10 meters) of each other to communicate, and therefore the cluster botnet is also location dependent. Due to this location dependence and the dynamic property of cluster botnets, a cluster botnet will only exist for a specific time period at a specific location. Thus only during the available active periods will the cluster head bot exchange commands via Bluetooth, continuing until all the receiver bots within the cluster botnet have received the command.

Figure 1: Topology of the hybrid mobile botnet

With the Bluetooth C&C channel being location dependent, it is inadequate as the global communication medium between the cluster botnets. Therefore the SMS C&C channel is used to propagate the commands via SMS messages to all of the cluster head bots. Due to the possibly large number of cluster head bots, it would be impractical for the botmaster to send the command directly to all of them. To keep monetary costs low, the SMS C&C channel utilizes an arbitrary tree-structured topology. This arbitrary property of the tree structure means each cluster head bot sends only a specific number (between one and five) of SMS messages. The arbitrary tree-structured topology improves the stealth of the mobile botnet and increases the difficulty of predicting the flow of command dissemination. The dynamic topology increases the complexity of detecting the Hybrid Mobile Botnet, but it also complicates the process of command dissemination. Using the C&C channels as described above allows the mobile bots to communicate in an effective and timely manner.
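As an illustration of the fan-out implied by the arbitrary tree, the following minimal simulation sketch (Python; illustrative only, not part of the original design, and it transmits nothing) estimates how many cluster head bots one command can reach when every head forwards between one and five SMS messages:

    import random

    def simulate_reach(depth, seed=None):
        """Count cluster head bots reached after `depth` forwarding rounds,
        where each head forwards the command to one to five further heads."""
        rng = random.Random(seed)
        frontier, reached = 1, 1   # the botmaster's initial SMS reaches one head
        for _ in range(depth):
            frontier = sum(rng.randint(1, 5) for _ in range(frontier))
            reached += frontier
        return reached

    # Example: reach per number of forwarding rounds (a fixed seed for repeatability)
    print([simulate_reach(d, seed=1) for d in range(1, 5)])

Because each bot sends at most five messages, the per-bot monetary cost stays bounded while coverage grows geometrically, which is the cost/stealth trade-off described above.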

4. Prototype design and evaluation

The prototype consists of a small collection of smartphones infected with malicious bot code that runs on the Android OS (version 2.3.3 and above). The purpose of the prototype is to evaluate the effectiveness of the Hybrid Mobile Botnet on real devices and to measure the objectives stated in Section 3. A visualization of the prototype is shown in Figure 2.

Figure 2: Topology of the Hybrid Mobile Botnet prototype



The prototype was developed specifically for the Android OS. The selection of the Android OS as the development platform was two-fold. Firstly, as of the second quarter of 2012 the Android OS led the market with 64.1% of smartphone sales, making it the most popular OS for smartphones (Van der Meulen & Pettey 2012). Secondly, the Android OS allows any user to create, develop and upload applications to Google's Play store. It is for these reasons that we selected the Android OS as the development platform. The prototype consists of the following devices: a Samsung Galaxy Pocket, a Samsung Galaxy S2 and Google's Nexus 7 tablet. Each device is infected with exactly the same piece of malicious bot code. During the execution of the prototype, battery consumption, data consumption and anti-virus applications are evaluated. The prototype is designed to execute during hourly intervals (see Figure 3), instead of daily intervals, to make performance and execution easier to evaluate. Within every hourly interval, the time period between twenty past and twenty to the hour, a total of 20 minutes, represents the period of Low Mobility (see Section 3.2.2) and is the time when the mobile bots are active. For the first two hours of execution, the prototype collects GPS data at ten-minute intervals. During the last interval, the GPS data collected in that period of Low Mobility is uploaded to the Control Server via the HTTP C&C channel, and the response from the Control Server includes the address of the elected cluster head bot and the command dissemination flag. The command dissemination flag indicates whether the following period of Low Mobility will perform command dissemination or continue with the collection of GPS data. Only the botmaster can start the process of command dissemination, by sending the command via an SMS message to the cluster head bot. If the response from the Control Server is positive for the start of command dissemination, then during the following active period the mobile bots will focus only on sending or receiving the commands. When the mobile bots complete the process of command dissemination and the execution of the received command, they return to collecting GPS data.
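The hourly schedule described above can be summarized with a small scheduling sketch (Python; illustrative only, not the authors' Android implementation). The window boundaries, taken here as minute 20 to minute 40 of each hour, and the function names are our assumptions:

    from datetime import datetime

    LOW_MOBILITY_START = 20   # "twenty past" the hour (assumed boundary)
    LOW_MOBILITY_END = 40     # "twenty to" the hour (assumed boundary)

    def in_low_mobility_window(now: datetime) -> bool:
        """True while the prototype's active (Low Mobility) period is in effect."""
        return LOW_MOBILITY_START <= now.minute < LOW_MOBILITY_END

    def next_action(now: datetime, dissemination_flag: bool) -> str:
        """Decide what the prototype does in the current interval: sleep outside
        the window, otherwise disseminate commands or collect GPS data depending
        on the flag last returned by the Control Server."""
        if not in_low_mobility_window(now):
            return "sleep"
        return "disseminate_command" if dissemination_flag else "collect_gps"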

Figure 3: Timeline of execution of the prototype

4.1 Execution of the prototype

To track the execution of the prototype, the tPacketCapture application was installed on the Nexus 7 tablet (the only device that supports the application). This application performs packet capturing without requiring the device to be rooted. The captured data is saved as a PCAP file and can be viewed using Wireshark. Using this application makes it possible to capture the communication occurring between the mobile bot and the Control Server. With this application, the Nexus 7 tablet monitors the HTTP traffic that occurs across a Wi-Fi connection. During the execution of the prototype the Nexus 7 tablet acts as a receiver bot, and only the receiver bots collect information from the device. Upon receiving the command, these mobile bots must collect the IMEI and IMSI numbers of the device and send the information to the Control Server. For the purpose of this experiment, the IMEI and IMSI numbers were simulated on the Nexus 7 tablet, since the tablet has no mobile network connectivity.




Figure 4: Captured data of the mobile bot's network activities

The data captured while executing the prototype on the Nexus 7 tablet is displayed in Figure 4. The lines highlighted in black are the captured packets that relate directly to the execution of the prototype; they are examined more closely in the remainder of this section. The first connection made to the Control Server is via the UploadInfo.php script (see Figure 5). During this connection the receiver bot collects the mobile phone number and Bluetooth address of the smartphone and forwards them to the Control Server. The mobile botnet requires this information to uniquely identify each mobile bot.

Figure 5: Captured data sent via the UploadInfo.php script

Multiple connections occur between the receiver bot and the Control Server between 08:40:04 and 08:40:06 via the UpdateLocationData.php script. During each connection the mobile bot uploads the collected GPS data (see Figures 6, 7 and 8).

Figure 6: Collected GPS data for the 20 minute interval

Figure 7: Collected GPS data for the 30 minute interval




Figure 8: Collected GPS data for the 40 minute interval

After the last connection, the Control Server responds with the Bluetooth address of the elected cluster head bot (28:98:7B:3A:8A) and the command dissemination flag (currently set to false). The false value of the flag means the following active period will continue with the collection of GPS data (see Figure 9).

Figure 9: Response from the Control Server

The next connections occur between 09:40:11 and 09:40:13, during which the collected GPS data is once again uploaded to the Control Server. The response received from the Control Server has, however, changed (see Figure 10): the command dissemination flag is set to true, meaning that the botmaster sent the command via an SMS message to the cluster head bot somewhere between 08:40 and 09:40. The next active period will therefore perform command dissemination and execution.

Figure 10: Response from the Control Server

Figure 11 reveals that the receiver bot successfully received and executed the command. The mobile bot then forwards the IMEI and IMSI numbers to the Control Server via the UploadStolenInfo.php script.

Figure 11: Stolen IMEI and IMSI numbers

This step-by-step analysis of the captured packets shows that the prototype executed correctly, without any complications. The prototype thus illustrates that current mobile technology exhibits all the capabilities required for developing a mobile botnet.

4.2 Evaluation

This section discusses the evaluation of the prototype on the mobile devices, focusing on the battery and data consumption of a mobile bot as well as an analysis of anti-virus applications. The prototype executes for a period of three hours, and the same experiment was replicated on three separate occasions.

4.2.1 Battery consumption

A significant decrease in battery power could alert the smartphone user to the presence of the mobile bot, which in turn could lead to the bot's discovery. Therefore, throughout the execution of the prototype, the consumption of battery power was closely monitored.



To effectively monitor battery consumption, the GSam Battery Monitor application was installed on all the smartphones participating in the execution of the prototype. This application can monitor each application or service individually and determine its battery consumption. Across the three separate experiments, each mobile bot consumed only 0.1% of the battery power over a period of three hours; over 24 hours of operation the smartphone would therefore lose only 0.8% of its power to the executing mobile bot. The execution of the mobile bot thus has little influence on the battery, increasing the difficulty of detecting it.

4.2.2 Network activities

A sudden increase in data consumption can potentially alert the smartphone user to the presence of the mobile bot. After the execution of the prototype, each smartphone was analysed to determine the data consumption of the mobile bot (see Figure 12).

Figure 12: Data consumption of the prototype

On average, each mobile bot consumed 5.427KB during its execution, with a standard deviation of 0.846KB. Over a monthly period a mobile bot will therefore consume less than 200KB of data. This low data consumption will not alert the smartphone user and further increases the difficulty of detecting the mobile bot on a smartphone.

4.2.3 Anti-virus analysis

Four mobile anti-virus applications (AVG Anti-virus, Avast Mobile Security, Lookout Security & Anti-virus and Norton Anti-virus & Security) were installed on a smartphone prior to the execution of the prototype. All of the anti-virus applications were active during the execution of the prototype but failed to identify any malicious activity. After the execution of the prototype, all of them scanned the applications available on the smartphone; none of these scans reported a malicious application. The evaluation also revealed that the anti-virus applications share most of the same permissions as the mobile bot application (access location data, read identity info and access messages), so it is impractical to determine whether an application is malicious simply by looking at its permissions. The analysis of the four anti-virus applications shows that new mobile malware can go undetected. This inability of the anti-virus applications to identify mobile bot activity, together with the shared permissions, improves the secrecy with which the mobile botnet can operate.

5. Discussion

The purpose of the prototype was to explore the efficiency of multiple C&C channels against the following objectives: no single point of failure within the topology, low cost for command dissemination, limited network activities and low battery consumption per bot. The Hybrid Mobile Botnet accomplishes the first objective by ensuring each mobile bot contains a file (the Bot list) with the Bot IDs (Bluetooth addresses) of the other mobile bots within its cluster. Thus, should the Control Server become unavailable, the Hybrid Mobile Botnet can still function to a certain degree by using the information in the Bot list.



Forming cluster botnets that communicate with the mobile bots in their assigned cluster via Bluetooth ensures that the overall cost of communication within the mobile botnet stays low. Limiting the number of SMS messages that can be sent from an individual mobile bot also keeps the cost of communication low, thus meeting the second objective. From the evaluation of the prototype (see Section 4.2) it is possible to conclude that the remaining objectives, limited network activities and low battery consumption, are also met. Each mobile bot consumes on average 5.427 kilobytes while executing during the period of Low Mobility, and connecting to the Control Server only a limited number of times ensures that the objective of limited network activities is met. Only 0.1% of the smartphone's battery power was consumed during the execution of the prototype, confirming compliance with the last objective of this mobile botnet design. This study demonstrates that a cost-effective and stealthy mobile botnet is possible through the implementation of a hybrid architecture using existing smartphone communication channels.
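The extrapolations quoted above follow directly from the measured figures. A small worked sketch (Python) makes the arithmetic explicit; the deployment-rate assumptions (continuous three-hourly execution for the battery figure, one active period per day for the monthly data figure) are our reading of the paper's extrapolations:

    battery_pct_per_3h = 0.1                  # measured: 0.1% per three-hour run
    daily_battery_pct = battery_pct_per_3h * (24 / 3)
    print(daily_battery_pct)                  # -> 0.8 (% of battery per day)

    mean_kb_per_period = 5.427                # measured mean (std. dev. 0.846KB)
    monthly_kb = mean_kb_per_period * 30      # one active period per day, assumed
    print(monthly_kb)                         # -> ~162.8KB, under the 200KB figure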

6. Conclusion

As smartphones become more powerful, they become ideal targets for mobile malware. One threat that smartphone users currently face is that of mobile botnets. In this paper we proposed the design of a new mobile botnet that utilizes multiple C&C channels, namely Bluetooth, SMS and HTTP. To analyse and measure the performance of this model, a prototype was designed and executed on a small collection of smartphones. From the analysis of this prototype it is possible to conclude that this mobile botnet exhibits both cost-effectiveness and stealth. Future research will focus on improving the mobile botnet design by encrypting all of the communication occurring between the mobile bots and the Control Server, randomizing the mobile bot activities so that they become even more difficult to detect, and exploring other possible C&C channels for command dissemination. The ultimate goal of this research is to discover techniques by which future mobile botnets can be detected on smartphones.

References

Apvrille, A. (2010), "Symbian worm Yxes: Towards mobile botnets?", 19th Annual EICAR Conference.
Faghani, M.R. and Nguyen, U.T. (2012), "SoCellBot: A new botnet design to infect smartphones via online social networking", 25th IEEE Canadian Conference on Electrical and Computer Engineering.
Geng, G., Xu, G., Zhang, M., Guo, Y., Yang, G. and Wei, C. (2012), "The Design of SMS Based Heterogeneous Mobile Botnet", Journal of Computers, Vol. 7, No. 1, pp. 235-243.
Grizzard, J.B., Sharma, V., Nunnery, C., Kang, B.B.H. and Dagon, D. (2007), "Peer-to-peer botnets: Overview and case study", Proceedings of the First Workshop on Hot Topics in Understanding Botnets, USENIX Association, p. 1.
Lardnios, F. (2012), McAfee: Mobile Malware Explodes, Increases 1,200% in Q1 2012, [Online] Available: http://techcrunch.com/2012/05/23/mcafee-mobile-malware-explodes-increases-1200-in-q1 2012/ [Accessed 8 October 2012].
Porras, P., Saïdi, H. and Yegneswaran, V. (2010), "An Analysis of the iKee.B iPhone Botnet", Security and Privacy in Mobile Information and Communication Systems, pp. 141-152.
Singh, K., Sangal, S., Jain, N., Traynor, P. and Lee, W. (2010), "Evaluating Bluetooth as a medium for botnet command and control", Detection of Intrusions and Malware, and Vulnerability Assessment, pp. 61-80.
Van der Meulen, R. and Pettey, C. (2012), Gartner Says Worldwide Sales of Mobile Phones Declined 2.3 Percent in Second Quarter of 2012, [Online] Available: www.gartner.com/it/page.jsp?id=2120015 [Accessed 1 October 2012].
Wyatt, T. (2012), Security Alert: Geinimi, Sophisticated New Android Trojan Found in Wild, [Online] Available: http://blog.mylookout.com/blog/2010/12/29/geinimi_trojan/ [Accessed 4 July 2012].
Xiang, C., Binxing, F., Lihua, Y., Xiaoyi, L. and Tianning, Z. (2011), "Andbot: Towards advanced mobile botnets", Proceedings of the 4th USENIX Conference on Large-scale Exploits and Emergent Threats, USENIX Association, p. 11.
Zeng, Y., Hu, X. and Shin, K.G. (2012), "Design of SMS commanded-and-controlled and P2P structured mobile botnet", WiSec '12: Proceedings of the Fifth ACM Conference on Security and Privacy in Wireless and Mobile Networks, p. 137.



Functional Resilience, Functional Resonance and Threat Anticipation for Rapidly Developed Systems

David Rohret, Michael Kraft and Michael Vella
Computer Sciences Corporation, Inc., San Antonio, USA
drohret@ieee.org
Mkraft5@csc.com
mvella3@csc.com

Abstract: Traditionally, network-centric rapid-development teams concentrate primarily on functionality, integrating commercial off-the-shelf (COTS) and government off-the-shelf (GOTS) technologies to meet specific goals on a compressed development schedule. In most cases basic security measures will be implemented, but there is an assumption that the integrated COTS and GOTS systems are secure, despite the introduction of new development and integration into existing programs of record. Functional resilience for rapidly developed systems is rarely considered and can result in operational failure in real-world scenarios due to long recovery times following an attack or adverse effect. To compound the problem, techno-centric assessment teams approach their targets from one perspective (or technology) at a time, preventing discovery and mitigation of vulnerabilities that can be exploited by an adversary who is goal-oriented rather than technology-centric. Sandia National Laboratory's (SNL) Information Design Assurance Red Team (formerly the Information Operations Red Team Assessment (IORTA)) lists eight separate red teaming methods (SNL, 2007):
1. Design assurance red teaming
2. Red team hypothesis testing
3. Red team gaming (scenario play)
4. Behavioral red teaming
5. Red team benchmarking
6. Operational red teaming
7. Analytical red teaming
8. Penetration testing
Each of SNL's methods is depicted as a separate action completed by multiple teams or subject matter experts. For rapid development or deployment technologies, this model of independent assessments presents three shortcomings that prevent an accurate portrait of a rapidly developed system's security posture:
1. The inability to assess for functional resilience or resonance and anticipated threats, due to disparate red teams, data collection methods and limited collaboration
2. Time and resource constraints on rapidly fielded technologies
3. The inability to identify vulnerabilities created through cross-domain/technology integration
SNL's methodologies were developed for traditional acquisition and development models that allow for extended testing, mitigation and re-testing. Although SNL's red teaming definitions have become widely accepted, the model does not support current DoD acquisition processes for rapidly fielded systems and falls short in accurately identifying and mitigating vulnerabilities before a technology is deployed. Adaptive Red Teams (ARTs) research and employ adversarial techniques to exploit vulnerabilities that are overlooked by traditional assessment teams in the form of false negatives and/or false positives. ARTs employ all eight of SNL's red teaming actions simultaneously and use any means available to achieve their intended effects, including physical, environmental and social factors combined with other technologies or methods that can be used to disrupt, attack or otherwise compromise a given technology. To accurately vet technologies and operational processes for vulnerabilities and shortfalls, an ART will emulate an adversary by using the adversary's tools, methodologies and tactics in an attempt to attain similar goals.
This approach is an effects-based, goal-driven assessment rather than a technology-centric approach to mitigating risk in experimental, rapidly developed and deployed operational technologies and systems. This paper briefly defines a framework for adaptive red teams and introduces the processes necessary to integrate security and resiliency into development practices for rapidly developed systems and for programs integrating COTS and GOTS technologies and systems.

Keywords: adaptive red team, functional resilience, functional resonance, preliminary threat assessment, security posture

1. Adaptive red team (ART) framework methodology defined

ARTs utilize specialized skill sets that enable them to identify, validate, demonstrate and mitigate vulnerabilities in rapidly developed emerging and operational technologies. This includes in-depth knowledge of, and experience with, ancillary project requirements that may not be recognized as integral to the technology or process being assessed. Ancillary requirements include, but are not limited to:

Measuring functional resiliency: ability to withstand or recover from an attack or adverse effect

Measuring functional resonance: negative effects associated with integrating new technologies with existing components/systems

Threat anticipation: using analytical data to predict, prevent and mitigate future adverse effects




Logistical requirements: using logistical data (shipping, notifications, orders, etc.) to disrupt delivery and deployment of a system

Communications: intercepting, altering, or deleting communications information prior to and during operations to include Operations Security (OPSEC) and social engineering/social network attacks

Local Area of Responsibility (AOR) infrastructure attacks: power, water, fuel, etc.

Environmental factors: natural and man‐made

An ART will conduct research, testing and demonstrations to determine the processes and equipment required to successfully accomplish comprehensive vulnerability assessments of rapidly fielded and emerging technologies. Information related capabilities (IRC) and variables outside of computer network security (CNS), such as radio frequency (RF) security, electronic warfare (EW), operations security (OPSEC), military deception (MILDEC), social networking and military information support operations (MISO), are included in the overall vulnerability assessment and reporting.

2. ART techniques and procedures

Few technological concepts are fully realized in their first instantiation. Most complex systems or technology concepts form iteratively and incrementally over time, through a process of exploration and experimentation that unfolds during development, following multiple design meetings and technical reviews by operators and decision makers (the users). An ART begins by establishing a baseline assessment process using the research and demonstration procedures described in the following paragraphs, each tailored as required with project manager and developer concurrence. The following processes are scalable and can be tailored to the specific or unique requirements of each assessment, based on availability, resources and the technologies being developed and/or integrated.

Preliminary Threat Assessment: The ART creates a Preliminary Threat Assessment (PTA) of the identified or assumed technologies and operational deployment guidelines for the targeted system(s). This includes reviews of the requirements (type of assessment and vulnerabilities associated with technologies/components) and the associated ancillary requirements (functional resiliency, functional resonance, logistics and integration). PTAs are accomplished at the earliest development stage of a project and are intended to help guide secure development practices. Once the PTA is completed, an initial architectural view of the system(s), including operating procedures, is developed from an adversarial perspective. Because the PTA is developed at the outset of a development project, it is maintained as a living document, updated to include new technologies, development requirements, etc., until a baseline is established or an alpha product completed. The ART identifies the capabilities required for an assessment and creates a threat document outlining the adversary's ability to attack or compromise the systems described. This document provides the project management team with the following information:

Initial research on an adversary's expected ability to attack or compromise the system

Vulnerabilities commonly associated with similar technologies/processes

The level of effort required to assess (attack) the targeted systems (assessment scope)

A PTA document is based on open-source data and information available to hackers and adversaries, identified by the ART through research. The PTA not only provides the scope of the effort but also identifies whether an ART assessment is necessary for a specific development project. Once a technology or development design is baselined and the technologies have been identified, the PTA can be developed into a preliminary vulnerability report.

Preliminary Vulnerability Analysis: The PVA is a detailed analysis accomplished after the PTA. It extends the research already accomplished and identifies in detail all technologies and integrated systems associated with a development project, including the ancillary requirements. An ART uses the PVA to identify software (operating systems, applications and web services), specific hardware platform requirements, RF communication systems and data links (if any), and the characteristics of each system. As part of the PVA, the ART identifies network architecture and trust relationships, security measures in place, radio frequencies used, and unique or proprietary equipment/capabilities developed



specifically for the project. ARTs conduct detailed vulnerability research, identifying open-source vulnerabilities for the identified technologies as well as accurate adversarial techniques and methodologies. Research for the team includes acquiring associated exploits and investigating probable adversaries for specific technologies in the intended areas of operation, to include:

Adversarial target analysis: Terrorist groups, cartels, gangs, attack cells, cyber criminals.

Review of open-source intelligence sites, to include: Open Source Intelligence (OSINT), GOTS intelligence gathering on known adversaries, and tactics, techniques and skill levels by AOR.

Negative factors an adversary could leverage, including: weather and vegetation attenuation (wireless); electrical grid, remote network connectivity and other remote-access issues; time and location attack issues; a rural or inactive RF environment to identify signals; an urban or active RF environment to mask signals; and the human factor.

PVA reports are provided to project managers in order to develop a risk assessment determination and to validate the scope of the assessment. The PVA also provides the ART with the information necessary to develop a detailed assessment plan based on documented adversarial TTPs.

3. Assessment pre-planning and reporting

Having completed the preliminary phase, the ART develops an assessment plan based on the PVA and data received from project managers. When applicable, the ART utilizes RF propagation modeling and simulation (M&S) tools to predict the effects the environment will have on propagating waves and on the adversarial attack vectors available against both RF and network-centric systems. The team considers pre-exploitation testing against mock technologies for effectiveness and creates an assessment plan based on the adversarial goals and effects identified in the PVA report. Goals and effects may range from novice to first-world capabilities, or both, depending on the required scope of the assessment.

4. The assessment

The assessment consists of passive testing, reconnaissance, an active assessment and validation testing. ART test plans require repeatable test procedures for anomalies or questionable data, to increase the accuracy of findings. To better define each process, the following basic overview is provided:

Passive scanning and network reconnaissance: After the initial reconnaissance for network footprinting and RF scanning, the ART researches identified vulnerabilities, acquires or develops network exploitation tools and identifies detected signals of interest (SOI). The ART studies the intended installation/deployment plans to identify exploitable processes introduced by operational procedures. Once these steps are completed, the team verifies that RF parameters match the expected sponsor specifications and attempts to capture and demodulate SOI to extract and modify data bits for exploration and deception (if applicable). The team also performs network-centric enumeration to identify weaknesses in the command and control process. Ancillary operations are reviewed to assess human interaction and/or logistics and other areas of operational support. Figure 1 provides a sample assessment flow diagram for a full network-centric system assessment.

Actively assess rapidly developed systems: In this phase, the ART conducts every available attack methodology, including network-centric, radio frequency, environmental and human-factor attacks, in an attempt to compromise the system, deny or alter data, or achieve any other documented adversarial goal.

Follow-on testing: Follow-on testing is accomplished where anomalies or questionable data were identified. After completion of the assessment, the team reviews and analyzes the data and provides project managers with initial significant findings.

The ART conducts the network and RF assessments simultaneously, in coordinated attacks where applicable. The ability to use multiple attack vectors simultaneously against a common target (with a common goal) identifies vulnerabilities in systems of systems that, when assessed individually, are undetectable. An example would be the ability of an ART to utilize a vector signal generator to modulate exploitation code at a frequency that is non-standard for a network assessment team. In one case study, an ART successfully captured data broadcast at 4.3 GHz using 802.11 protocols with an Agilent spectrum analyzer and played it back after injecting malicious logic, at distances far greater than previously anticipated. The compromised system then provided a network backdoor for more traditional attacks. In a second case study, a security system using ground sensors, laser triggers and digital cameras (networked to a common control



center) was used to secure a large campus facility. The layered security made it impossible for a red team using adversarial tactics individually to meet its goal of bypassing security and breaching the campus undetected. In this instance, our adaptive red team used counter-lasers to fuzz the triggers, a VSA to spoof the ground sensors into displaying a calm state, and a network intrusion that disabled the cameras, allowing easy access to the campus grounds, undetected. The actual TTPs used included several days of conditioning the security personnel to accept temporary camera outages due to light rain. The adversarial TTPs were identified from open-source intelligence sites and documented for the client organization, which subsequently altered its security policies and installed redundant sensor technologies.

Figure 1: Assessment flow diagram (Rohret and Jett, 2005)

Following an assessment, the ART completes the assessment report quickly enough to provide value-added input to both the development team(s) and the decision makers. The detailed report uses common technical report writing standards and accurately and concisely represents the security posture of the project. Technical results are demonstrated using repeatable processes, with mitigations/workarounds for validated vulnerabilities. The results of additional and repeated testing are documented and represented in the final security rating, showing corrective actions when applicable.

5. The adaptive red teams' role in measuring functional resilience and resonance

The International Council on Systems Engineering's Resilient Systems Working Group defines resiliency as 'the capability of a system with specific characteristics before, during and after a disruption to absorb the disruption, recover to an acceptable level of performance, and sustain that level for an acceptable period of time' (Incose.org, 2012). Figure 2 provides a visualization of a resilience model that describes a resilient process while demonstrating the scope of achieving resiliency through a developmental process. In order to maintain command and control, advantage on the battlefield, and advantage throughout cyberspace, the level of resiliency of rapidly developed systems must be identified in order to provide system users and decision makers with the level of capability they will require during a conflict or event. The ART is uniquely situated to provide the testing data necessary to accurately measure a system's resiliency while accomplishing vulnerability assessments. Through vulnerability assessments, ARTs also collect data to determine a system's resonance characteristics, specifically the interdependent technologies and processes that can affect the system being developed, or that will be affected by the development project during an adverse event.



David Rohret, Michael Kraft and Michael Vella

Figure 2: Resilience model for systems development (Bodeau and Gaubart, 2011)

6. The adaptive red team's role in defining system resilience

The traditional role of a red team is to identify and exploit every possible vulnerability in a system or process in order to expose and mitigate weaknesses, creating a more viable system or process. The traditional scope is confined to a narrow range of requirements or technologies based on a similarly narrow set of objectives and goals; an example would be an assessment of a specific technology or process with the objective of obtaining certification or acceptance for use on an existing network. ARTs are not restricted to singular goals or objectives and expand their scope to ancillary processes and technologies that may affect the overall capabilities of the systems being assessed. The measurement or testing of a system's resiliency is an example of the ART's expanded responsibilities. In order to provide robust technologies in a rapid development process, the assessment process must contain a measure of effectiveness for resiliency that accurately demonstrates functional resiliency at a given point in each development process. Figure 3 displays a resiliency measurement model where T0 represents the time of an adverse event and TR represents the time of acceptable recovery. The goal of resilience measurements throughout the development process is to decrease the loss of capability and the time to recovery (referred to as Area A in Figure 3) for each development phase, until an acceptable recovery time and capability loss is achieved.
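Reading Figure 3, Area A can be interpreted as the capability shortfall integrated from T0 to TR, i.e. A = the accumulated (baseline capability - observed capability) over the outage. The following minimal numerical sketch (Python) illustrates this interpretation; the trapezoidal method and the sample series are our assumptions, not the authors' tooling:

    def resilience_area(times, capability, baseline):
        """Trapezoidal integral of (baseline - capability(t)) from T0 to TR."""
        area = 0.0
        for i in range(1, len(times)):
            dt = times[i] - times[i - 1]
            loss_prev = baseline - capability[i - 1]
            loss_curr = baseline - capability[i]
            area += 0.5 * (loss_prev + loss_curr) * dt
        return area

    # Hypothetical measurements: full capability = 100%, event at t=0, recovery by t=4h.
    t = [0, 1, 2, 3, 4]                 # hours after the adverse event (T0 = 0)
    c = [100, 40, 60, 85, 100]          # observed capability (%)
    print(resilience_area(t, c, baseline=100))   # smaller area = more resilient

Recomputing this area at each development milestone gives the decreasing Area A trend the text describes.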

Figure 3: Measuring resiliency (Kimmance, 2012)

In order to accurately measure resiliency, the system's developers and user communities must define the minimum capability for the system to function and the maximum time the system can be non-functional before mission degradation or failure occurs. Although it is not the responsibility of the ART to define system attributes, assessment teams should be expected to provide expert advice in determining both values. Figure 4 is an example of a cyber-resilience flow diagram in which resiliency for a network-centric system can be



displayed as individual components, where each component can be assigned time values based on current development/assessment testing. Following an adverse event, system capability and time-to-recover values can be represented by capability and TR values, thereby providing an accurate resilience measurement. A collateral benefit of this process is the identification of the weakest link, allowing development and process teams to focus resources on the specific problems that, when mitigated, will decrease TR and increase capability. Following an assessment, the ART will have the data to populate a resiliency chart or data sheet for the targeted system(s). By incorporating resiliency testing into the assessment plan, little or no additional time or other resources are needed to observe, acquire data on and document the targeted systems. Assessments require repeatable processes in order to validate vulnerabilities and to reduce the number of false positives and false negatives. Repeatable processes also provide multiple data collections for an accurate representation of system resiliency. Furthermore, as ARTs complete their vulnerability tests, they are able to identify which types of attack or vulnerability are most effective, allowing system designers to correct the issues in earlier stages of development.

Figure 4: Cyber resilience engineering in context (Bodeau and Gaubart, 2011)

7. Determining functional resonance

The determination of a system's functional resonance is rarely achieved, and the problem is compounded in rapidly fielded projects by compressed development schedules and budget constraints. Furthermore, the use of COTS and GOTS technologies as part of the overall solution may actually inhibit the mapping of functional resonance, as COTS and GOTS technologies appear as 'black-box' technologies to rapid fielding integration teams. An RF integration team will often depend on the vendor's or manufacturer's specifications, and in many cases the engineers will not have an accurate portrait of the interdependencies of the system they are developing. Figure 5 displays a method for representing an ontology-based interdependency model that is valid for most development projects. A functional resonance diagram can easily be recorded for a cradle-to-grave development project, where the system design teams maintain continuity throughout the developmental process, including system updates and maintenance. This will not be the case for most rapid development projects, which utilize technologies already established as programs of record or that are proprietary and without complete system data. Additionally, the integration of disparate systems compounds the issue, as integration may include technologies with no defined functional resonance models. An example would be the use of remote network-centric command and control technologies to manipulate or collect data on radio frequency and electronic warfare-centric technologies; until recently both technologies were treated by many integrators as wholly separate domains. ARTs also possess the ability to discover system resonance during passive and active assessments. The actions and processes an ART takes to assess a system include identifying the test system's boundary of influence, which identifies effects on interdependent systems and processes even if there appear to be no known dependencies or logical relationships at the outset of testing. This is often identified by observing and documenting adverse effects on related software that may be sharing libraries. Often described as anomalies,



the repeatable testing required to reduce or eliminate false positives provides ARTs with the data and documentation to accurately map system resonance.
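One lightweight way to record a system's boundary of influence is as a directed dependency graph walked by breadth-first search. The sketch below (Python) is illustrative only; the component names and the dependency edges are hypothetical:

    from collections import deque

    # edges: component -> components it affects (e.g., via shared libraries)
    DEPENDS = {
        "rf_frontend": ["c2_gateway"],
        "c2_gateway": ["mission_app", "logging"],
        "mission_app": ["logging"],
        "logging": [],
    }

    def boundary_of_influence(start):
        """Return every component that an adverse effect on `start` can reach."""
        seen, queue = {start}, deque([start])
        while queue:
            for nxt in DEPENDS.get(queue.popleft(), []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen - {start}

    print(sorted(boundary_of_influence("rf_frontend")))

As anomalies observed during repeatable testing reveal undocumented couplings, new edges are added to the graph, incrementally mapping the system's resonance.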

Figure 5: Resonance model to discover systems interdependencies (Rohret and Jett, 2005)

8. Anticipating future threats

Whereas data collected by vulnerability assessment teams can be analyzed to identify functional resiliency and resonance data, providing a clear view of functional resonance for a system requires additional research and expertise. The Functional Resonance Analysis Method (FRAM) provides a means to describe anticipated effects using the idea of resonance arising from the variability of current system performance (Hollnagle, 2012). To arrive at a description of functional variability and resonance, and to produce recommendations for anticipating and dampening future vulnerabilities (risk management), a FRAM analysis consisting of four steps can be used to provide insight and guide the direction of development. The four FRAM steps are:

1. Identify and describe essential system functions and characteristics using basic characteristics of system performance (referred to in the FRAM methodology as system aspects):

A description of the system's functionality and identified interoperabilities (functional resonance)

Types and severity of documented vulnerabilities and software flaws associated with a system's technologies

Known maintenance and failure rates

2. Characterize the potential variability of the functions in the FRAM model, as well as the possible actual variability of the functions in one or more instances of the model

What is the frequency of vulnerabilities identified for the technologies associated with system development

What trends exist for types and severity of vulnerabilities and system failures; what are the key technologies or components associated with each

3. Define the functional resonance based on dependencies/couplings among functions and the potential for functional variability (functional resonance data)

4. Identify ways to monitor the development of resonance, either to dampen variability that may lead to unwanted outcomes or to amplify variability that may lead to positive outcomes (includes functional resilience data):




Trends will change as adversarial goals change

Identify and scope similar technologies for future trend data (look-ahead variability)

By identifying and charting a system's past performance as it pertains to vulnerability and mitigation data, the mean time between newly released vulnerabilities, and similar data for similar systems, a statistical model for a specific system can be developed to predict the types of exploits or attacks that will occur. This information, coupled with the average time to develop a corrective action for each probable adverse effect, provides a functional resonance model for a specific development system. Additional research is required of the ART in the form of identifying adversarial goals and trends and identifying new and emerging technologies that can be used to adversely affect a developmental or operational technology (programs of record). Much of this data and information will already have been captured in the Preliminary Vulnerability Analysis (PVA), which is accomplished at the outset of an assessment. The data can be represented using a trend chart that individually identifies system components (or software), associated vulnerabilities, mitigations and non-adversarial adverse effects (equipment failure, environmental factors, user error, etc.).
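A minimal sketch of that trend analysis (Python): the mean time between newly released vulnerabilities (MTBV) projects the next expected disclosure, and comparing MTBV with the average mitigation time indicates whether exposure accumulates. The disclosure dates and the mitigation time below are hypothetical:

    from datetime import date, timedelta

    # Hypothetical disclosure history for one system component
    disclosures = [date(2012, 1, 10), date(2012, 3, 2), date(2012, 5, 28),
                   date(2012, 8, 1), date(2012, 11, 15)]

    gaps = [(b - a).days for a, b in zip(disclosures, disclosures[1:])]
    mtbv = sum(gaps) / len(gaps)                       # mean days between vulns
    projected_next = disclosures[-1] + timedelta(days=round(mtbv))

    mean_time_to_fix = 21        # hypothetical average days to develop a mitigation
    # A ratio above 1 means fixes arrive slower than new vulnerabilities appear.
    exposure_ratio = mean_time_to_fix / mtbv
    print(mtbv, projected_next, exposure_ratio)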

9. Assigning security posture ratings to rapidly fielded systems

Identifying system vulnerabilities provides technical managers and development teams the data and information necessary to help secure their projects. A security posture rating is a representation of the level of security a system or technology has achieved at a given point in time. Security posture ratings are used primarily by decision makers to help determine whether to alter the development process or to deploy the technology in an operational status. Because ratings can be misconstrued, security posture ratings should be based solely on quantifiable data; subjective measures should be left to the user or customer community to determine. Mission information may not always be available to the ART, but it is required for an accurate security posture rating. One mission set may not require the same level of security as another, allowing the security posture to vary for a single system or technology. Length of deployment, environment, threat risks, and other factors can change a security posture at any time; it is therefore necessary to concentrate on the stated objectives when finalizing the data. When formulating a system's security posture rating, the following information should be included in the overall decision:

Number and severity of vulnerabilities not mitigated

Anticipated vulnerabilities or adverse effects (functional resonance data)

Resiliency measurement

Integrated systems and their current security postures (resonance)

Required human interaction (social engineering/networking, human error)

Mission: what services/capabilities are essential to the mission the technology/system will be supporting

Generic rating charts can be developed that provide a technical security posture ranking based solely on an ART's findings and a strict interpretation of the terms secure and not secure. Tables 1 and 2 are a simple representation of defining the current state of security for a network-centric system. Table 1 provides categories of vulnerabilities and a severity ranking for each category. Table 2, using the information in Table 1, provides the actual security ranking. Once a system's security posture has been determined, technical and operational managers can identify the progress between tests and, more importantly, identify whether the minimal acceptable risk is being met within the scheduled development time frame; a small scoring sketch in this spirit follows. Figure 6 provides a simple method of reporting progress against a user-defined acceptable risk at defined development milestones.
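The severity weights, finding counts and acceptable-risk cutoff in the sketch below are hypothetical, not values taken from Tables 1 and 2; the point is only that the rating can be computed from quantifiable data:

    # Sketch: quantitative posture score from unmitigated findings.
    SEVERITY_WEIGHT = {"critical": 10, "high": 5, "medium": 2, "low": 1}

    def posture_score(unmitigated):
        """unmitigated maps severity category -> count of open findings.
        Lower scores indicate a stronger security posture."""
        return sum(SEVERITY_WEIGHT[sev] * n for sev, n in unmitigated.items())

    findings = {"critical": 1, "high": 3, "medium": 7, "low": 12}
    ACCEPTABLE_RISK = 30                # user-defined milestone threshold
    score = posture_score(findings)
    print(score, "within risk" if score <= ACCEPTABLE_RISK else "exceeds risk")
    # -> 51 exceeds risk

Tracking this score at each milestone yields the progress-versus-acceptable-risk reporting shown in Figure 6.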



Table 1: Categories of vulnerabilities ranked by severity

Table 2: Security posture ratings based on defined criteria

Figure 6: Red team security postures identify measures of acceptable risk

10. Black Swan events for testing functional resiliency and resonance

Black Swan is a metaphor drawn from the Black Swan Theory developed by Nassim N. Taleb (Taleb, 2005), which attempts to explain:

The disproportionate role of high‐impact, hard‐to‐predict, and rare events that are beyond the realm of normal expectations in history, science, finance and technology

The non‐computability of the probability of the consequential rare events using scientific methods (owing to the very nature of small probabilities)

The psychological biases that make people individually and collectively blind to uncertainty and unaware of the massive role of the rare event in historical affairs



Black swan, within the realm of testing and operational evaluation of technologies, refers to unexpected events of large magnitude and consequence and the effect these events have on both the technologies and their users in a realistic environmental setting. In theory, black swan events produce cataclysmic failure of the systems being assessed using adversarial techniques or natural disaster scenarios. These events provide an excellent means to fully detail and document systems' functional resonance and resiliency. Black swan events can be staged immediately following a scheduled demonstration or assessment, providing more realistic and accurate results.

11. Summary

Conventional assessment and red teaming events focus on specific technologies, systems, or processes as they pertain to traditional government, military, and corporate acquisition models. Rapidly developed and fielded technologies cannot be assessed using traditional models and require innovative experimentation, analysis, and investigation based on adversarial TTPs. To fully vet inherent and introduced vulnerabilities, adaptive red teams must become goal oriented rather than technology-centric in their assessment approach. Furthermore, to fully realize the effects an attack or adverse action has on a system or system of systems, functional resilience and resonance must be accurately measured. The data captured by an ART during the adaptive assessment process provides the necessary information to populate formulas, spreadsheets, and other measurement models with accurate resiliency and resonance data.

References

Bodeau, Deborah J. and Graubart, Richard. Cyber Resiliency Engineering Framework. MITRE Technical Report MTR 110237. Sep 2011.
Hollnagel, Erik. FRAM: The Functional Resonance Analysis Method: Modelling Complex Socio-technical Systems. Ashgate Publishing Co., USA. 2012.
Incose.org/practice/techactivities/wg/rswg/. Measuring Functional Resilience. July 2012.
Kimmance, James. Infrastructure Risk and Resilience. BCI Workshop, Bristol. 2012.
Rohret, David M. and Jett, Andrew. Red Teaming: A Guide to Non-kinetic Warfare. Aardvark Global Publishing, Salt Lake City, UT. 2005.
Sandia National Laboratories. Red Teaming for Program Managers Guide. January 2007.
Schmitt, John F. Defense Adaptive Red Team (DART) Manual. 2006.
Taleb, Nassim Nicholas. Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets. New York: Random House and Penguin. 2005.



What Lawyers Want: Legally Significant Questions That Only IT Specialists can Answer

Yaroslav Shiryaev
University of Warwick, Coventry, UK
yaroslav.law@gmail.com

Abstract: For the last decade, international lawyers and IT specialists have been brought together at conferences on issues of cyber-security. With various topics covered from such different perspectives, a clash of educations occurs. Lawyers are rarely able to follow the deep technological discussions, while legal presentations may seem too philosophical to IT professionals, leaving them wondering what lawyers want and why. In this environment, legal questions that cannot be answered without the deep technological knowledge possessed by computer experts should be formulated carefully and very precisely. Therefore, with emphasis on the jus in bello, this article aims to outline a list of issues that inevitably require a joint lawyer-IT specialist dialogue and to explain their significance from the point of view of international law. These issues include possibilities for the digital "marking" of internationally protected objects online required under existing humanitarian law, developing a "distinctive sign" for cyber-combatants, forewarning the enemy of incoming attacks ("carrying arms visibly") and re-evaluating the concept of "vicinity" to dangerous installations in the context of cyber-space.

Keywords: jus in bello; armed conflict; cooperation; cyber-combatants; cyber-attacks; cyber warfare

1. Introduction

The present article is a deviation from the mainstream form of legal writing and represents an attempt at direct interdisciplinary engagement between an international lawyer and computer experts. It concentrates on what the author sees as the three most important questions in international humanitarian law vis-à-vis cyber-attacks that jurists cannot answer themselves. Serious clarifications from IT specialists are therefore required on all three of these issues. The article further tries to explain in non-philosophical terms why these questions are so important.

2. Applicability of international humanitarian law

Contemporary international humanitarian law does not require the existence of a formal state of war and applies to all situations of armed conflict or military occupation. The absence of specific references to cyber-attacks does not exclude them from the scope of the laws of war, since the latter were clearly developed to tackle futuristic means and methods of warfare. Therefore, cyber-attacks can trigger the applicability of international humanitarian law "to the extent that they can give rise to all required constitutive elements of an international or non-international armed conflict" (Melzer, 2011). From the way international humanitarian law treats kinetic and non-kinetic weapons, it would seem that any cyber-attack between states whose consequences can result in destruction, damage, injury or death would automatically initiate an international armed conflict. Though certain scholars (e.g. Dörmann, 2004) refer to Additional Protocol 1 (hereinafter AP1) to the Geneva Conventions (hereinafter GC) and claim that the cyber-neutralization of crucial military targets would do the same, there is no practical reason why international humanitarian law has to apply immediately to such situations if they are not followed by an actual attack, since these actions do not raise direct humanitarian concerns. The same logic applies to minor cyber-attacks on objects crucial to the financial well-being of a country (banks, stock exchanges etc.): merely "damaging" a state economy is not sufficient to initiate an international armed conflict. On the other hand, what will likely trigger it is a cyber-strike that can "destroy" a state economy completely, since it might indirectly but inevitably cause significant suffering among the general population. The precise level of mental or physical anguish needed to automatically start an international armed conflict is debatable. The threshold for "initiating" an internal conflict is much higher and remains unlikely in the cyber context, since it requires violence to be protracted and attackers to be formally organized and exercising control over a part of territory.



On the other hand, the application of international humanitarian law to cyber-attacks carried out in the context of an ongoing international or internal armed conflict is almost never contested. Consequently, cyber-strikes in support of conventional military operations must conform to the laws of war in all possible scenarios where a sufficient nexus between the cyber-attacks and an ongoing war exists.

3. Technical question 1: How to mark medical transport

International humanitarian law creates an obligation to mark air- and seaborne medical transport with distinctive emblems or, in special cases, to use distinctive signals in order to guarantee their adequate protection. Since long-distance cyber-attacks are conducted in an isolated fifth dimension of warfare (cyber-space), visual contact with the attacked physical object may be lacking (unless the attack is conducted as part of a bigger operation with reconnaissance units or satellites). That raises the question of how to "mark" the computers of medical transport online in order to inform the attacker of their special status, as well as to ensure its respect (Melzer, 2011).

Importance from a legal perspective

The degradation of civilian technology and the electricity shortages that inevitably follow any serious armed conflict make computerized pharmaceutical factories, hospitals or food preparation facilities an unlikely luxury for any of the belligerents. On the other hand, modern medical ships and aircraft (but not medical vehicles) are prone to cyber-attacks. A cyber-attack can interfere with the proper work of the navigational systems of a medical ship or otherwise jeopardize the safety onboard. The sinking of the Italian cruise ship Costa Concordia in January 2012, though not a medical vessel, demonstrated the tendency to sometimes rely entirely on computers for maritime navigation and the disastrous consequences this may bring. Although even minor cyber-strikes may be said to violate the obligation to "respect" hospital ships set down in Article 1 of the 1907 XII Hague Convention (hereinafter HC), the corpus of international humanitarian law is more preoccupied with protecting the ships against "attacks". While Article 20 of the GC1 narrows the strikes down to those "from the land", the ICRC Commentaries (1949) make it clear that this Article serves as a reminder in the context of land warfare and that all attacks against medical ships are forbidden. The same principle applies to infecting vital computers on board a medical airplane: in case the cyber-attacks seriously endanger the safety of flights, such cyber-strikes will be in violation of numerous provisions of the GCs prohibiting "attacks" on medical planes, while failed attempts to interfere with the work of the aircraft's computers will not. Aside from the specific norms that protect them, medical ships and planes are also covered by more general provisions on medical transport and medical personnel that call for respect and protection. Moreover, the Rome Statute criminalizes attacks against them as war crimes. Since medical air and sea craft are normally used for the removal and transfer of the wounded and sick, a cyber-attack that seriously threatens a medical airplane or ship during an armed conflict would also violate the provisions protecting persons hors de combat – an act constituting a grave breach of the GCs and a war crime on its own. Finally, it is worth noting that cyber-attacks against a vessel can theoretically breach these provisions without jeopardizing the safety of the vessel itself, by corrupting medical databases, leading to improper treatment (Gervais, 2011) or incorrect blood transfusions (Kelsey, 2008), although the chances that such computerized databases will be used during a war are minimal.

4. Technical question 2: Explaining "vicinity" to malware

Article 56(1) of AP1 forbids attacking military objectives located in the vicinity of installations containing dangerous forces (above all, nuclear power plants). Though this phrase initially meant physical closeness, a question must be raised whether it should be re-evaluated in light of cyber-attacks. Cyber-vicinity must involve military objects that are connected to the installations, the infection of which is likely to transfer onto the computers controlling the "dangerous forces". In order to be legal, the damaging malware has to be programmed in a way that would avoid, to the maximum extent possible, the release of dangerous forces. One must therefore ask: is this possible?



Importance from a legal perspective

While some states made reservations in relation to the applicability of AP1 to nuclear weapons, they do not exclude attacks against ordinary atomic facilities and materials from the scope of the Geneva Conventions. Objectively, uncontrolled nuclear energy and ionizing radiation remain among the most hazardous "forces" on the planet, capable of causing "unspeakable sickness followed by painful death, [affecting] the genetic code, [damaging] the unborn and [rendering] the Earth uninhabitable" (Shahabuddeen, 1996). The cyber-strike on the Ohio atomic power plant in 2003 shows that nuclear reactors are being targeted and remain vulnerable to such attacks. A cyber-attack that manages to cause a long-lasting catastrophic failure of a nuclear power plant will inevitably affect the "health, agricultural and dairy produce and the demography" of thousands (Weeramantry, 1996). For example, the accidental explosion of the graphite-moderated Chernobyl reactor in 1986 was "approximately that of a half-kiloton bomb, about one twenty-fifth of the (…) Hiroshima blast" (BBC, 2011) and released 5.2 million terabecquerels of radiation, prompting deaths, countless illnesses and the evacuation of up to 350,000 persons (Guardian, 2011). By October 2011 the damaged Fukushima boiling-water reactors had released around 42% of the Chernobyl amount (Japan Times, 2011), with more than 90,000 people evacuated (Japan Echo, 2011), while the third largest nuclear accident, at Three Mile Island in 1979, caused the discharge of about 560 gigabecquerels of radioactive iodine and led to the evacuation of approximately 200,000 people (Japan Echo, 2011). Aside from being indiscriminate and capable of causing great suffering and death, radioactive contamination has a significant negative impact on nature. That means that any cyber-attack that results or may result in radioactive pollution can be considered an illegal method of warfare and, in more severe cases, an environmental modification technique. Regardless of motive and possible military advantage (Brown, 2006), and even if it is proportionate (Schmitt, 2002), such conduct is prohibited under international humanitarian law as long as its consequences are intended or expected to reach the high threshold (Fenrick, 1999) of widespread, long-term and severe damage to the natural environment. Though perhaps coincidentally, the Rome Statute criminalizes launching attacks in the knowledge that they will cause such damage. Consequently, a cyber-strike does not have to be "directed" – introducing it into the targeted system will suffice, if the expected damage is "clearly excessive in relation to the concrete and direct overall military advantage anticipated" (Schmitt, 2002).

5. Technical question 3: How to make cyber-warriors "recognizable at a distance" and/or their cyber-arms "visible"

While a distinctive digital signature on the malware used remains desirable in cyber-space (in order to prove that a cyber-attack emanated from a lawful combatant), it remains unclear how to make cyber-warriors "recognizable at a distance". If interpreted grammatically, the phrase is outdated and has no legal meaning in the age of cyber-warfare, since the distance between human eyes and the PC monitor does not exceed one meter. If understood as calling for recognition of the attacker beforehand, it would jeopardize the possibility of a surprise attack, which sometimes remains paramount to overcoming an adversary's cyber-defense. Combatants are generally required to distinguish themselves from the civilian population while they are engaged in an attack or in a military operation preparatory to an attack. This rule is not absolute, however: in case the nature of the hostilities does not permit distinguishing oneself, combatants can still retain their legal status if they carry arms openly during attacks and in military preparations. According to the ICRC Commentaries (1949), carrying arms "openly" does not mean the same thing as carrying them "visibly" or "ostensibly", since "surprise is a factor in any war operation, whether or not involving regular troops" (ICRC Commentaries, 1949). Arms must, however, be carried visibly for recognition purposes if the combatants do not have a distinctive sign (ICRC Commentaries). How best to make malware "visible" in such a case remains for the IT specialists to decide.

Importance from a legal perspective

According to international humanitarian law, only combatants, i.e. members of the armed forces of a party to a conflict, have the right to directly participate in hostilities. Lawful combatants are not held responsible for



acts in war which would otherwise be unlawful, as long as they do not themselves violate the laws of armed conflict (Brown, 2006). This principle is equally applicable to both conventional and cyber-warfare. Though this comes at the price of being more vulnerable to attack (Kretzmer, 2009), if cyber-attackers satisfied all the criteria of being lawful combatants, they would enjoy prisoner of war status upon their capture. On the other hand, falling outside that definition removes their privileges (Shackelford, 2009) and makes the attackers liable to criminal prosecution, while the state sponsoring them will be in violation of international humanitarian law (Green, 2000; Delibasis, 2007). The requirements first proposed in the 1874 Brussels Declaration draft – that all forces must be commanded by a responsible person, have a fixed distinctive sign recognizable at a distance, carry arms openly and conduct operations lawfully – have been incorporated into current international humanitarian law and represent a "widely accepted irreducible minimum for combatant status" (Watts, 2010). A special category of combatants (and, upon capture, prisoners of war) commonly referred to as the levée en masse includes all "inhabitants of a non-occupied territory, who on the approach of the enemy spontaneously take up arms to resist the invading forces, […] provided they carry arms openly and respect the laws and customs of war" (1899 HC2, 1907 HC3, GC3). Considering, e.g., the spontaneous cyber-attacks against NATO in the Kosovo War, it is plausible that a violent invasion (but not other types of aggression) of one state might result in its civilians legally resorting to cyber-strikes against the aggressor's military and civilian infrastructures. According to the ICRC Commentaries (1949), a mass levy can only be considered to exist during the short period of the actual invasion, i.e. cyber-attacks become illegal once the enemy retreats or if his aggression results in occupation. The levée en masse can operate without sufficient organization and state control, which ideally suits the non-hierarchical coordination environment of cyber-space (Melzer, 2011). Since members of the mass levy are not required to wear uniforms or use distinctive emblems (and, consequently, to use digital signatures on the malware they employ), they have a legal obligation to make their cyber-arms visible to the enemy prior to the attacks.

6. Possible solutions

Rule 72, Comment 5 of the Draft Tallinn Manual (2012) provides an example of how respect for medical transport in cyber-space can be achieved. Namely, it suggests that one warring party can notify its opponent "that the files containing its military medical data have the unique name extension ".mil.med.B" and that this naming convention will not be used on any file that is not exclusively medical". This suggestion solves the problem in the most basic scenarios, i.e. when state agents manually access adversaries' computers on board medical transport and thus learn of its protected status. However, on its own, it is likely to remain insufficient for the purposes of ensuring protection against automatically spreading malware. Viruses, worms, trojans, and logic and time bombs will therefore need to be programmed in a way that allows them to discriminate protected objects (i.e., for the purposes of this paper, medical transport and objects in the vicinity of installations containing dangerous forces). In theory, this can be done through a direct approach, by making malware scan for files with certain extensions like ".mil.med.B" (see above) before copying itself and unleashing its destructive potential; a minimal sketch of such a check appears below. Alternatively, it can look for special certificates that could be issued and embedded in software or hardware by internationally recognized organizations, such as the UN or ICRC. When it comes to making cyber-warriors "recognizable at a distance" and carrying "arms openly", letting an adversary know in advance of a planned attack will clearly be unacceptable for many states. A somewhat better band-aid solution could be informing the enemy of the cyber-strike at the moment when damage is already being done, either through diplomatic channels or by ensuring that the transmitted malware contains distinctive, encrypted, yet easily recognizable signatures of the belligerents, preferably agreed upon in advance. The problem with this approach rests with the preparatory stage of cyber-attacks, i.e. when rootkits and backdoors are being set up: the secrecy surrounding these acts makes the persons engaged in them fall under the definition of spies (GC4) rather than combatants. While international law does not prohibit espionage per se, it does strip spies of combatant and potential prisoner of war status.
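The direct approach can be sketched as follows; the ".mil.med.B" extension is taken from the Rule 72 example, while the function names and abort behavior are assumptions, and real discrimination logic would of course be far harder to get right:

    # Sketch: discrimination check performed before any payload activation.
    import os

    PROTECTED_SUFFIX = ".mil.med.B"     # naming convention from Rule 72

    def hosts_protected_data(root):
        """True if any file under root carries the protected marker."""
        for _dirpath, _dirs, files in os.walk(root):
            if any(name.endswith(PROTECTED_SUFFIX) for name in files):
                return True
        return False

    def decide(root="/"):
        if hosts_protected_data(root):
            return "abort: protected (medical) system, do not affect"
        return "proceed with intended effect"

The certificate alternative would replace the suffix test with verification of an embedded certificate chain issued by the UN, the ICRC or a similar body.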




7. Conclusion

The present article, however short, outlines the three most important issues that require interdisciplinary cooperation and explains their significance for international lawyers. It will remain but half of the larger picture until IT specialists choose to answer the three stated questions and share their wisdom and knowledge in order to solve these most pressing legal dilemmas.

References

Brown, D. (2006) "A Proposal for an International Convention To Regulate the Use of Information Systems in Armed Conflict", Harvard International Law Journal, Vol. 47, No. 1, pp 179-222.
Delibasis, D. (2007) The Right to National Self-Defence in Information Warfare Operations, Arena Books, La Vergne.
Dörmann, K. (2004) "Applicability of the Additional Protocols to Computer Network Attacks", ICRC, Available at: http://www.icrc.org/eng/resources/documents/misc/68lg92.htm (Last accessed: November 2012).
Fenrick, W. (1999) "War Crimes, para 2(iv) and (v)", in: Otto Triffterer, Commentary on the Rome Statute of the International Criminal Court, Nomos, Baden-Baden.
Gervais, M. (2011) "Cyber Attacks and the Laws of War", SSRN, Available at: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1939615 (Last accessed: November 2012).
Green, L. (2000) The Contemporary Law of Armed Conflict, MUP, Manchester.
Kelsey, J. (2008) "Hacking into International Humanitarian Law: The Principles of Distinction and Neutrality in the Age of Cyber Warfare", Michigan Law Review, Vol. 106, No. 7, pp 1428-1450.
Kretzmer, D. (2009) "Rethinking Application of IHL in Non-International Armed Conflicts", Israel Law Review, Vol. 42, No. 1, pp 8-45.
Melzer, N. (2011) "Cyberwarfare and International Law", UNIDIR Resources, Available at: http://unidir.org/pdf/activites/pdf2-act649.pdf (Last accessed: November 2012).
Schmitt, M. (2002) "Wired Warfare: Computer Network Attack and Jus in Bello", International Review of the Red Cross, Vol. 84, No. 846, pp 365-399.
Shackelford, S. (2009) "From Nuclear War to Net War: Analogizing Cyber Attacks in International Law", Berkeley Journal of International Law, Vol. 25, No. 3, pp 192-251.
Watts, S. (2010) "Combatant Status and Computer Network Attack", Virginia Journal of International Law, Vol. 50, No. 2, pp 392-444.
Dissenting Opinion of Judge Weeramantry in Legality of the Use or Threat of Nuclear Weapons (Advisory Opinion) (1996) ICJ Rep.
Dissenting Opinion of Judge Shahabuddeen in Legality of the Use or Threat of Nuclear Weapons (Advisory Opinion) (1996) ICJ Rep.
"1949 Conventions & Additional Protocols, & Their Commentaries", ICRC, Available at: http://www.icrc.org/ihl.nsf/CONVPRES?OpenView (Last accessed: November 2012).
(2012) Draft of the Tallinn Manual on the International Law Applicable to Cyber Warfare, Available at: http://issuu.com/nato_ccd_coe/docs/tallinn_manual_draft?mode=window&backgroundColor=%23222222 (Last accessed: November 2012).
(2011) "How Does Fukushima Differ From Chernobyl?", BBC, Available at: http://www.bbc.co.uk/news/world-asia-pacific-13050228 (Last accessed: November 2012).
(2011) "Nuclear Crises: How Do Fukushima and Chernobyl Compare?", The Guardian, Available at: http://www.guardian.co.uk/world/2011/apr/12/japan-fukushima-chernobyl-crisis-comparison (Last accessed: November 2012).
(2011) "Fukushima Health Concerns", Japan Times, Available at: http://www.japantimes.co.jp/text/ed20111108a1.html (Last accessed: November 2012).
(2011) "Three Major Nuclear Accidents: A Comparison", Japan Echo, Available at: http://japanecho.net/311-data/1016/ (Last accessed: November 2012).



The Weakest Link – The ICT Supply Chain and Information Warfare

Dan Shoemaker and Charles Wilson
University of Detroit Mercy, USA
shoemadp@udmercy.edu
wilsonce@udmercy.edu

Abstract: This paper proposes a unified model of best practice for ICT supply chain risk management (SCRM). Ensuring proper ICT-SCRM practice is an important national priority because of the vulnerability of current supply chains to attack by nation states and other adversaries. This paper presents a comprehensive set of standards-based lifecycle practices designed to address ICT product integrity concerns in the global marketplace.

Keywords: national security policy, ICT product integrity, risk management, threat mitigation, federal program response, security training and certification, malicious software, counterfeiting

1. The absolute need for product integrity in an age of infowar

Hence that general is skilful in attack whose opponent does not know what to defend. – Sun Tzu, 496 BC

Not much has changed in the 2,500 years since Sun Tzu wrote those words. From Chancellorsville to Normandy, the aim of any successful attacker has always been to "hit them where they don't expect it". And right now, one of the places where Americans least expect attack is the products and services that they so blissfully trust. Nevertheless, the almost total lack of control over the security of the supply chain that provides those products represents a significant avenue of attack for any bad actor seeking to subvert our everyday way of life. For example, we build defense systems out of components that are derived from global sources. Thus, it would be a simple matter for a foreign nation state, terrorist group, or even an individual to compromise a purportedly secure system through a third- or fourth-tier supplier situated in a country out of the sight and control of the prime contractor. Moreover, because this is the case, nobody knows for sure whether the parts that comprise our national defense infrastructure are actually what they were intended to be, or whether they are counterfeit or contain maliciously inserted objects (Evans, 2012). As a consequence, the vulnerabilities created by insecure information and communications technology (ICT) supply chains will have to be addressed if we ever want to be sure that our adversaries cannot "destroy power grids, water and sanitary services, induce mass flooding, release toxic/radioactive materials, or bankrupt any business by inserting malicious objects into the (ICT) components that comprise our infrastructure" (Clark, 2003).

2. ICT supply chain security is not the same problem as supply chain security

ICT products are developed through a global supply chain. Those supply chains are no different from any other conventional organizational operation in that they are created to accomplish a specific purpose. In the case of supply chains, the purpose is to supply a product or service through coordinated work involving several organizations. The problem is that ICT supply chains produce products that are either abstract, like software, or so infinitesimally complex that they cannot be overseen and controlled by conventional means. Thus ICT supply chains create a different set of assurance problems for managers. Proper supply chain risk management (SCRM) practice addresses those assurance problems by providing "a consistent, disciplined environment for developing the product, assessing what could go wrong in the process (i.e., assessing risks), determining which risks to address (i.e., setting mitigation priorities), implementing actions to address high-priority risks and bringing those risks within tolerance" (Alberts, 2009). Typically, supply chains are hierarchical, with the primary supplier forming the root of a number of levels of parent-child relationships. From an assurance standpoint, this implies that every individual product of each individual node in that hierarchy has to be secure as well as correctly integrated with all other components up and down the production ladder; a toy illustration of this bottom-up requirement follows. Because the product development process is distributed across a supply chain, maintaining the integrity of the products that are moving within that process is the critical part of the process, and the weak link analogy is obvious here.
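The sketch below renders the parent-child idea as data; all component names and assurance statuses are invented, and the point is only that a single weak link poisons every ancestor in the hierarchy:

    # Sketch: trust propagation through a parent-child supply chain.
    supply_tree = {
        "weapon_system": ["nav_module", "radio_module"],
        "nav_module":    ["gps_chip", "firmware"],
        "radio_module":  [],
        "gps_chip":      [],
        "firmware":      [],
    }
    assured = {"weapon_system": True, "nav_module": True,
               "radio_module": True, "gps_chip": True, "firmware": False}

    def chain_trustworthy(node):
        """A node is trustworthy only if it and all its children are assured."""
        return assured[node] and all(chain_trustworthy(c)
                                     for c in supply_tree[node])

    print(chain_trustworthy("weapon_system"))   # False: firmware is the weak link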



Therefore, whether the product is a common household item or sophisticated military hardware, the activities within that product's supply chain have to be consistently rational and precisely controlled in order to ensure against sabotage or unintentional harm. That requires a coordinated set of consistently executed activities to enforce visibility into the process. Yet, because the development process usually takes place in a number of global locations, typically at the same time, the requisite level of understanding and control over the process is hard to achieve (Alberts, 2009). The increasing trend toward building systems out of purchased parts just exacerbates that problem (Redwine, 2006). As a result, the National Institute of Standards and Technology estimates that the exploitation of ICT defects costs the U.S. economy an average of $60 billion annually (Newman, 2002). However, the real concern is not cybercrime; it is that the exploitation of a point of failure in an infrastructure component like power or communication could lead to disastrous consequences (Clark, 2003). Therefore, it is not surprising that the U.S. government has begun to address the problem of ICT product integrity through a comprehensive program designed to get better supply chain risk management (SCRM) practices into the workforce.

3. Supply chains and product integrity

As we said earlier, there are a number of levels of parent-child relationships involved in a supply chain. The number of those levels varies depending on the complexity of the product. Nonetheless, each component in the supply chain has to satisfy the explicit purpose that defined its placement in the hierarchy. Therefore, at its core, the purpose of the ICT supply chain risk management function is to ensure the integrity of disparate objects as they move from lower-level construction up to higher-level integration. In a recent report (March 23, 2012), the Government Accountability Office (GAO) summarized the product integrity issues that supply chains face. ICT concerns fall into five categories, each of which has slightly different implications for product integrity: "Installation of malicious logic on hardware or software, installation of counterfeit hardware or software, failure or disruption in the production or distribution of a critical product or service, reliance upon a malicious or unqualified service provider for the performance of a technical service and installation of unintentional vulnerabilities on software or hardware" (GAO, 2012, p.1). Malicious logic is embedded in a product to fulfill some specific purpose. Malicious objects are by definition not part of the intended functionality; therefore, in order to find and eliminate any instance, rigorous testing and inspection are required. Embedding a malicious object in a product is always a hostile act. Thus, assurance that a product is free of malicious code should be a high priority for any ICT customer. Nonetheless, since it is hard enough to ensure the quality and security of the functions that ought to be present, it is asking a lot to expect that functions that should not be present will also be identified and eliminated. As a result, it is almost impossible to estimate how much malicious code currently resides in ICT products. Because the decision to embed a piece of malicious logic in a product is intentional, one of the most effective ways to ensure against the presence of such objects is to maintain strict oversight and control over the work of all suppliers within an organization's supply chain. Counterfeits execute product functions as intended. They threaten product integrity because they are not the same as the actual part. Generally, the purpose of a counterfeit is to save money or to supply a feature that the maker is otherwise incapable of providing. As a result, counterfeits embody shortcuts in product quality or security that can fail in many ways. Because they function like the original part, it is often hard to spot a counterfeit in a large array of components. Consequently, it is critical that the customer knows their suppliers and fully understands their business and technical practices prior to engaging in any purchases. The problems caused by breakdowns in the supply chain mirror the problems encountered in conventional manufacturing, in that the failure lies in the operation of the actual supply chain itself. The same is true of the technical service concern. From the standpoint of product integrity, a failure to deliver a critical part is comparable to a denial of service attack, in that the actual product is not the cause of the problem; instead, it is the absence of a necessary ingredient in the production process that causes the harm. So efforts to mitigate risks to product integrity tend to concentrate on identifying and managing single points of failure.
From a technical service standpoint, the focus is on learning whether the supplier's operation is capable of delivering the product as specified. Since supplier capability is at the center of any outsourcing decision, it is



important to find out in advance whether the contractors that comprise the supply chain possess all of the capabilities required to do the work. Specifically, suppliers have to prove that they are capable of developing and integrating a secure product. Overall capability is usually demonstrated by the supplier's past history with similar projects as well as their documented ability to adopt good software engineering practices. That includes the ability to ensure that the sub-contractors a prime contractor employs are trustworthy. The issue of unintentional vulnerabilities is just a specific application of the supplier competence problem, in that defects in software and hardware occur because of failures in the development process. By definition, the installation of unintentional flaws is not a hostile act. However, the problem is so pervasive that the sheer number of exploitable vulnerabilities placed in ICT products makes it a major concern. There is an extensive body of knowledge (BOK) in ICT product assurance; however, since the steps necessary to ensure product integrity have to be instituted, managed and sustained in a logical way, best practices are often not followed, or are performed half-heartedly. The result is that common defects in ICT products are exploited by a growing array of criminal and other bad actors.
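One concrete control that supports this kind of supplier oversight, verifying that a delivered component matches a trusted reference before integration, can be sketched as follows; the manifest entry is an elided placeholder, not a real digest, and the workflow around it is an assumption:

    # Sketch: detect a counterfeit or tampered deliverable by comparing a
    # component file against a vendor-supplied SHA-256 reference digest.
    import hashlib

    def verify_component(path, expected_digest):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest() == expected_digest

    # The manifest would arrive from the supplier over a separate,
    # authenticated channel; the value below is an elided placeholder.
    trusted_manifest = {"fpga_bitstream.bin": "9f2a..."}

A mismatch does not say what is wrong with the part, only that it is not the part that was specified, which is exactly the integrity question the GAO categories raise.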

4. Toward a unified model of supply chain risk management best practice

It should be clear from the GAO report that an effective supply chain risk management process calls out three common-sense principles: "know and control your suppliers", "adopt rigorous assurance practice at the component level" and "rationally plan for failure". A very large percentage of the counterfeiting, supply chain critical-point-of-failure breakdowns and capability concerns can be mitigated by simply ensuring that every one of the entities up and down the supply chain is under strict management oversight and control, while unwanted functionality and development failures can be addressed by strict product assurance at all levels of the supply chain ladder. Then, when the inevitable failure does occur, there is a well-defined strategy to ensure that the problem is identified and resolved in a rational fashion. The Department of Homeland Security uses the term Enterprise Software Security Framework (ESSF) to describe the specific process for ensuring the reliability of purchased products (Steven, 2006). The aim of an ESSF is to factor everybody's responsibility for achieving secure software into a "who, what, when" structure of defined roles and interrelationships. To create this structure, DHS suggests that the ESSF must include roles and responsibilities beyond the typical system and software security roles seen in most organizations. The particulars of that idea are implemented by creating a unified model of best practice, which can then serve as a template for workforce role definition. The resulting picture of ideal best practice would then serve as the basis for developing specific recommendations, as well as a strategy for implementing them. The problem is that there is currently no single unified model of best practice for ICT supply chain risk management. A unified model incorporates all practical actions and theoretical knowledge into a comprehensive picture of how to do the work properly. Therefore, that model has to capture and interrelate all of the detailed know-how from all of the logical fields. Material from several conventional fields could conceivably be part of the practice of ICT supply chain risk management:

hardware and software engineering

systems engineering

information systems security engineering

safety

security

reliability

testing

information assurance

project management

In addition, it is probably valid to consider areas such as intelligence analysis and even the law as potential parts of the discipline.

210


Given the fact that the know-how necessary to ensure effective ICT SCRM practice is spread across a diverse set of areas, it was considered necessary to identify and categorize a standard framework of unduplicated, logically related lifecycle best practices. The recommendations of the ISO 12207-2008 and ISO 16085 standards were adopted as the foundation for this work. ISO 12207 was selected because it defines a standard set of activities for customer and supplier organizations. ISO 16085 is designed to work with 12207 to create a lifecycle set of standard ICT risk management practices. Organizing the recommendations of these two international standards into a single framework produced a hierarchy: lifecycle process groups, which drill down to logically related lifecycle processes within those groups; processes, which drill down to the set of lifecycle activities required to carry out each individual process; and activities, which drill down to an explicit set of logical tasks that can be represented as a discrete set of observable behaviors. Those behaviors characterize a correct sequence of actions that can be performed to carry out each task. Explicit practice specifications can then be customized from the general model based on audience. From an evaluation standpoint, the ability to perform each task can be confirmed through a set of observable actions. The integration process identified seven discrete areas of general practice:

Area 1: Project Initiation and Planning Practices

Area 2: Requirements Development and Bidding Practices

Area 3: Source Selection and Contracting Practices

Area 4: Supplier Project Management Practices

Area 5: Customer Agreement Monitoring Practices

Area 6: Product Testing Verification and Acceptance Practices

Area 7: Project Closure Practices

ISO 12207 provides a commonly accepted set of best practices for conventional IT procurement for each of these categories, while the activities and tasks specified in ISO 16085 provide an excellent collection of assurance best practices. This makes it possible to construct a detailed picture of SCRM practices and activities that can be used in building a definition of workforce roles, as illustrated below (ISO 16085, 2006; ISO 12207, 2008).

Table 1: Unified model of practices for supply chain risk management

Procurement Program Initiation and Planning: Develop the concept to acquire (business case). Define project scope and boundaries. Develop an acquisition strategy and/or plan. Define constraints. Make the decision to contract. Identify and mitigate outsourcing definitions. Install a risk management process. Perform product assurance risk assessment. Develop product assurance risk mitigation strategies. Ensure product assurance risk monitoring.

Product Requirements Communication and Bidding: Issue written requests to prospective suppliers. Standardize elements of the RFP. Document SCRM needs and requirements in the RFP. Specify SCRM terms and conditions. Specify information security features. Specify acceptance criteria for COTS integrations. Implement common criteria (if required). Create a specification. Specify SCRM measures and metrics. Create assurance language for a statement of work (SOW). Assure requirements for C&A in the SOW. Ensure the SOW specifies SCRM education and training. Develop the SOW to acquire COTS. Provide SCRM language in instructions to suppliers. Ensure the response reflects the specified capabilities. Ensure the supplier has submitted adequate information. Specify the initial product architecture. Specify the product assurance case management procedure. Specify the product assurance lifecycle. Specify product requirements and traceability criteria.

Source Selection and Contracting: Specify evaluation criteria. Ensure standard product assurance evaluation criteria. Specify assurance criteria in the Source Selection Plan. Perform contract negotiations. Perform project/contract management. Plan to oversee product assurance reviews and audits. Ensure competent product assurance professional(s). Oversee the supplier's delivery of product assurance. Define milestones for assurance statements. Define how performance will be evaluated if an SLA is used. Define the role that product assurance plays in product C&A. Define how the product architecture will be managed. Define what will be reviewed from an assurance perspective. Define how often the risk management plan will be updated. Define how often product assurance risks will be evaluated. Provide a mechanism for elevating product assurance issues. Devise an issues resolution plan and process. Define circumstances for intelligence updates. Define how corrective actions will be monitored. Define how product assurance savings will be measured. Define how experience level will be monitored. Define how to identify key product personnel. Define how key personnel will be monitored. Define how the assurance training program will be monitored.

Supplier Contract Execution: Create a management framework for the ICT project. Select a lifecycle model. Select processes, activities, and tasks and map them to the lifecycle model. Develop a plan to manage the quality and security of the project. Develop and document project management plan(s). Implement and execute the project management plan(s). Monitor and control progress throughout the contracted lifecycle. Develop the software product using internal resources, or develop the software product by subcontracting. Buy off-the-shelf software products from internal or external sources. Monitor the progress of the project. Manage and control the subcontractors. Ensure all contractual requirements are passed to subcontractors. Interface with the independent verification, validation, or test agent. Interface with other parties as specified in the contract and project plans. Coordinate contract review activities and interfaces. Conduct joint reviews in accordance with ISO standard specifications. Perform verification and validation to satisfy that requirements are met. Make reports available as specified in the contract.

Customer Agreement Monitoring: Monitor the supplier's activities using the Software Review Process. Supplement monitoring with verification and validation as needed. Ensure necessary information is provided in a timely manner.

Customer Acceptance: Prepare for acceptance based on the acceptance strategy. Prepare test cases, test data, test procedures, and the test environment. Define the extent of supplier involvement in acceptance. Conduct acceptance review and acceptance testing of the deliverable. Accept the product from the supplier when all conditions are satisfied. Arrange to make the customer responsible for configuration management.

Project Closure: Make payment or provide other agreed consideration to the supplier. Install the product in accordance with established requirements. Ensure the agreement terminates when payment is made. Transfer responsibility for the product or service to the customer. Provide assistance to the customer in support of the delivered product.
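To make the template concrete, the hierarchy in Table 1 can be rendered as a machine-readable structure against which workforce-role assignments are checked; the sketch below is illustrative only, covering two areas and a handful of tasks, and the role assignment is invented:

    # Sketch: Table 1 as data, plus a gap check for workforce-role definition.
    scrm_model = {
        "Procurement Program Initiation and Planning": [
            "Develop the concept to acquire (business case)",
            "Define project scope and boundaries",
            "Perform product assurance risk assessment",
        ],
        "Source Selection and Contracting": [
            "Specify evaluation criteria",
            "Perform contract negotiations",
        ],
    }

    def unassigned_tasks(model, assignments):
        """Return tasks that no workforce role has claimed yet."""
        claimed = {t for tasks in assignments.values() for t in tasks}
        return [t for tasks in model.values() for t in tasks
                if t not in claimed]

    roles = {"contracting officer": ["Perform contract negotiations"]}
    print(unassigned_tasks(scrm_model, roles))   # four unclaimed tasks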

5. Conclusions

This paper has proposed a set of supply chain risk management activities and practices. These activities and tasks comprise an initial picture of the knowledge needed to conduct a practical SCRM process correctly and effectively. The paper derives this set of activities and practices from established models of practice for the acquisition and supply of software and systems, along with additional risk control elements. This paper suggests that it will be possible to create a true body of knowledge in supply chain risk management using this set as a starting point. Supply chain risk management would probably not be an important concern if it were not for the fact that supply chain failures pose a potential threat to our way of life. Thus it is important to ensure that the practices in this field represent the best possible response. The first step in ensuring a proper response is to create a body of knowledge of best practice for the field, based on a top-level classification of its practices and activities. Nonetheless, the question still remains: why are the actions outlined here something that an organization should consider, especially given the time and cost required to prepare and execute an effective,



substantive program? The potential for highly destructive attacks directed through supply chain vulnerabilities is a reality in cyberspace, because the way we acquire our products and services lends itself to that kind of warfare. Whether the adversary is a nation state or a jihadist with control over a fourth-tier supplier in some third-world country, it is presently far too easy to cause serious harm through the insertion of malicious and counterfeit objects into the systems that underlie our national cyber-infrastructures. Given the swiftness of technological change, it is excusable that organizations might not have sufficiently prepared themselves to counter this emerging threat. It is inexcusable, however, to know that those attacks are likely to occur and to stand idly by without doing anything about the situation. This paper provides one suggested way of doing something about the problem.

References

Giles, Lionel (trans. 1910), "Sun Tzu on the Art of War", at http://classics.mit.edu/Tzu/artwar.html
Ruus, Kertu (2012), "Cyber War I: Estonia Attacked from Russia", European Affairs, FindArticles.com, 09 Feb 2012.
Clark, R.A. and Schmidt, H.A. (2003), "A National Strategy to Secure Cyberspace", The President's Critical Infrastructure Protection Board, Washington, DC.
Newman, Michael (2002), Software Errors Cost U.S. Economy $59.5 Billion Annually, Gaithersburg: National Institute of Standards and Technology (NIST).
Redwine, Samuel T., ed. (2006), Software Assurance: A Guide to the Common Body of Knowledge to Produce, Acquire and Sustain Secure Software, Version 1.1, Washington: U.S. Department of Homeland Security.
Alberts, Christopher, and Audrey Dorofee (2009), A Framework for Categorizing Key Drivers of Risk, Rep. no. CMU/SEI-2009-TR-007, Pittsburgh: Software Engineering Institute, Carnegie Mellon University.
GAO Report to Congressional Requesters (2012), "IT Supply Chain: National Security-Related Agencies Need to Better Address Risks", United States Government Accountability Office, March 23, 2012.
International Standards Organization (2008), Systems and Software Lifecycle Process - ISO/IEC 12207-2008.
International Standards Organization (2006), Systems and Software Lifecycle Process Risk Management - ISO/IEC 16085.
Evans, Gareth (2012), "Flying fraudulently - how a weak supply chain became the USAF's worst enemy", 25 September 2012, airforce-technology.com, accessed September 2012.
Steven, John (2006), "Adopting an Enterprise Software Security Framework", IEEE Security & Privacy, March/April 2006.



Thirst for Information: The Growing Pace of Information Warfare and Strengthening Positions of Russia, the USA and China

Inna Vasilyeva and Yana Vasilyeva
Kuban State University of Technology, Krasnodar, Russian Federation
inna1523@gmail.com
foryanav@gmail.com

Abstract: The progress of modern information technologies makes any society very vulnerable. Each breakthrough of mankind into the future fails to release it from the weight of past mistakes and unresolved problems. When economic wars become too dangerous and unprofitable due to the integration of national economies, and global military conflict is capable of leading to the extinction of all life on the planet, war acquires new directions and qualities: information warfare, with a great thirst for comfort in it. Today information resources have become the wealth of a country, like its minerals, production and human resources. The rapid development of informatization worldwide, specifically in the USA, China and Russia, and its penetration into all spheres of the vital interests of the individual, society and the state, have caused, besides doubtless advantages, the emergence of a number of significant problems. The urgent necessity to protect information, along with being protected from it, has become one of them. Considering that nowadays economic potential is increasingly defined by the level of development of the information structure, the potential vulnerability of the economy to information influences grows in proportion. Turning information into a commodity has led to a sharp aggravation of international competition for possession of information markets, technologies and resources; the information sphere considerably determines and greatly affects the economic, military, social, political and other components of the national security of a country. Information has become a global, inexhaustible resource and mankind has entered a new era of civilization - the era of serious exploitation of this resource. This is no longer a world where the material base is the subject of fierce rivalry. Now the key to success lies in the proper management of information capacity, i.e., strategic planning. Information is truly the foundation of life! The geopolitical confrontation and information warfare between the United States and China will be a major factor in world politics in the twenty-first century. This increasing tendency is pushing Russia towards further developing its information warfare capability along with ensuring national security, forming an open dialogue between civilizations, and resisting the threat of conflict in the field of information. The exhaustion of the planet's natural resources, their consumption and the growth of the population do not contribute to the reduction of information warfare. Therefore, the positions of Russia, the United States and China will be strengthened through deeper integration into the world information space. This paper highlights what drives the great thirst for information comfort and leadership, along with the serious threats posed by information warfare and modern global geopolitical tendencies.

Keywords: information, information warfare, Russia, the USA, China.

1. Introduction

From a society living in information poverty, we have almost instantly moved to a community suffering from information overload, with tendencies towards further communication concentration. An avalanche flow of information has surged over the individual, without giving him an opportunity to fully comprehend it. Nowadays, the human world under the reign of the Internet, mass media and advertising is a world managed by information, i.e. a world operated by shows. The theater of information hostilities stretches from a secret office to a personal computer and is operated on various fronts. The amount of information warfare (IW) financing demonstrates that leadership in this sphere is considered one of the main ways to achieve national strategies. Information warfare is not some hazy futurology sector but a real "discipline", which is being studied and developed, gaining ever more secretive, deeper forms. And it is not important how many types, configurations or weapons it has: what matters is what is hidden behind these words. Information warfare lies in managing what we see and hear: it is denial of access to information; it is special software; but above all, it is complete control of information. Information warfare in the Information Age is about controlling the "infosphere". The threat is difficult to understand and potentially very dangerous. The information warfare "cocktail" has a huge number of components. However, there are four basic fields: military, social, political/diplomatic and

Information warfare has almost infinite layers of knowledge of various aspects, values and directions, with a constantly growing, improving and developing structure. Information civilization not only transforms the status of information, with its positive effects, but also dramatically expands its negative sides. Before us is a powerful tool without limits. Varying intensities of information warfare have become an explicit sign of our days. Information has truly become like pure air for the inhabitants of the earth, and it can be both the purpose and the weapon. Modern information technologies are changing not only our usual way of life but our concepts of good and evil, justice and victim; in the end they change the human being himself. You cannot turn information warfare off. We are approaching a stage of development at which no one is a soldier, yet all of us are participants in military actions: information warfare is everybody's battle. Today's reality, with its telecommunication computing systems and psycho-technologies, is dramatically changing our surroundings. Separate tiny information brooks have merged into a solid stream. If previously it was possible to dam particular information channels, today all surrounding barriers have been destroyed, and the time needed for information to travel between the most distant points has been reduced to nearly zero. Within several months even general five-year plans become outdated. We live in a world where information spreads quickly; today it is the infrastructure. Information warfare is the war of decisions and control, the war of knowledge and intelligence. Its purpose is slowly but surely changing from 'saving oneself and destroying the opponent' to 'saving oneself and managing the opponent'. The aim is not directly to achieve information superiority but to manipulate the enemy (or one's own population) with false or adapted information; the resulting damage may not be immediately visible or detectable to the untrained eye. The actual distinction between states and international groups is unclear: organized criminal groups are teaming up with governments at different levels. Not all information wars work in the interests of the main industrially developed countries; more often they tend to work for minor political forces. All elements of information technologies, resources and systems, as well as the cogitative part of human activity, can be information targets, and any system (or human) has the potential to be reprogrammed (through impact on mentality) in order to achieve the desired goal. Information warfare, conducted in different spheres, has become total and multi-levelled. The decision-making system of the opposite side is now the primary target, and the main task is to organize the manipulation of its decision-making process. Information warfare is thus an umbrella term for multi-faceted, interdisciplinary strategies that blend physical and virtual events. The information weapon is, first of all, the algorithm. Using the information weapon means feeding the system the right input data to activate certain algorithms or, in their absence, to make active algorithms generate the necessary ones.
Versatility, secrecy, numerous software and hardware forms, radical influence, a wide choice of time and place of application and, finally, profitability make the information weapon extremely dangerous: it is easily disguised as a security facility and even allows offensive actions to be performed anonymously, without a declaration of war. Success in any war, above all in an information war, is impossible without reliable information and intelligence. For these purposes foreign intelligence services use a variety of techniques and methods, from monitoring the mass media to the most sophisticated ones, including industrial espionage and technical reconnaissance. Foreign technical intelligence services have developed a global investigation structure: multifunctional space intelligence systems, ground signal-intelligence and radiolocation centers, strategic aircraft, and marine systems and complexes of technical intelligence. Spending on these activities has not been reduced, and purchases of new technical software and hardware include space-based spy satellites, unmanned aircraft and the like. The bet is on 'smart' weapons guided by satellites, microwave bombs and drones. However, the greatest value in information warfare has been gained by the image component, which assumes a negative impact on the opponent's reputation that subsequently leads to the neglect and discrediting of the opponent's interests in the world community. Success in economics, politics, science and military affairs depends on how fast the necessary and reliable information can be obtained. It is equally important to deny the use of this information to an opponent, competitor or enemy.

The borders between economic and military battlefields have become increasingly blurred. Information warfare can be used to accomplish many traditional military goals, such as destroying enemy infrastructure targets, disabling defense systems or attacking civilian targets, and it may create new opportunities to manipulate enemy information and perception. Because of the massive intelligence requirements, knowing the design and vulnerabilities of the enemy's information infrastructure is essential for developing IW capacity, and gathering this intelligence requires frequent probing. IW can also be used in a 'non-military' way against individuals and whole societies. In the near future, information weapons are more likely to be used as terrorist weapons than on the battlefield by regular armies. Recent events have shown that victory, even in war, depends not only on the number of troops and tanks but on whose weapon is 'smarter' and how well it incorporates the latest achievements of science, technology and informatics. Future wars will be won by those with the better IT specialists. The information era carries the same dangers. Here a dualism appears: true information has its misinformation. Influence occurs in either case, but in absolutely different directions. The incidents caused by the perversion of information are demonstrated by modern economic crises, which have a purely informational nature. Just as the industrial era had conventional wars, the information era has information warfare. And just as the wrong use of high-tension current may cause injury or even death, the wrong use of unreliable information can lead to an almost complete collapse of the intellectual potential of the human population. Everyone knows that politics is concentrated economics, and war is based on purely economic motives. A new economy based on information services is starting to arise. In the near future corporations will have to develop a policy of information defense operations, or they will have little chance of survival, since modern industrial espionage has acquired a new information quality. Information is truly a precious commodity. Global informatization has created a single global information space in which information is processed, stored and exchanged between the subjects of this space: people, organizations and states. It is quite clear and obvious that new information technologies and the possibility of rapid exchange of political, economic, scientific, technical and other special information in production, management and public life are for the good of mankind. But just as rapid industrial growth has created an ecological threat to the earth, and the progress of nuclear physics has raised the danger of nuclear war, informatization can become a source of serious planetary problems too. The number of Internet users is constantly growing and now exceeds 2 billion. Every day, swimming in the oceans of the Internet, people are attacked not mainly by Trojans, worms and viruses but by the information processing of their consciousness. Whether they want it or not, information viruses imperceptibly enter a human being's system and cause serious errors which not only change ways of thinking and character but can also lead to mental and physical disorders. The information infrastructure and the mentality of the opponent (the 'human network') will be the main objects of damage. The primary resource is knowledge as human capital.
Information warfare is fundamentally not about information technologies (IT). It is about people, both those in the military and in civil society, both supporters and opponents. While the sophistication of weaponry is important, ultimately the targets are people and the battlefield is their perception of the information they receive. In order to win, or to avoid losing, it is necessary to understand the aspirations, motivations and intentions of people: one's own, one's enemies', and those of observers from outside or those caught in the middle. The informatization of such important spheres as finance and banking, transportation, communications, power grids, water supply, defense and national security, and the structures ensuring the stable work of ministries and departments, as well as the growth of electronic management of technological processes in production, provides a comfortable living space for cyber-terrorism and crime. Cyber-crime has reached the level of an international problem; in the United States this type of criminal activity ranks third after arms and drug trafficking. The concept of cyberwar comes down to three basic actions: deterring, preventing and resolving conflicts in the digital field. Cyber attacks are one of the greatest threats to international peace and security in the 21st century, and securing cyberspace is an absolute imperative. It has been shown that a single individual is capable of waging cyber war at a level we previously attributed only to intelligence agencies or crime syndicates.

Cyber attacks can be launched from literally anywhere, including cybercafés, open Wi-Fi nodes and suborned third-party computers; information warfare can be waged with a laptop by a regular person as well as by a computer engineer working for a government. Such attackers operate largely outside the international legal framework and do not require expensive or rare machinery. Despite expensive computer information systems, weaknesses in data protection keep being revealed, and as a result the expense and effort required to protect information constantly increase. Security is a process, not a product. One of the main reasons is the lack of education in this field: only proper knowledge can stop incidents and mistakes, provide effective protection, or prevent a crime. The most vulnerable spot in any security system is the human factor. Most computer users care little and know less about security; perhaps tens of millions of computers today are bots, capable of being controlled by dishonest others. Those concerned about security are finding that the borders between systems are becoming fuzzier and gauzier. The protection of data in computer networks is therefore becoming one of the most urgent issues. Today the strategic geopolitical advantage and economic prosperity of any country depend largely on the degree of its involvement in the information sphere. Information is an essential basis for decision-making in production, in civil and military infrastructure, in public authorities and in daily life. In comparison with other countries, the United States (US) has a significant advantage in the development and use of information and telecommunication technologies and has the highest level of computerization. The United States has consolidated dominant positions not only in the political, economic and military spheres but also in the global information infrastructure. This information dominance produces an ironic asymmetry: the United States is both powerful and vulnerable. The main strategic US priority is to be active in cyberspace in order to secure world leadership; after all, it is the USA that annually sustains enormous losses from cyber-crime and leaks of commercial information. The United States openly bets on information warfare techniques to achieve superiority in cyberspace and to preserve its leadership positions in the 21st century. The signals intelligence collection and analysis network 'Echelon' is a key component and great 'helper'. The USA continues to refine the concept of information warfare, the main direction being to expand the applicability of its techniques and methods. Alongside the USA, China is another world leader in information warfare. In the coming decades, the nation with the greatest potential to develop into a true rival of the United States is China, the main geopolitical opponent of the United States. The strengthening of China will obviously lead to a new configuration of geostrategic forces in the world, that is, a new structure of international relations. The unavoidability of a future geopolitical confrontation with the United States demands that the Chinese leadership prepare carefully for information operations under modern conditions. It should be said that information geopolitics is most restricted in China, which nonetheless pays great attention to the development of its mass media and the Chinese Internet.
China is actively promoting the concept of network Special Forces (battalion-sized units), which are to consist of highly qualified computer experts trained at state universities, academies and special centers. Active young people, especially Internet users, are the most welcome recruits. The main priority is thus a strategic course of developing concepts for the effective use of information warfare to achieve key political and economic targets. China is currently executing a patient and deceptive form of information warfare designed to advance its economic state, maintain its national unity, significantly improve its technological and military capabilities, and increase its regional and global influence with minimal or no fighting and without alarming the West, using information warfare grounded in its strategic heritage to achieve its national interests. Despite accusations that China is a major source of online attacks against the United States, officials of both countries declare that they will cooperate in the further development of mechanisms of cyber defense and control over cybercrime. Both countries have made significant progress on technological solutions and the joint fight against cyber-crime, which should help to prevent a global crisis in this area.

However, as China continues to grow in economic, military and political strength, it is essential that American strategists devote greater study to understanding this possible adversary. Many in the American defense community worry that China's growing presence in component manufacturing provides it with plenty of opportunities for mischief, which it may not be shy to exploit. Russia and China are the strongest opponents of perceived US domination with the potential to field strategic IW capabilities. Russia, for its part, is still building its own secure information infrastructure, albeit at a high tempo. The most important information security tasks for Russia are: realization of the constitutional rights and freedoms of citizens of the Russian Federation in the field of information activities; improvement and protection of the national information infrastructure; Russian integration into the world information space; and resistance to the threat of confrontation in the information sphere. The strategy of information defense is being strengthened by the creation of centers for training specialists, such as special information troops that will work with external and internal audiences. Russia needs an integrating state ideology: an ideology of intellectual and spiritual freedom, patriotism and majesty. It must master advanced information technologies and introduce effective models of strategic analysis and management. The national security of Russia in the 21st century will largely depend on the effective functioning of the information environment of society. The main goal of global cooperation is the establishment of an international legal regime regulating the military activities of states on the basis of the principles and norms of international law. The Russian position here is also to prevent an arms race and military conflicts: there should be an agreement, under the auspices of the United Nations, providing for international information security and extending the norms and principles of international law into the information arena. It is absolutely clear that the relationship between the USA and China will be the main factor determining the state of our planet in the coming decades; China is emerging as the United States' primary rival in the 21st century. Russia needs to continue increasing its economic and political potential and to define a balanced strategy amid the geopolitical conflict between the USA and the People's Republic of China. Russia is interested in maintaining constructive dialogue and cooperation with the US and in further strengthening relations with China. There is also the highly important and urgent matter of international cooperation in the adoption of legal instruments (e.g., a Convention) to provide information security under conditions of open cyberspace and transparent borders. The main purpose of such a Convention would be to ensure the national security and independence of states against the 'informational expansion' of countries with well-developed information infrastructures and stocks of information weapons. Information warfare is forced by nations that have highly developed technologies, and on the battlefield it will therefore be used mostly by them.
Unfortunately, most potential enemies today do not have the technological capability, although IW can still be used against them successfully. An enemy has to have high-tech weapons and communications to use IW in a practical manner; therefore the countries that are developing these new kinds of weapons and tactics are also the most vulnerable. Information warfare has become the most important geopolitical factor defining the destinies of countries and civilizations. The balance of power in the world has changed, and we are moving fast towards a completely different structure. A single global society is rapidly forming through the information and communication revolution, accompanied by geostrategic information antagonism between the leading countries of the world for superiority in the world information space. At the present stage, when the basic knowledge of mankind is accumulated across various modern civilizations, information warfare is coming to represent a war of civilizations: a war for a place in the sun in the face of the planet's dwindling resources. Like material resources, primary goods, energy, labor and financial resources, information is becoming a strategic national resource for society, one of the main treasures of the economically developed state.

Today's world no longer suffers from a lack of information and industrial products; on the contrary, it is distinguished by their surplus. This means that the purpose of information warfare is not just the protection of information but also protection from information and from the promotion of someone else's vision of the world. For most people, all modern wars are shown in the genre of an information series. These series pull the audience out of reality, creating the feeling of watching some fascinating film. This process is accompanied by an escalating level of emotional perception and empathy; when the show ends, the accumulated emotions are reconstructed into images of 'allies' and 'enemies', 'bad' and 'good' states, honest and evil politicians, and so on. The low cost of the technical tools that can be used in information warfare dramatically expands the range of its possible participants: individual countries and their intelligence agencies, criminal and terrorist groups, commercial companies and even actors without criminal intentions. Because of mass culture, information warfare can even be used against one's own people. An effective geopolitical information potential can be created only when the information sphere of the state is successfully operated and secured from outside influences. The interdependence of states is increasing worldwide, and the globalization of the world financial system has dramatically increased the influence of financial factors on national security. In the arena of information confrontation we now see not just nation states but also blocs of countries united by common international political interests. The necessary resources (material, technical, human, intellectual) can thus be located in completely different parts of the world yet work perfectly as a single organism waging information warfare. This kind of warfare has become a legitimate means of political struggle. The role of information warfare in international politics grows every year, and its image aspect is especially significant. The information and psychological factor will be among the most important in world politics. Information warfare affects not only mass consciousness but also the decision-making processes of the world political elite; the results of information confrontation therefore have real financial, economic and geopolitical consequences for states. In an ideal world, states would work together to eliminate information warfare and the cyber threat. Global cooperation may become a reality one day, but unless something changes to pressure sanctuary states into changing their behavior, there is no impetus for them to do so. Despite general peace among developed countries, there are still plenty of opportunities for conflict. The information society is our new reality, connecting local civilizations and information universals. It is a society which can survive only if tolerance of people's individual perceptions becomes its way of life. If we correctly grasp these specifics in the context of worsening global problems, we should take the path that leads to neutralizing the terrible potential of information warfare, consciously working to create adequate mechanisms for limiting the scope of violence on the one hand and enhancing tolerance on the other. Humanity has not yet realized what a terrible weapon it holds in its hands.
It is time for consolidation; it is time to strengthen our defenses against this growing danger in order to avoid an awful future. We face the same challenges and problems and should take joint action. People need safety and confidence in the future of their children and of the whole country. In information warfare we need a shield of faith, hope and love along with a sharp sword for protection. Needless to say, this is very difficult and intense, but I truly believe that it is something that can bring all of us, the whole world, together against terrorists, drug dealers, pornography and other criminal activities. And it will no longer matter under what surveillance we are, as long as it is the key to our security and protection, with fair rules and right regulations.


Investigating Hypothesis Generation in Cyber Defense Analysis Through an Analogue Task
Rachel Vickhouse1, Adam Bryant2 and Spencer Bryant1
1 711th Human Performance Wing, Wright-Patterson Air Force Base, Ohio, USA
2 Riverside Research, Beavercreek, Ohio, USA
rachel.taylor8813@gmail.com
adambryant11@gmail.com

Abstract: Sensemaking refers to the process of constructing a mental representation of a situation that includes the objects, data, events, intentions, and inferences involved in the situation. Our previous work studying reverse engineers' mental models revealed that their sensemaking activities can be decomposed into seven core processes that appear in a regular pattern. In this pattern, people work through the seven processes in a "sensemaking cycle" to interpret and understand situations by refining, confirming, and disconfirming a number of hypotheses about what the information in the environment means. The processes in the sensemaking cycle are: creating a goal representation, planning an approach, carrying out the plan, sensing information, interpreting information, updating knowledge, and generating a hypothesis. Many tasks associated with cyber defense, such as malicious software analysis and network intrusion monitoring, involve making sense of complex displays and connecting background knowledge to the information in them. By investigating the last process in the sensemaking cycle, hypothesis generation, we can better understand how it supports sensemaking and how best to design tools that aid cyber analysts in making sense of their displays. In this study we investigated hypothesis generation to discover the processes by which people generate the guesses that guide their interaction with the environment and ultimately support their sensemaking in the task. To do this, we developed an abstract hypothesis generation task to aid our analysis of interactive sensemaking work and conducted a verbal protocol study in which participants completed the task. We collected verbal and performance data from 13 participants, who were shown a display presenting eighteen cards, each with a different face and combination of character arities. The participants were able to ask a variety of yes-or-no questions about the arities displayed and use that information to refine their hypothesis; their goal was to accurately identify the character that had been selected by the computer. We collected concurrent verbal protocols, had multiple raters code the protocols according to Bryant's sensemaking cycle taxonomy, and observed that participants demonstrated the same pattern found in Bryant's previous study of sensemaking in reverse engineering.

Keywords: sensemaking, reverse engineering, cognitive task analysis, verbal protocol, cyber defense

1. Introduction
Analysis tasks in cyber defense involve quickly making sense of complex information streams in pursuit of information to support a decision. Previous studies investigated how analysts make sense of complex information when reverse engineering executable programs from code disassembled into x86 instructions (Bryant, 2012). During performance of their tasks, cyber operators engage in a process of "sensemaking" to elaborate a mental model of their situation, which can be thought of as their "situation model" (Kintsch 2000; Endsley 2000; Rodgers 2012). Bryant (2012) identified a seven-part "sensemaking cycle" that reflects the information foraging and learning elements involved in the sensemaking process. The processes in the sensemaking cycle are: creating a goal representation, planning an approach, carrying out the plan, sensing information, interpreting information, updating knowledge, and generating a hypothesis (Figure 1). In this study, we wanted to develop a refined description of how people generate hypotheses by integrating their background knowledge with information gained while interacting with a complex information display. To access the essential elements of hypothesis generation without getting lost in the semantic content of software reverse engineering, we designed an abstract hypothesis generation task that mirrors the way reverse engineers developed hypotheses about their tasks (Bryant, 2012). We developed a game-based experiment interface that allowed us to solicit information from non-expert participants; we then had 13 participants play the game and collected video screen captures of their performance along with concurrent verbal protocols as we had them think aloud during the task. The rest of the paper describes this research and proceeds as follows: Section 2 presents background on the cognitive processes of sensemaking, situation understanding, and hypothesis generation. Section 3 describes the design of an abstract hypothesis generation task that helped us narrow the scope of the sensemaking process. Section 4 describes the development of the task environment to help us operationalize the task.

Section 5 outlines the administration of a verbal protocol. Section 6 presents the data from the verbal protocol study, and Section 7 explains how the results of the study are useful in designing better interactions in cyber defense analysis tasks such as reverse engineering binary software.

2. Background
The process of making sense of a situation, or sensemaking, can be seen as the development of a mental model (Johnson-Laird, 1983), sometimes called a situation model (Zwaan, 1998). A situation model is a hypothesized representation of a person's knowledge of the current elements in his or her environment during the performance of a task (Endsley 2000, Zwaan 1998). We refer to sensemaking as encompassing a person's interaction with a task environment, the learning of the mental model during the task, the person's inferences and reasoning, and the way in which the person integrates knowledge, beliefs, and information. A person's mental model with respect to a task is defined as a structure containing the elements in the task environment that are important to accomplishing the mission: the objects, actions, events, and relationships involved in the situation. A situation involves an information-seeking goal, a plan to achieve the goal, a set of actions to carry out the plan, a sensing operation, an information-integrating operation, an operation to update the mental model, and an operation to generate a hypothesis (Bryant, 2012). There has been a growing body of work in naturalistic decision making that describes how people use and manipulate mental models at an abstract level to achieve intuitive, implicit, or tacit reasoning. A high-level description, however, can provide only limited insight into the design of an intuitive or intelligent interface. A more mechanistic description of the sensemaking process could allow direct implementation of the knowledge structures and algorithms required to develop, work with, and gain inference capabilities from a functioning situation model. Mechanistic descriptions of the problem-solving process have for many years been expressed in terms of problem states (defined by a discrete set of state variables), operators (functions that change state variables), and selection rules that determine which operators to apply (Newell, 1972). In these types of applications, the model of the situation is often taken for granted, in that the agent starts its task provided with most or all of the background knowledge necessary to complete it, and all that remains is the search through the state space of state-variable configurations. Predefined background knowledge can come in the form of an encoded declarative or semantic memory or a description of the problem state specified in a formalism such as STRIPS or PDDL. In these cases, the agent carries out a planning algorithm to construct the optimal path, using the structure of the provided representation as the data structure to search over (Russell and Norvig, 2003). In other cases, an agent might learn a behavior function through a supervised learning algorithm on a large data set, or it might use reinforcement learning to increase the reward on particular state-action pairs and eventually develop learned preferences for some paths over others (Sutton and Barto, 1998). More recent examples show that combining reinforcement learning and associative learning techniques may be useful for learning the proper actions in more dynamic environments in which the reward structure of the environment changes (Veksler, 2012). But in many cases, the learning algorithm provided to the agent involves an objective function that the agent is to optimize, and the learning task involves making decisions and selecting actions that maximize the value returned by that objective function.
In these cases, much of the work is already done by the modeler in specifying the problem (Rogers, 2008).
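As a concrete illustration of this classical problem-space formulation, the following sketch casts the familiar two-jug puzzle as states, operators, and a selection rule (here, uninformed breadth-first search). The domain and all names are our own illustrative choices, not examples from the works cited above:

from collections import deque

def successors(state):
    # Operators: each changes the state variables (levels of a 4L and a 3L jug).
    big, small = state
    pour_bs = min(big, 3 - small)    # amount poured big -> small
    pour_sb = min(small, 4 - big)    # amount poured small -> big
    return [
        ("fill-big", (4, small)),
        ("fill-small", (big, 3)),
        ("empty-big", (0, small)),
        ("empty-small", (big, 0)),
        ("pour-big-into-small", (big - pour_bs, small + pour_bs)),
        ("pour-small-into-big", (big + pour_sb, small - pour_sb)),
    ]

def solve(start=(0, 0), goal=2):
    # Selection rule: uninformed breadth-first search over the state space.
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal in state:            # goal test: either jug holds 2 litres
            return plan
        for op, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [op]))

print(solve())   # one shortest operator sequence reaching 2 litres

As the surrounding text notes, in such a formulation the modeler supplies the entire situation model up front; the agent only searches it.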

3. Abstract hypothesis generation task
The sensemaking cycle can be seen as a cycle where the person moves between having an information-seeking goal and gathering information from the environment. In this theory of sensemaking, the information-seeking goal is generated because a hypothesis leaves an unanswered question. There are a number of different types of hypotheses involved in a real-world problem-solving task. Hypotheses involve connecting a mental model (or schema) to the information structure of the environment. Constructing a situation model in a task involves a number of activities, such as:

 Recognizing attributes from a stored schema,
 Recognizing an object based on how its attributes match those of a stored schema,
 Recognizing other relevant attributes from the object that match some other schema,
 Learning the values of the object's attributes through sensory information,
 Learning a relationship exists between two objects through spatial and temporal association,
 Determining the affordances of an object, to see how it can be manipulated and what additional information can be gained,
 Interacting with an object through its affordances to learn its attributes,
 Creating names or labels with which to refer to the object (usually based on its attributes), and
 Revising other pieces of information (the agent's "beliefs").

Figure 1: Sensemaking cycle from Bryant et al., 2012

In each of these tasks, the hypothesis involves the assignment of values from a mental schema to one or more of the attributes of an entity, event, or relationship in the environment. In binary software reverse engineering tasks, assigning values to objects was manifested in the following ways:

 Naming a basic block of assembly code with a meaningful semantic label (assigning a name attribute to a basic block),
 Determining whether one instruction's execution causes a breakpoint at another instruction to be hit (assigning the two instructions as attributes of a "causes" relationship),
 Determining if a subroutine is the location where the code reads user input (assigning an event schema to a subroutine's attributes),
 Determining if the program has anti-debugging code (assigning an anti-debugging attribute to the program entity), and so on.

In a study of reverse engineers (Bryant, 2012), participants were asked to solve a challenging reverse engineering task by disassembling and making sense of an executable program from assembly language representations using the freeware OllyDbg debugger (OllyDbg, 2012), the IDA Interactive Disassembler (Hex-Rays, 2012), and a hexadecimal editor. In the reverse engineering task, the participants spent a great deal of time in four primary activities:

 Navigating through code trying to find a particular location,
 Simulating execution of the program in either large chunks (walking through breakpoints) or small steps (stepping over individual instructions),
 Translating small fragments of assembly instructions into higher-level representations such as pseudocode or block diagrams, and
 Explaining the program by determining the 'entities' in the code, and determining how the entities were related (Bryant, 2012).

In all of these tasks, the primary concerns involved entities, attributes, and relationships. Entities are any conceptual objects or patterns in the debugging environment to which a person can assign a name. Attributes are the properties that an entity has; they consist of pairs of slots (or data types) and values. Relationships refer to the category of connections that one entity has to another entity, such as having a similar feature in common, being temporally related, being spatially related, or being causally related. Relationships between entities can be entities themselves, with each of the participating entities acting as an attribute, and with the type of relationship and any value constraints on the relationship as further attributes. In Table 1, a relationship between two subroutines in a program is constructed as a "causes" relationship, laid out with the attributes on the left-hand side and the values on the right-hand side.

Table 1: An example of a relationship as an entity

RELATIONSHIP
  entity1            subroutine_4056
  entity2            subroutine_9F2E
  relationship-type  causes
  constraint         if value in EBX > 5 when entity1 is executed
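To make this representation concrete, the following minimal sketch (our own illustration in Python, not the authors' implementation) encodes entities as named bundles of attribute-value pairs and reifies the "causes" relationship of Table 1 as an entity in its own right:

from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    attributes: dict = field(default_factory=dict)  # slot (attribute) -> value pairs

sub_4056 = Entity("subroutine_4056")
sub_9F2E = Entity("subroutine_9F2E")

# The relationship from Table 1, reified as an entity whose attributes
# name the participants, the relationship type, and the value constraint.
causes = Entity("RELATIONSHIP", {
    "entity1": sub_4056,
    "entity2": sub_9F2E,
    "relationship-type": "causes",
    "constraint": "if value in EBX > 5 when entity1 is executed",
})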

In the reverse engineering task described in (Bryant, 2012), each of these activities involved the generation of hypotheses to narrow down information-seeking activities. In the navigation aspects of the task, the hypotheses involved which attributes an entity could be represented with, which entities were important to the task, and the location of different entities. In the simulation aspects, the hypotheses involved the value of a particular register after the next instruction executes, whether or not a breakpoint or code section would be executed, or the value of a reference location after a section of code executes. In the translation aspects, the hypotheses involved assigning meaningful semantic labels to individual fragments of assembly code and whether those assignments were valid and useful. Finally, in the explanation aspects, the hypotheses involved whether two entities in the code are temporally related (one entity comes before another), spatially related (for instance, one entity is below or near another in the debugger's visual display), or causally related (one entity's execution causes the execution of another). Given a representation involving entities, attributes, and values, these hypotheses can be distilled into whether or not an entity has a given attribute-value pair. We abstracted the problem to an assignment of attributes from a mental schema to attributes of a target-entity in the environment. The target-entity is the current focus of a person's attention, and the mental schema contains other 'hypothetical' entities that can match the target-entity. Given an entity in the environment that contains a set of attribute-value pairs and a mental schema that contains a separate set of entities, we can represent the process of generating a hypothesis as finding an entity from the schema whose attributes do not contradict the target entity but most closely match its attributes. An algorithm to carry this out is given in Table 2. When the GENERATE-HYPOTHESIS algorithm executes, it returns the entity that best matches and does not contradict the target entity from the environment. Upon formalizing the problem, it became clear that the task of generating a hypothesis is very similar to the categorization problem, in which a person tries to find which category matches an individual entity based on the assignment of attribute values, except that in real-world sensemaking tasks hypotheses are iterative: they build off one another by providing information from the environment that a person uses to add to the mental model and generate an information-seeking goal for the next iteration. We implemented a game task environment (Figure 2) that matched the requirements of the hypothesis generation task, namely in that it required participants to guess a target-entity (the secret person) based on partial knowledge of its attributes (skin color, hair color, sex, shirt color, and so on) from a set of existing schemas (the people lined up in a grid to the right).

The purpose was to design a simple hypothesis generation task that would allow us to focus on the processes involved in making "common sense" inferences but that lacked the semantic content of the larger-scale reverse engineering task. This task (called the Guess-Who task) was chosen because it simplifies the information-seeking and information-processing elements of the reverse engineering task analyzed in (Bryant, 2012). We hypothesized that the data from this task would also provide additional support for the seven-part sensemaking model displayed in earlier studies.

Table 2: A simple algorithm to generate a hypothesis

function GENERATE-HYPOTHESIS(target-entity, schema) returns entity
  inputs: target-entity, consisting of partially-filled attribute, value pairs
          schema, the current mental model consisting of entities
  for each entity in schema
    for each attribute in entity
      for each target-attribute in target-entity
        if CONTRADICTS(attribute, target-attribute)
          continue to next entity
        else if MATCHES(attribute, target-attribute) then
          increment attribute_match
    if attribute_match > previous_attribute_match then
      best_match <- entity
  return best_match
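The pseudocode in Table 2 leaves MATCHES and CONTRADICTS abstract. One natural reading, sketched below in Python (our own transcription, not the authors' code), is that a schema attribute matches when the target entity specifies the same value and contradicts when the target specifies a different value; attributes the target leaves unspecified do neither:

def generate_hypothesis(target_entity, schema):
    # Return the schema entity that best matches, and does not contradict,
    # the partially observed target entity (a dict of attribute -> value).
    best_match, best_score = None, -1
    for entity in schema:                          # candidates in the mental model
        score, contradicted = 0, False
        for attribute, value in entity["attributes"].items():
            if attribute in target_entity:
                if target_entity[attribute] != value:
                    contradicted = True            # CONTRADICTS: abandon this entity
                    break
                score += 1                         # MATCHES: increment attribute_match
        if not contradicted and score > best_score:
            best_match, best_score = entity, score
    return best_match

# Hypothetical usage with two candidate schemas:
schema = [
    {"name": "Alice", "attributes": {"sex": "female", "hat": False}},
    {"name": "Bob", "attributes": {"sex": "male", "hat": True}},
]
print(generate_hypothesis({"sex": "male"}, schema)["name"])   # -> Bob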

4. Task environment design
The task environment was designed so that it generated random faces constructed from evenly distributed sets of arities. With eight 2-arity features, one 3-arity feature, one 5-arity feature, one 6-arity feature, and one 7-arity feature, there were 161,280 different possible combinations of schemas. A novel stimulus was generated by drawing each of the individual attributes of the schema (hair, nose, eyes) and assembling them in a random ordering when the game was started. To ensure the values were truly random, the Mersenne Twister library was used (Matsumoto and Nishimura, 1998). The 2-arity features are skin color, sex, mouth shape (smiling, frowning), lip size, mustache existence, beard existence, hat existence, and eyewear existence. Nose shape has an arity of 3, eye color an arity of 5, hair color an arity of 6, and shirt color an arity of 7. The stimuli were randomly assembled in order to keep participants from learning the underlying distribution of attributes in the task rather than using a sensemaking process to determine the best match; a pilot study revealed that participants quickly acclimated to the underlying information structure of the task environment when the distribution of attribute sets was held constant. Since we had attributes of multiple arities, we expected to see people use the attributes that provided the most information (the ones with the smallest arity) to generate the hypothesis that would lead to their next guess. The game was developed in two stages, a pilot stage and a release stage. The pilot featured 18 stock images of character faces with preset names and arities, and a subset of the available arities (eye color, hair, headwear, etc.) was used to build a basic arity list. The stock characters had arities that discriminated the population into roughly equal halves regardless of the feature picked, leaving little freedom for creative approaches, and some arities, such as eye color, were difficult to see. This prompted the idea of a random feature set (within bounds: for example, women rarely have beards or mustaches, or are bald). In the release stage, a character portrait-creator was built to assemble characters through a layered feature-templating process, in which a random face structure based on sex was used and arities were randomly chosen and placed in order on top of the face structure (Figure 2). Names for each character were randomly chosen from a list of sex-discriminated names, with the Mersenne Twister algorithm again used to create randomness. Arities were made easier to discriminate from one another, and a more discrete and larger set of arities was used. This was done to keep participants from learning a specific arity's salience.
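A generator along these lines can be sketched as follows. The feature names mirror those listed above, but the specific value sets for the higher-arity features are our own assumptions, since the paper does not enumerate them; Python's random module happens to use the Mersenne Twister internally, whereas the authors used the library of Matsumoto and Nishimura directly:

import random
from math import prod

FEATURES = {
    # The eight 2-arity features named in the text.
    "skin color": ["light", "dark"],
    "sex": ["male", "female"],
    "mouth shape": ["smiling", "frowning"],
    "lip size": ["thin", "full"],
    "mustache": [True, False],
    "beard": [True, False],
    "hat": [True, False],
    "eyewear": [True, False],
    # Higher-arity features; the value sets below are assumed for illustration.
    "nose shape": ["small", "medium", "large"],                                      # arity 3
    "eye color": ["brown", "blue", "green", "grey", "hazel"],                        # arity 5
    "hair color": ["black", "brown", "blond", "red", "grey", "white"],               # arity 6
    "shirt color": ["red", "orange", "yellow", "green", "blue", "purple", "white"],  # arity 7
}

print(prod(len(values) for values in FEATURES.values()))   # 161280 possible schemas

def random_character(rng=random):
    # Draw one character by sampling each feature uniformly and independently.
    return {feature: rng.choice(values) for feature, values in FEATURES.items()}

board = [random_character() for _ in range(18)]   # the 18 cards on the display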

5. Verbal protocol analysis
Verbal protocol analysis was used as the method for collecting hypothesis generation information from the 13 participants. This analysis is a method for using qualitative data in a quantitative way; its purpose is to allow an understanding of knowledge representation in cognitive performance to be formulated. Properly instructing participants on how to verbalize their thoughts provides an avenue that does not alter the sequence of thoughts mediating the completion of the task, and the participants' verbal "think aloud" protocol can then be accepted as valid data. For think-aloud verbalizations to be credible, they must be kept distinct from retrospective reports of the thought process. To create a stronger connection between the brief verbalizations and the underlying thought process, reaction times, error rates and eye movements can also be collected; in this study the verbal protocol data were coupled with reaction times and performance data captured from the screen to produce a stronger study. Three methods of data collection were used in this experiment: verbal recording, screen recording, and captured Java log files. The program CamStudio (CamStudio 2010) was used to record all of the verbalizations along with the screen display of the activity. This allowed a connection to be drawn later between the time a participant clicked or responded to the environment and their verbalizations during the computer task.

Figure 2: A game task environment to investigate hypothesis generation

Verbal protocol data were collected and analyzed from 13 participants performing a hypothesis generation task in which the participant's goal was to select the computer-chosen character. The participant was able to ask "yes" or "no" questions about 12 specific arities to best hypothesize which character was the answer (Figure 2). Each trial had a guess limit of six. If a participant delivered the correct final guess before using up the six guesses, they accrued one point, and points accumulated until the program was closed. Participants did not lose a point for wrong guesses, although they did not gain one either. At the start of each session participants were prompted with instructions, and each participant was also given an initial written protocol that described the experiment. The participants were able to perform a practice trial before data collection began, during which the experimenter went over how to select a question, how to read the answer, how to turn over cards, the scoring and point system, and how to select a final guess. The experimenter also explained the recording component and asked participants to practice verbalizing their thoughts into the microphone before beginning.

The session allowed 15 minutes for performing the task, and the average number of trials a participant completed in that time frame was six. After each participant finished, their data were collected into an electronic folder for analysis. The video was played at the slowest speed to capture all of the verbalizations in a spreadsheet, with each complete thought displayed on a single Excel row (Table 3). Before the data were released to volunteer coders, they were arranged in a chart format including all seven sensemaking processes along with "delete" and "additional" columns. In the delete column a volunteer coder would indicate all verbalizations that were not of value. Each of the seven categories was described and given an example to provide standardization (Table 3).

Table 3: Coded representation of a portion of participant 3's verbal data

Time Stamp (sec)  Verbiage                                                                         Coded process
0                 Trial 1: I'm going to see if it is a male or female first                        Goal, Plan
5                 It is a male                                                                     Update
6                 So I am going to delete all the girls                                            Plan
13                What if this looks like a girl but it is really a guy?                           Sense
19                Alright, so you only have 2 people that have hats, 3 people that have hats       Sense
24                2 people that have glasses, so you eliminate 3 that have dark skin               Interpret
30                2 that are smiling, 4 that are smiling, 5 that are smiling, 6 that are smiling   Sense
34                So is it not smiling, that may be your best guess?                               Hypothesis
39                It's not smiling                                                                 Update
6. Results
The verbal data collected in this experiment permitted a task analysis to be completed. The sequences of all 13 participants revealed almost exactly the same strategy for executing the hypothesis generation task. A participant would begin the task with an initial observation of the characters presented, glancing at the questions and characters in an information-gathering pass. A first question or "guess" was then made using one of three natural discriminators: boy or girl, hat or no hat, or dark or light skin. The boy-or-girl discriminator was selected first by participants over 70% of the time across the first four trials of the task. After selecting a question, each participant would interpret the answer by acknowledging it verbally and then turning over all corresponding cards. Once the cards eliminated by the previous question were face down, the remaining cards would be reviewed to find the next salient feature. To make accurate inferences about the remaining salient features, participants would count aloud and compare the statistics of the remaining feature questions. The feature that would turn the most cards face down would be the participant's next selected question. The participants verbally made hypotheses about which feature was the most salient, along with which character attribute they perceived to be the winning guess. This observed process is supported by the performance and time data collected alongside the verbalizations. The process was repeated four to six times until a final hypothesis was generated about the computer-chosen character, when all of the cards were face down except one. If the participant interpreted each answer correctly and turned down the corresponding feature cards, the participant was normally correct and given a point. However, in approximately one to three trials each, participants ended the trial with the wrong character selected.

This happened when one of the steps was completed improperly, leaving incorrect cards remaining at the end of the trial. Rarely, the data showed a participant selecting a question that tested a character feature which did not correspond to the most salient option but was instead a riskier choice. Such a choice would lead to more cards being turned face down, but only if the feature asked about was correct; if the participant was right, the remaining options would decrease to a smaller number than under the regular method, which leaves a 50/50 split. The results give insight into how hypotheses are generated, both in a broad spectrum and in this task environment. This study yields preliminary data about how people filter information: participants mostly use salient features when they have no expectation about how the information will be reduced, and they make the cut by shrinking their information space as fast as possible using those salient features. This analysis was completed through verbal data alone, and no experimental claims are being made at this time. The experimental portion of this study provides only preliminary results and should be used to inform an experiment in a future study; the current experiment was not conducted in a well-controlled experimental setting and did not include enough participants. The purpose of this study was to gather information rather than to test a scientific hypothesis.
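The counting strategy the participants verbalized (pick the feature question that turns the most cards face down) can be formalized as a worst-case elimination rule. The sketch below is our own formalization of that observed behavior, not part of the study's software:

def best_question(candidates, questions):
    # candidates: attribute dicts for the cards still face up.
    # questions: (attribute, value) pairs that can be asked as yes/no questions.
    # Returns the question that minimizes the worst-case number of cards
    # still standing after the answer.
    def worst_case(question):
        attribute, value = question
        yes = sum(1 for card in candidates if card.get(attribute) == value)
        return max(yes, len(candidates) - yes)     # cards left after "yes" vs "no"
    return min(questions, key=worst_case)

cards = [{"sex": "male", "hat": True}, {"sex": "male", "hat": False},
         {"sex": "female", "hat": False}, {"sex": "female", "hat": False}]
print(best_question(cards, [("sex", "male"), ("hat", True)]))   # -> ('sex', 'male')

Under this rule the most evenly splitting feature is preferred, which matches the participants' preference for the boy-or-girl discriminator; the "riskier" choices noted above correspond to maximizing the best case rather than the worst case.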

7. Conclusions
The abstract hypothesis generation task helps researchers represent a situation model and a process for elaborating it. Using the participants' verbal reports as data, a calculated process model was created, and simple hypothesis generation could be displayed and better understood after data analysis. This information about the more general hypothesis generation processes involved in sensemaking is useful in developing better user interfaces and visualizations of data in other complex software systems that involve extensive information foraging to help a user develop a mental model. Since interaction with a simple hypothesis generation task allows us to represent hypotheses as "filling out" information about a schema, it provides the ability to generate other hypotheses that may be useful to an analyst, to let analysts see representations of the same data, and to let them perform "what-if" queries on their data to eliminate other candidates and filter the data down to what is meaningful. In a reverse engineering task, these generated hypotheses may include presentation of meaningful names for a subroutine, presentation of likely data structures that require further investigation, presenting the user with locations that contain input-handling code, or presenting likely entry points to a program. In each of these cases, it may be possible to reduce analysts' cognitive workload by having them "fill out" attributes of a set of given hypotheses about their data, rather than making them generate representations in their minds, reason about those representations, and keep track of it all. Additionally, by making the most important attributes of a data set the most salient, we can help users make better hypotheses and thus seek the most important information faster in the task environment (or analysis tool). Future work includes the design of this process as well as ways to integrate hypothesis generation algorithms into the workflow of existing tools without creating additional difficulties for the analyst.

Acknowledgements
This research was supported in part by an appointment to the Student Research Participation Program at the U.S. Air Force Research Laboratory, Human Effectiveness Directorate, Warfighter Readiness Research Division, Cognitive Models and Agents Branch, administered by the Oak Ridge Institute for Science and Education through an interagency agreement between the U.S. Department of Energy and USAFRL.

References
Anderson, J.R. (2007) How Can the Human Mind Occur in the Physical Universe?, Oxford University Press, New York.
Bryant, A. (2012) Doctoral dissertation.
Endsley, M.R. (2000) Situation Awareness Analysis and Measurement, Lawrence Erlbaum Associates, Mahwah, NJ.
Johnson-Laird, P.N. (1983) Mental Models: Towards a Cognitive Science of Language, Inference, and Consciousness, Harvard University Press, Cambridge, MA.
Kintsch, W. (2000) Metaphor comprehension: a computational theory, Psychonomic Bulletin & Review, pp 257-266.
Matsumoto, M. and Nishimura, T. (1998) Mersenne twister: a 623-dimensionally equidistributed uniform pseudo-random number generator, ACM Transactions on Modeling and Computer Simulation, Vol. 8, No. 1, pp 3-30.
Newell, A. and Simon, H.A. (1972) Human Problem Solving, Prentice-Hall, Englewood Cliffs, NJ.
OllyDbg (2012) [online], http://www.ollydbg.de
Russell, S. and Norvig, P. (2003) Artificial Intelligence: A Modern Approach, 2nd ed, Prentice Hall, Upper Saddle River, NJ.
Sutton, R.S. and Barto, A.G. (1998) Reinforcement Learning: An Introduction, MIT Press, Cambridge, MA.
Zwaan, R.A. and Radvansky, G.A. (1998) Situation models in language comprehension and memory, Psychological Bulletin, Vol. 123, pp 162-185.



PhD Research Papers



The Potential Threat of Cyber‐Terrorism on National Security of Saudi Arabia Abdulrahman Alqahtani Department of Politics and International Studies, The University of Hull, UK qahtaniasa@hotmail.com Abstract: Throughout history, there have been many events and dangers that threaten state security, causing heavy loss of life, disease, injuries, destruction of property, displacement of large numbers of people and heavy economic losses. Political unrest at international and local levels and recent technological developments increase the seriousness of threats against national security. The concept of security has evolved gradually, especially since the disintegration of the Soviet Union and the end of the Cold War. The lingering impact of the policy of the bipolar world has blurred the image of relations between states. However, it provides an opportunity to understand and identify new threats and emerging conflicts, in addition to many unsolved problems. Simultaneously, globalisation has changed international rules and norms in order to facilitate the rapid flow of capital and technology, with a weakening of national barriers. Non-governmental actors now play key roles in international politics, some as a threat, and others bridging the gap between communities and nations. In these circumstances, the role of the state began to suffer and the accepted traditional concept of power was challenged. Today, a major issue of concern worldwide, arousing heated debate at both national and international levels, is terrorism. The threat of terrorism has never been as prominent as it seems to be at the present time. Terrorism is an old phenomenon that has existed since the emergence of human societies, but the threat of terrorism has increased steadily over the past 30 years. With technological and technical progress, the actions of terrorists have become more dangerous and destructive, while the perpetrators of such acts are becoming more elusive. Few parts of the world have escaped terrorism since the late 1960s (Mythen and Walklate 2006). The phenomenon of terrorism is changing, while the motives of terrorism remain the same. The world today faces new and unfamiliar kinds of weapons. The international system, intelligence systems, and the security procedures and tactics which are expected to protect people, nations and governments are not able to meet this new and devastating enemy. The methods and strategies developed to combat terrorism over the years are proving ineffective, as the enemy no longer attacks only with hijacked planes, truck bombs or suicide bombers. Terrorists may engage in cyber-terrorism, the use of cyberspace to launch attacks. The integration of the virtual and physical worlds is a weakness confronting security agents (Collin 1996). This paper outlines a PhD proposal, which seeks to design an effective framework for assessing the potential threat of cyber-terrorism to national security, compared with conventional terrorism, by addressing three main themes (awareness, vulnerabilities and response) that are important in the assessment of any security threat. According to Denning (2000), to understand the potential threat of cyber-terrorism we should consider two factors: first, whether there are targets that are vulnerable to attack, and, second, whether actors have the ability and motivation to attack them. In this proposal a preliminary review of relevant literature will be introduced, followed by the research questions to be addressed and the proposed methods to address them.
Then the expected time frame will be considered. Keywords: terrorism, cyber‐terrorism, national security, Saudi Arabia

1. Introduction: Research title The proposed thesis will focus on cyber-terrorism and assess its potential threat to national security, as compared to conventional terrorism in Saudi Arabia. The suggested title is therefore "The Potential Threat of Cyber-terrorism on National Security of Saudi Arabia".

2. Literature review "If you ask 10 people what 'Cyber-terrorism' is, you will get at least nine different answers! When those 10 people are computer security experts, whose task it is to create various forms of protection against 'Cyber-terrorism'" (Gordon and Ford 2003). Gordon and Ford (2003) highlight the inconsistency of understandings of cyber-terrorism even among people representing government agencies charged with protecting critical infrastructure components and assets. However, they say this is not surprising because of the lack of documented scientific support to integrate the various aspects of computer-related crimes. The term "Cyber-terrorism" was first coined by Barry Collin, a senior research fellow at the Institute for Security and Intelligence in California, in the 1980s (Collin 1997). No single definition of the term has yet gained global acceptance. Tagging a computer attack as "Cyber-terrorism" is problematic because of the difficulty of determining the intention, identity, or political motivations of an attacker with certainty.

Security expert Dorothy Denning defines cyber-terrorism as "politically motivated hacking operations intended to cause grave harm such as loss of life or severe economic damage" (Denning 2001). In an earlier, widely cited description of cyber-terrorism, she explains: "Cyber-terrorism is the convergence of terrorism and cyberspace. It is generally understood to mean unlawful attacks and threats of attack against computers, networks, and the information stored therein when done to intimidate or coerce a government or its people in furtherance of political or social objectives" (Denning 2000). As she adds, for such attacks to be described as cyber-terrorism, the attack should lead to violence against persons or property, or at least cause enough damage to generate fear, for example attacks that lead to death or injury, explosions, plane crashes, water pollution, or severe economic loss. Cyber-terrorism is a kind of politically motivated terrorism perpetrated through the use of computers, information, networking, and infrastructure technology, in order to carry out destructive and malicious terrorist activities (Gallagher, et al. 1998). In view of the critical importance of these basic elements, experts believe that cyber-terrorism is more damaging than conventional terrorism (Knake 2010). The risk lies in the fact that cyber-terrorism could target infrastructure relating to government records, air traffic control, control of dams, medical records, as well as the financial and commercial infrastructure (Hansen, et al. 2007). The advent of cyber-terrorism together with deficiencies in information security could result in interventions that cause breaches of national security and loss of physical assets, digital assets, finance, consumer confidence, and even life. Such intervention could take various forms, such as the interruption of vital services, processes of exploitation (such as identity theft, theft of strategic information, dissemination of misleading information, misuse of control systems in the management of physical infrastructure such as dams and air traffic, or the creation of fear for coercive purposes), and the destruction of information (De Borchgrave and CSIS Homeland Defense Project 2001; Redins 2012). There are two main trends of cyber-terrorism attacks. The first type includes political acts aimed at the imposition of damage, which are very similar to acts of conventional terrorism, for example attacks on Danish websites as acts of revenge in response to the cartoons of the Prophet Muhammad published by a Danish newspaper. The second type is non-political actions which rely on information and communication technology and are implemented by hackers, such as "Virus attacks (either blind or targeted at certain types of systems), DoS & DDoS, racketeering and blackmailing, unauthorized access to private, corporate or government systems with the intention of viewing, copying and/or destroying data" (Gorge 2007). Current studies indicate that cyberterrorists are using the Internet as a means of carrying out hostile activities. However, previous research has not adequately explained the activities of cyber-terrorism and their effects. Future studies in this area may provide a better understanding and interpretation of the ideas and communication patterns of cyberterrorists. 
This would lead to the development of a strategic framework and policy for combating cyber-terrorism (EU 2012; UN 2012; Yunos 2009). The cyber-attack suffered by Saudi Aramco in mid-August this year was one of the largest and most damaging cyber-attacks in the history of industrial installations and companies around the world since the advent of the internet. Aramco's computer network was infected with a computer virus in an unprecedented act of sabotage. The virus caused the deletion of data on three-quarters of Aramco's PCs (about 30,000), replacing them with the image of a burning American flag. The virus, named "Shamoon" after a repeated word embedded in its code, is believed to be an Iranian response to the virus "Flame", with which Iranian oil systems were attacked last May. Subsequently, the Gulf States have become more responsive to the issue of cyber-security. For example, the United Arab Emirates in September announced the creation of a new agency to implement a national plan to combat threats to online security (BBC 2012; Levelle 2012; Perlroth 2012).

3. Research problem The modern digital age has been called the information age, the knowledge era or the intelligence era (Rowe, et al. 1996). Societies worldwide have become permanently dependent on electronic devices and networks to manage leisure, control of utilities, surgical procedures and exploration of space. Attacks on these networks and systems to achieve political goals are nothing but a kind of terrorism. While the first targets are computers

and networks, the general public are ultimately the victims. This issue must be addressed by political and security leaders, not left to departments of information and communication technology on the pretext that it is a technical issue difficult to understand. Saudi Arabia is a developing country characterized by very rapid development in the economy in general, and in the critical infrastructure of communications and information networks. This enormous development coincides with increasing reliance on these networks and communications in providing basic services needed by the people and the government on a daily basis. National security of any state is based on several elements, including the political system, government and critical infrastructure, to achieve the well-being of the state and the people. Since these elements rely on computer systems and networks for control, management, operations, SCADA (supervisory control and data acquisition) and communications, any defect or paralysis as a result of cyber-terrorism will be very costly and damaging to state organisations, and is thus considered a potential direct threat to national security. Given Saudi Arabia's role locally, regionally and globally in protecting the global energy market and keeping prices stable, any potential threat to the national security of Saudi Arabia may also affect global security. It could result in the destabilisation of international oil prices and consequent economic damage and political tension. Saudi Arabia is actively pursuing the development of critical infrastructure, along the lines of the developed world. It increasingly relies on computer systems and networks in government organisations, oil companies, banking, utilities and other services. This opens opportunities that could be exploited by terrorist organisations to launch cyber-attacks to disrupt and disable these services. Since "cyberspace" transcends physical boundaries, increasing communication through cyberspace increases exposure to traditional adversaries and to new and growing groups. Drug traffickers and organised crime could join this space in order to take advantage of advanced cyber-tools to launch their attacks. Therefore, cyber-attacks could supplement or replace conventional attacks. This dramatically increases the complexity of the problem and expands the vulnerabilities that must be faced. The resources at risk are not only the information stored in cyberspace, but all of the critical infrastructure components that rely on information technology. While Saudi Arabia has suffered conventional terrorism for some time, and has adopted measures to confront it, the possibility of a shift from conventional terrorism to cyber-terrorism leaves a security and legislative gap: Saudi Arabia is neither fully aware of its danger and vulnerability nor prepared to respond.

4. Research objectives The main objective of this study is to determine the status of the potential threat posed by cyber-terrorism towards national security compared with conventional terrorism. This objective can be achieved by obtaining data from critical infrastructure components and the general public. The aim is to derive insight into the extent of awareness and knowledge of cyber-terrorism (as a new threat) compared to conventional terrorism (a familiar threat) in Saudi Arabia, and what it means for national security. It aims to identify vulnerabilities that can be exploited to launch terrorist attacks, whether cyber or conventional, in order to determine the main threats to national security at the present time. It will investigate whether there are plans for responding to terrorism and, if so, their main focus. This will contribute to the identification of which of the two types of terrorism is more severe and intense. It also aims to obtain an assessment of the disparity between the components of critical infrastructure. This will help to identify the level of threat to each of these components, and so help to establish priorities among decision makers for developing solutions. While it is probably not possible to prevent the threat of cyber-terrorism, effective planning and response would reduce losses and damages, and speed recovery after terrorist incidents. By identifying the state of awareness, vulnerabilities and response of the average citizen, the research will assist in determining the necessary steps in the process of protecting individuals and their information, as achieving self-security is the first step towards achieving national security. The benefit desired from this study is that the state, represented by critical infrastructure components and the general public, is fully aware of the threat posed by cyber-terrorism, fully knowledgeable of its vulnerabilities,

and able to respond appropriately in case of attack. This would help in reducing the physical and moral effects which are the main objective of terrorism. It would also assist decision makers in crisis management, based on full knowledge of the situation.

5. Research importance Information and Communication Technology (ICT) has facilitated exchange between continents and countries, and has therefore become one of the pillars of modern society. While it brings many benefits, it also poses new risks and security concerns. With the growing scope and increasing number of users of this technology, terrorist attackers, hackers and intruders spend hours attempting to penetrate, steal or gain access to important information for the purpose of material and moral extortion. This research derives its significance from the importance of its themes. Saudi Arabia's national security is a top priority for the Saudi government. What will help to achieve it is the ability to understand, identify and predict early indicators of threats to national security.

6. Proposed research design Hakim (2000) stated that "research design is the point where questions raised in theoretical or policy debates are converted into feasible research projects and research programmes that provide answers to these questions". Because Saudi Arabia has suffered, and is still suffering, from conventional terrorism, comparing the potential threat of cyber-terrorism with conventional terrorism will facilitate the assessment process, drawing on the earlier experience of security officials, experts and even ordinary people in combating conventional terrorism. Basically, the research will examine, explore and assess the potential threat of cyber-terrorism to national security by assessing three factors: awareness, vulnerabilities and response. It will review the literature on the impacts of conventional terrorism, and then compare the two types in order to develop a framework for the potential impacts of cyber-terrorism in the future and to assess the potential threat, the latter being a choice available to terrorists through modern technology. Interviews will be conducted with officials and experts in components of critical infrastructure, questioning them about the state of awareness, vulnerabilities and response, whether existing or potential, with respect to terrorism, whether conventional or cyber. The researcher works as a police officer in Saudi Arabia, and at the Centre for Research and Security Studies in King Fahad Security College, a centre concerned with security studies in the various security sectors. This will facilitate access to security officials and experts in critical infrastructure components. Also, a random sample of the community will be selected to carry out a survey of public opinion through questionnaires. This sample will be selected from the capital city, Riyadh, where the demographic characteristics represent almost all regions of Saudi Arabia, and the classes of Saudi society also. Because the issue of cyber-terrorism may be new to respondents as compared to conventional terrorism, the researcher will develop multiple scenarios of cyber-terrorism to help the public understand the subject and respond to the questionnaire. This will help to give a clear picture to decision makers, security services, various government institutions and even the average person about the threat to the community posed by cyber-terrorism compared to the impacts of conventional terrorism. The interviews and survey will be carried out in Saudi Arabia as a case study, because it is an environment which is suitable for the topic. Firstly, it has been vulnerable to terrorist activities for quite some time, and secondly it is a state that is newly relying on the environment of cyberspace for infrastructure and economic and health services. For these purposes, a mixed approach is appropriate. According to Maxwell (2012), qualitative research generally achieves one or more of the following five research objectives. Firstly, it is a method that helps the researcher to understand in depth the sense of events, situations, and actions for participants in the study. Secondly, it facilitates consideration of the particular context within which the participants operate. Thirdly, it identifies unexpected phenomena and effects which might create new grounded theories in relation to the phenomena. In addition, qualitative research assists in critical reflection on the practice by which events and actions take place.

Furthermore, Campbell (1997) stated that qualitative research assumes that "reality is socially constructed and that variables are complex, interwoven, and difficult to measure", which demands that the researcher seek insiders' points of view and engage personally in the research process. This study will use grounded theory; studies of this type end with a framework or a theory. Therefore, the research will use mixed methods focused on discovery, which requires more than simple verification of knowledge. As previously stated, the data collection will focus first on primary sources with regard to the conceptual framework of conventional terrorism and cyber-terrorism. Use will be made of official documents and government publications as a key to the analysis of the impacts of terrorism on national security. Data will also be collected from secondary sources such as books, periodicals, newspapers, journals, and monographs. These will contribute to the establishment of the theoretical framework of the topic, in addition to the analysis and discussion of the case study. According to Burnham and others, "The use of such combination of methods may provide complementary data which can strengthen the findings" (Burnham 2008). Figure 1 illustrates the research methodology in brief.

Figure 1: The research methodology in brief
A major challenge in this research is the sensitivity of information related to terrorism, and the potential reluctance of government agencies and private sector infrastructure providers in Saudi Arabia to allow interviews with their officials and experts in the field. A possible alternative is an interview-questionnaire with standardised questions, which does not require a face to face interview with these officials or experts. Also, the subject (cyber-terrorism) is relatively new to the sample, so it will be necessary to develop scenarios of cyber-terrorist incidents to facilitate the process of responding to interviews or surveys. In addition, the lack of a general consensus on the definition and interpretation of the terms "terrorism" and "cyber-terrorism" is another hurdle. Hence, an attempt will be made to formulate a definition consistent with the purpose of the research. It must also be noted that interviewing is "time-consuming and expensive to

conduct" (Kumar 2011), especially when it involves travel by the researcher and the valuable time of officials and experts participating in the interviews. Another disadvantage of this technique comes from the fact that the "quality of data obtained depends upon the quality of the interaction between interviewer and interviewee and the skills of the interviewer in conducting a face-to-face interview" (Kumar 2011).

7. Research questions According to Collis and Hussey (2009), interpretive paradigm research questions "evolve during the process of research and may need to be refined or modified as the study progresses". However, in order to meet the objectives of the thesis, preliminary research enables the writer to identify certain relevant questions to be researched and answered: 1. Qualitative Questions To be answered through secondary data:

Does cyber‐terrorism exist as a reality? If so, what is its definition? What are the differences between it and cyberwarfare and cybercrime?

Is it easy or difficult to plan and execute cyberterrorist attacks?

Why do terrorists resort to information and communication technology to launch their attacks?

Is there a link between cyber‐terrorism and corporeal terrorism? Why?

Why and how do cyberterrorists make their attacks?

What are the potential targets of cyber-attacks that affect national security?

What are the potential impacts, if any, of cyber‐attacks on society?

To be answered through semi‐structured interview (officials and experts):

Is it possible for Cyber‐terrorism to expose society to the same effects as conventional terrorism? How? Why?

To what extent may Saudi Arabia (as a case study) be exposed to cyber‐terrorism? What are the potential impacts on it?

What knowledge do the critical infrastructure components and the general public have about cyber-terrorism and conventional terrorism?

Do the critical infrastructure components and the general public have awareness of cyber‐terrorism as compared to conventional terrorism?

Are there vulnerabilities in the critical infrastructure components and the general public that can be exploited by cyber‐terrorism or conventional terrorism?

Do the components of critical infrastructure have ready reaction plans available in the event of cyber-terrorist or conventional terrorist attacks?

What is the most likely threat to national security at the present time, cyber‐terrorism or conventional terrorism?

What is the disparity between the components of critical infrastructure (in terms of awareness, vulnerabilities and response) towards cyber-terrorism and conventional terrorism?

2. Quantitative Questions To be answered through survey (general public):

What are the trends of general public opinion in terms of awareness, vulnerabilities and response towards cyber-terrorism as compared to conventional terrorism?

These questions will guide the research and enable the researcher to focus on what to study and how to conduct the research.


8. Thesis structure The study conducted to reach the above stated aims will consist of the following parts:

A review of previous literature which relates to conventional terrorism and cyber‐terrorism, their threat and effects on national security.

The second part will look at previous research into the topic, which is primarily Eurocentric as it is based on studies in Europe and North America. This section will include a large volume of research which will eventually help lead to the isolation of the best knowledge and understanding in the field of terrorism.

The third part will include a review and analysis of data collected from primary and secondary sources to form a picture of the assessment of the potential threat of cyber‐terrorism and conventional terrorism to national security, and thus it will be possible to develop a comparison between them.

In this part, data will be analysed from the case study of Saudi Arabia, based on interviews carried out with officials and experts of critical infrastructure components and the survey of the general public.

This part aims to compare the results obtained from the analysis of data collected from different sources, regarding cyber-terrorism and conventional terrorism. According to Burnham and others, "The primary advantages of the comparative method can be summarised under four headings: it allows us to contextualise knowledge; to improve classifications; to formulate and test hypotheses; and to make predictions" (Burnham 2008).

The differences found in part E will be analysed and explained in relation to differences in culture, as identified in the literature review in section B.

The final part will include a framework for the potential threat of cyber‐terrorism on national security, compared with the effects posed by conventional terrorism. This will be the most practical and beneficial approach to be adopted by the policymakers, security services and even the international community. This information will be compared to the findings from previous research and will therefore help prospective researchers to identify any differences.

9. Time scale Figure 2 gives a graphical illustration of the time plan, plotting the research stages against months 1-12 of each year. First year: initial literature review; review of previous research on the topic. Second year: review of previous research on the topic; designing the methodology and executing a preliminary study; finalising the research methodology and design; executing the research; analysing the data; framework/model development. Third year: framework/model development; discussing the findings and linking them to the aims; reviewing the findings; writing up the study.

Figure 2: Graphical illustration of the time plan

It would be very difficult to give an accurate estimation of the duration of individual steps throughout the research period. However, below is a rough indication of time distribution which will sum up to approximately 3 years:

Ten months of literature review will be needed to have a full idea about the topic explored and to review the latest developments.

Three months to review the previous research on the topic.

Designing the methodology and executing a pilot study will take 2 months.

Refining the research methodology and design would take 1 month.


Executing the main field work will require 3 months.

3 months are required to analyse the data.

3 months to develop a framework showing the real threats that cyber-terrorism poses to national security, so that they can be responded to.

4 months to discuss the findings and establish the link to the research aims.

2 months for reviewing the data and discussion to make sure everything needed is available.

6 months to write the study up.

10. Conclusion This paper is an attempt to define the research questions, methods and research design for the PhD. It has presented some of the literature on cyber‐terrorism and some matters relating to it.

References
BBC. (2012). Saudi Aramco Oil Giant Recovers from Virus Attack. BBC News Technology (http://www.bbc.co.uk/news/technology-19389401).
Burnham, P. (2008). Research Methods in Politics (2nd ed.). Basingstoke: Palgrave Macmillan.
Campbell, T. (1997). Technology, Multimedia, and Qualitative Research in Education. Technology, Multimedia, and Qualitative Research in Education, 30(2), 122-132.
Collin, B. (1996). The Future of Cyberterrorism. 11th Annual International Symposium on Criminal Justice Issues (http://afgen.com/terrorism1.html). Chicago: The University of Illinois.
Collin, B. (1997). Future of Cyberterrorism: The Physical and Virtual Worlds Converge. Crime and Justice International, 13(2), 15-18.
Collis, J. and Hussey, R. (2009). Business Research: A Practical Guide for Undergraduate & Postgraduate Students (3rd ed.). Basingstoke: Palgrave Macmillan.
De Borchgrave, A. and CSIS Homeland Defense Project. (2001). Cyber Threats and Information Security: Meeting the 21st Century Challenge. Washington, DC: CSIS Press.
Denning, D. (2000). Cyberterrorism. Prepared testimony before the Special Oversight Panel on Terrorism, Committee on Armed Services, U.S. House of Representatives (http://www.cs.georgetown.edu/~denning/infosec/cyberterror.html). Georgetown University.
Denning, D. (2001). Activism, Hacktivism, and Cyberterrorism: The Internet as a Tool for Influencing Foreign Policy. In J. Arquilla and D. Ronfeldt (Eds.), Networks and Netwars: The Future of Terror, Crime, and Militancy (http://www.nautilus.org/info-policy/workshop/papers/denning.html). Santa Monica: Rand.
EU. (2012). The EU Terrorism Situation and Trend Report (https://www.europol.europa.eu/sites/default/files/publications/europoltsat.pdf). European Police Office.
Gallagher, Borchgrave, Cilluffo and Webster, W.H. (1998). Cybercrime, Cyberterrorism, Cyberwarfare: Averting an Electronic Waterloo. Center for Strategic & International Studies.
Gordon, S. and Ford, R. (2003). Cyberterrorism? Symantec Security Response (http://www.symantec.com/avcenter/reference/cyberterrorism.pdf). Symantec.
Gorge, M. (2007). Cyberterrorism: Hype or Reality? Computer Fraud & Security, 2007(2), 9-12, doi:10.1016/S1361-3723(07)70021-0.
Hakim, C. (2000). Research Design: Successful Designs for Social and Economic Research (2nd ed.). London: Routledge.
Hansen, J.V., Lowry, P.B., Meservy, R.D. and McDonald, D.M. (2007). Genetic Programming for Prevention of Cyberterrorism through Dynamic and Evolving Intrusion Detection. Decision Support Systems, 43(4), 1362-1374, doi:10.1016/j.dss.2006.04.004.
Knake, R. (2010). Cyberterrorism Hype v. Fact. http://www.cfr.org/terrorism-and-technology/cyberterrorism-hype-v-fact/p21434. Accessed 24/11/2012.
Kumar, R. (2011). Research Methodology: A Step-by-Step Guide for Beginners (3rd ed.). Los Angeles: SAGE.
Levelle, D. (2012). Cyberattack on Saudi Oil Company Aramco Reverberates. The World (http://www.theworld.org/2012/10/cyberattack-saudi-aramco/).
Maxwell, J.A. (2012). Qualitative Research Design: An Interactive Approach (3rd ed.). Thousand Oaks, Calif.; London: SAGE.
Mythen, G. and Walklate, S. (2006). Criminology and Terrorism: Which Thesis? Risk Society or Governmentality? The British Journal of Criminology, 46(3), 379-398.
Perlroth, N. (2012). In Cyberattack on Saudi Firm, U.S. Sees Iran Firing Back. The New York Times (http://www.nytimes.com/2012/10/24/business/global/cyberattack-on-saudi-oil-firm-disquiets-us.html?pagewanted=all).
Redins, L. (2012). Understanding Cyberterrorism. Risk Management. Retrieved from http://rmmagazine.com/2012/10/05/understanding-cyberterrorism/

Rowe, A.J., Davis, S.A. and Vij, S. (1996). Intelligent Information Systems: Meeting the Challenge of the Knowledge Era. Westport, Conn.; London: Quorum.
UN. (2012). The Use of the Internet for Terrorist Purposes (http://www.unodc.org/documents/frontpage/Use_of_Internet_for_Terrorist_Purposes.pdf). Vienna: United Nations Office on Drugs and Crime.
Yunos, Z. (2009). Putting Cyberterrorism into Context. STAR In-Tech, http://www.cybersecurity.my/data/content_files/13/526.pdf?.diff=1236049327.

Improving Cyber Warfare Decision-Making by Incorporating Leadership Styles and Situational Context into Poliheuristic Decision Theory Daryl Caudle Advanced Education Concepts, Wilmington, USA dcaudle@advancededucationconcepts.com Abstract: Cyber attacks on a global scale increasingly threaten critical infrastructures, information networks, and digital control systems that fundamentally support our national security and improve our quality of life. The ability to respond appropriately to a cyber attack in a timely and effective manner has never been more important. Unfortunately, decision-making uncertainty following a cyber attack can hinder and delay critical response options, including the use of force. A review of the literature indicated the decision-making uncertainty experienced by senior military officers was described by five interdependent characteristics: response process, human factors, governance, technology, and environment. Furthermore, the response decision-making process used by military officers following a cyber attack was best characterized by poliheuristic, noncompensatory decision theory. The literature also suggested that poliheuristic theory as originally formulated failed to incorporate two key elements of the decision-making process, specifically leadership styles and situational context. Accordingly, this qualitative, directed content analysis was used to validate published research findings that certain leadership characteristics and situational variables affect the poliheuristic decision-making process that senior military officers use when determining the appropriate response to a cyber attack. A directed content analysis was used to examine previously collected interview data from 21 senior military officers who served for the Chairman of the Joint Chiefs of Staff in cyber warfare divisions in Washington, DC. The results of this research study supported and triangulated the findings in the literature that leadership styles and situational context are necessary additions to a more comprehensive poliheuristic theory that has improved predictive and explanatory power. The research study also showed that senior military officers' leadership styles are more use of force oriented than politically oriented, with adversarial distrust and military assertiveness being the predominant themes. Finally, the research study results revealed that generational and organizational cultural effects, ethical considerations, and anticipatory skills influence leadership style and decision-making following a cyber attack. Keywords: decision-making, cyberspace, warfare, poliheuristic, leadership, situational

1. Introduction Cyber attacks are highly complex and disruptive events that challenge the existing warfare decision-making paradigm. The dual nature of cyber attacks creates decisional ambiguity as the line between espionage activities and the use of force becomes nearly indistinguishable in cyberspace. As a result, the decision-making uncertainty experienced by senior leaders when determining the appropriate response to a cyber attack is complicated by many interrelated factors that stem from this inherent duality (Caudle 2010; Owens et al. 2009). In a qualitative, phenomenological study, Caudle (2010) purported that the decision-making process used by senior military officers following a cyber attack was best described by poliheuristic, noncompensatory decision theory. Specifically, the decision-making process used when evaluating noncompensatory thresholds and determining appropriate response options was based on the seamless integration of cognitive and rational mental models (Caudle 2010; Mintz and Geva 1997). However, Keller and Yang (2008) found that poliheuristic decision theory in its original formulation (Mintz 1993; Mintz 2004a; Mintz 2004b; Mintz and Geva 1997) did not incorporate two essential facets of the decision process when considering the use of force. Explicitly, poliheuristic decision theory does not specify how the decision-maker's leadership characteristics and the situational context surrounding the decision space affect the determination of the critical threshold necessary to respond to an attack with force (Keller and Yang 2008). A review of the literature indicated that a cyber attack can be considered an armed attack when the Schmitt Analysis criteria (severity, immediacy, directness, invasiveness, measurability, presumptive legitimacy, and responsibility) combine to produce the equivalent effects of a kinetic use of force (Schmitt 1999). Based on this equivalence, Keller and Yang's (2008) recommended modifications to poliheuristic theory, which incorporate leadership characteristics and situational context, should improve the understanding and predictability of how military leaders make response decisions following significant cyber attacks.
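One hedged way to picture the Schmitt Analysis mentioned above is as a scoring exercise over its seven criteria. The sketch below (in Python) is purely illustrative: the criterion names come from Schmitt (1999), but the 0-10 scale, the example scores, and the decision threshold are invented here, and no such quantification is prescribed by the sources cited.

    # Illustrative sketch only: scoring a cyber attack against the seven
    # Schmitt Analysis criteria (Schmitt 1999). The 0-10 scale, the example
    # scores, and the armed-attack threshold are invented for illustration.
    SCHMITT_CRITERIA = [
        "severity", "immediacy", "directness", "invasiveness",
        "measurability", "presumptive_legitimacy", "responsibility",
    ]

    def resembles_armed_attack(scores, threshold=6.0):
        """Average the criterion scores and compare to a notional threshold."""
        mean = sum(scores[c] for c in SCHMITT_CRITERIA) / len(SCHMITT_CRITERIA)
        return mean >= threshold, mean

    # Example: a disruptive attack on a power grid (hypothetical scores).
    example = {
        "severity": 8, "immediacy": 7, "directness": 6, "invasiveness": 7,
        "measurability": 5, "presumptive_legitimacy": 8, "responsibility": 4,
    }
    armed, mean = resembles_armed_attack(example)
    print("mean =", round(mean, 2), "treat as armed attack:", armed)  # 6.43, True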


2. Background Cyberspace is defined as "a global domain within the information environment consisting of the interdependent network of information technology infrastructures, including the Internet, telecommunications networks, computer systems, and embedded processors and controllers" (Gortney 2012: 83). When considering the importance of information networks, one could easily argue that cyber warfare has existed for over 2500 years, since Sun Tzu articulated the importance of achieving information superiority through asymmetric methods in The Art of War (Geers 2011; Mazanec 2009). Since then, cyber warfare opportunities and capabilities have significantly evolved as the ubiquitous nature of cyberspace and the proliferation of networked systems have caused an unprecedented level of dependency on social connectedness and reliance on nearly instantaneous access to information. This reliance on cyberspace transcends financial institutions, transportation systems, online markets, telecommunication networks, and military command and control grids. Indeed, cyberspace has become intertwined with the fabric of society as the medium that blurs the line between our physical and virtual personas (Fox et al. 2009). With this level of dependency on cyberspace, little doubt remains that cyber attacks present an ever-growing threat to our way of life and our national security. Acknowledging the eventuality of a Cyber Pearl Harbor event, President Bush (2003: ix) warned, "Cyber attacks on information networks can have serious consequences such as disrupting critical operations, causing loss of revenue, intellectual property, or life." Furthermore, cyber warfare's low cost of entry motivates a large range of hostile actors with the ability to acquire sophisticated cyber attack tools and techniques that asynchronously and asymmetrically level the playing field between sovereign nations, terrorist groups, and rogue hackers (Kugler 2009). Therefore, national leaders must be aware that "cyber warfare enables attacks from anywhere in the globe at lightning speed" (Wilson 2003: CRS-29). The spectrum of adversaries with the potential to acquire and leverage state-of-the-art cyber capabilities, the speed and subversive nature of cyber attacks, and the potential catastrophic and cascading effects on critical infrastructures following a significant cyber attack can create considerable decision-making uncertainty when determining response options (Caudle 2010). A lasting characteristic of cyber warfare will be the "uncertainty regarding important factors that influence . . . decision-making calculations", including attribution of adversaries, characteristics of the attack, cyber warfare doctrine, and cyber deterrence policy (Cartwright et al. 2006: 16). The practical development of these factors remains immature despite a concerted effort over the last decade by the intelligence community, the Department of Defense, and the industrial base to collaborate and construct the rules, policies, and technology for making effective cyber warfare decisions (Jackson 2010). Intimately familiar with this challenge, Hayden, former Director of the NSA and the CIA, realized that the "disconnect between those . . . who understand the technology and the decision-makers charged with making policy must be bridged, and might ultimately require a generational shift in leadership" (Jackson 2010: para. 10).

3. Problem statement Defense Secretary Panetta warned, "The United States . . . is increasingly vulnerable to foreign computer hackers who could dismantle the nation's power grid, transportation system, financial networks and government" (Bumiller and Shanker 2012: para. 1). Secretary Panetta added, "If we detect an imminent threat of attack that will cause significant physical destruction in the United States or kill American citizens, we need to have [response] options to take action against those who would attack us, to defend this nation when directed by the president" (Bumiller and Shanker 2012: para. 16). However, Waters et al. (2008: 86) asserted, "Commanders at all levels will continue to deal with the uncertainty or the 'fog of war' due to a lack of complete and accurate information regarding cyber warfare." Unlike with traditional kinetic attacks, decision-making uncertainty impedes national and military leaders from making timely and effective response decisions following a cyber attack (Caudle 2010; Michael et al. 2003; Peng et al. 2006; Tubbs et al. 2002; Wilson 2007). Caudle (2010) found that the decision-making uncertainty experienced by senior military officers following a cyber attack was described by five interdependent characteristics: (a) response process, (b) human factors, (c) governance, (d) technology, and (e) environment. These interrelated characteristics are similar to the factors that describe organizational change uncertainty (Caudle 2010; Leavitt 1965; Radnor 1999). Further, Caudle (2010) showed that the decision-making process used by senior military officers following a cyber attack was noncompensatory in nature and best explained by poliheuristic decision theory. A review of the literature

indicated this finding was consistent with substantial research conducted on understanding how national leaders make foreign policy and use of force decisions in conflict and following kinetic attacks (DeRouen 2000; DeRouen and Sprecher 2004; Mintz 1993; Mintz 2004a; Mintz 2004b; Mintz and Geva 1997; Ostrom and Job 1986; Redd 2002). Poliheuristic theory combines the essence of cognitive and rational theories of decision-making into a coherent means of understanding and describing how leaders consider alternatives and critical dimensions of complex situations (DeRouen and Sprecher 2004; Mintz 1993; Mintz 2004a; Mintz 2004b; Mintz and Geva 1997). Poliheuristic theory is a two-stage decision-making process. First, decision-makers use heuristics (i.e., cognitive shortcuts or experiential rules of thumb) to consider the dimensions of the situation and eliminate alternatives in a noncompensatory manner (Mintz 2004b; Mintz and Geva 1997). That is, a less important dimension cannot compensate for a more critical or essential dimension even though it might have a relatively higher score. Second, from the remaining alternatives, the decision-maker uses analytical tools, expected values, and rational processes to select the best alternative that optimizes the decision based on a cost-benefit assessment (Mintz 2004a; Mintz 2004b). Poliheuristic theory has been successful in explaining the decision-making process used by national leaders who have made use of force decisions during armed conflicts (DeRouen 2000; DeRouen and Sprecher 2004; Keller and Yang 2009; Mintz 1993; Mintz 2004a; Nincic 1997; Redd 2002). In most of the situations examined in the literature, political context was a critical dimension (Brule 2005; Mintz 1993; Mintz 2004a; Mintz 2004b). Although poliheuristic theory has made empirically validated contributions to understanding decisions to use force during conflict, the theory, as originally formulated (Mintz 1993; Mintz 2004a; Mintz 2004b; Mintz and Geva 1997), does not incorporate or describe how leadership characteristics and situational context affect the noncompensatory elimination portion (first stage) of the decision-making process (Keller and Yang 2008). Therefore, this research study addresses the problem that poliheuristic theory, as originally formulated, is incomplete and, consequently, is not a fully developed decision theory.
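The two-stage process described above lends itself to a compact computational illustration. The sketch below (in Python) is a minimal, hypothetical rendering of poliheuristic choice rather than an implementation from the cited literature: stage one drops any alternative that falls below a noncompensatory threshold on a critical dimension (here, political acceptability), regardless of its other scores; stage two selects the surviving alternative with the best weighted expected value. All alternatives, dimensions, scores, weights, and thresholds are invented.

    # Minimal sketch of two-stage poliheuristic choice (hypothetical values).
    # Each response alternative scores 0-10 on several decision dimensions.
    alternatives = {
        "no_response":     {"political": 3, "military": 2, "economic": 8},
        "cyber_counter":   {"political": 7, "military": 6, "economic": 6},
        "kinetic_strike":  {"political": 2, "military": 9, "economic": 3},
        "diplomatic_move": {"political": 8, "military": 3, "economic": 7},
    }

    # Stage 1: noncompensatory elimination. A score below the threshold on
    # the critical dimension eliminates the alternative outright; high scores
    # on other dimensions cannot compensate for it.
    CRITICAL_DIMENSION = "political"
    THRESHOLD = 5
    survivors = {
        name: dims for name, dims in alternatives.items()
        if dims[CRITICAL_DIMENSION] >= THRESHOLD
    }

    # Stage 2: rational cost-benefit selection among the survivors, here a
    # simple weighted expected value across all dimensions.
    weights = {"political": 0.5, "military": 0.3, "economic": 0.2}

    def expected_value(dims):
        return sum(weights[d] * score for d, score in dims.items())

    best = max(survivors, key=lambda name: expected_value(survivors[name]))
    print("survivors:", sorted(survivors), "chosen:", best)
    # kinetic_strike is eliminated despite its high military score; the
    # survivors are cyber_counter and diplomatic_move, and cyber_counter wins.

Read this way, Keller and Yang's (2008) modification amounts to making the stage-one threshold a function of the decision-maker's leadership style and the situational context rather than a fixed constant.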

4. Purpose of the study The purpose of this qualitative, directed content analysis was to validate Keller and Yang’s (2008) research findings that certain leadership characteristics and situational variables affect the poliheuristic decision‐making process that senior military officers use when determining the appropriate response to a cyber attack. A better understanding of the decision‐making process used by senior military officers following a cyber attack is necessary to develop improved rules of engagement, more effective response options, and enhanced national security policies (Caudle 2010; Peng et al. 2006). Therefore, the goal of this directed content analysis was to determine if sufficient evidence existed to support Keller and Yang’s (2008: 687) conclusion that incorporating leadership style and situational context into poliheuristic decision theory enhances its “explanatory and predictive power.” By doing so, a more comprehensive poliheuristic decision theory should emerge and lead to improved cyber warfare decision‐making.

5. Research method and design A qualitative, directed content analysis was used to examine interview data from 21 senior military officers who served for the Chairman of the Joint Chiefs of Staff in cyber warfare divisions at the Pentagon in Washington, DC. The interview data were previously collected and coded by Caudle (2010) in support of a qualitative, phenomenological research study conducted to explore the decision‐making uncertainty that senior military officers experience when determining the appropriate response to a cyber attack. Qualitative research methods are preferred when seeking to interpret and understand the meaning of leadership phenomena, contextual situations, and cognitive processes such as decision‐making (Bryman et al. 1988; Cohen et al. 2008; Conger 1998; Holosko 2006; Leedy and Ormrod 2010). A directed content analysis is a research design used when “prior research would benefit from further description . . . to validate or extend conceptually a theoretical framework or theory” (Hsieh and Shannon 2005: 1281). Further, a qualitative content analysis serves as an appropriate research technique used to improve the rigor, trustworthiness, and consistency of other research findings via methodological triangulation by using multiple methods (Denzin 1978; Humble 2009; Leech and Onwuegbuzie 2007; Lincoln and Guba 1985).


5.1 Participant demographics The research population consisted of 21 senior military officers assigned to cyber warfare divisions serving for the Chairman of the Joint Chiefs of Staff in Washington, DC. Caudle (2010) used a purposeful, criterion-based sampling method and feedback from an expert panel to identify senior military officers with ranks varying from paygrade O5 to O7 from all four military Services. The participants were "cyber warfare experts whose roles and responsibilities included operational, strategic policy, computer network, doctrinal, organizational and legal experience" (Caudle 2010: 220). The participants, all cyber warfare experts, had an average of 22 years of military service and 2.4 years of cyber warfare decision-making experience at the national level (Caudle 2010). The sample included 15 men (71%) and 6 women (29%). This corresponded well with the actual gender demographics on the Joint Staff at the time of the original study, in which 77% were male (Caudle 2010). Other relevant demographics are provided in Table 1.

Table 1: Participants' services and paygrades

    Military Service    n    %        Military Paygrade    n    %
    Army                3   14        O5                  10   48
    Navy                6   29        O6                   7   33
    Air Force           9   43        O7                   4   19
    Marines             3   14        Total               21  100
    Total              21  100

5.2 Source data description The source data used for the directed content analysis completed in this research study was obtained by Caudle (2010), who conducted in-depth, semi-structured, personal interviews with 21 senior military officers serving in cyber warfare divisions on the Joint Staff. The interviews were digitally recorded, professionally transcribed into Microsoft Word documents, and imported into QSR NVivo 8 textual analysis software. The transcribed documents were originally coded "by employing Moustakas' (1994) modification to the van Kaam (1959) method of phenomenological reduction . . . in response to the lead interview question: Please describe the decision-making uncertainty you experience when determining the appropriate response to a cyber attack" (Caudle 2010: 251). The reduction process yielded 10 key themes that were "based on 87 invariant constituents that emerged from 210 broadly defined horizons identified from 1,579 coded textual expressions" (Caudle 2010: 251). A synthesis of the perceptual and experiential relationships found in the composite textual-structural descriptions resulted in the 10 key themes being further categorized into five distinct and interdependent characteristics (Caudle 2010). The five overarching characteristics and the 10 supporting themes that describe the decision-making uncertainty senior military officers experience when determining the appropriate response to a cyber attack are shown in Table 2. Furthermore, the five interrelated characteristics are consistent with the factors found in the literature that describe organizational change uncertainty (Caudle 2010; Leavitt 1965; Radnor 1999).

Table 2: Characteristics and key themes describing cyber warfare decision-making uncertainty

    Interdependent Characteristic   Key Themes (Rank Order)
    Response Process                Response Characteristic and Efficacy Considerations (T1); Cyber Warfare Characteristics (T7); Cyber Attack Characteristics (T8)
    Human Factors                   Social, Behavioral, Cultural, and Cognitive Aspects (T2); Experience, Training, and Education Considerations (T10)
    Governance                      Policy and Strategic Aspects (T3); Legal and Ethical Aspects (T4)
    Environment                     Organizational Concepts, Constructs, and Relational Considerations (T5); Cyberspace Characteristics (T9)
    Technology                      Data, Information, and Technology Considerations (T6)
6. Directed content analysis results A directed content analysis was conducted on the source data described in section 5.2. Unlike a phenomenological reduction procedure, a directed content analysis is a deductive research design that is "guided by a structured process . . . using existing theory or prior research . . . to identify key concepts or variables as initial coding categories" (Hsieh and Shannon 2005: 1281). For this research study, poliheuristic decision theory as modified by Keller and Yang (2008) was used to establish predetermined codes to conduct the initial analysis. The predetermined codes were based on the leadership styles and situational context categories shown in Table 3.

Table 3: Leadership styles and situational context coding results

    Characteristic                    Predetermined Coding Category         Interviews Containing  Text Segments
                                      (Keller and Yang 2008)                Code (N = 21)          Coded
    Leadership Styles with            Need for Power                        18                      90
    Political Orientation             Belief in Ability to Control Events   15                      63
                                      Task versus Interpersonal Emphasis    13                      43
                                      Self-Monitoring                       12                      37
    Leadership Styles with            Distrust                              19                     132
    Use of Force Orientation          Military Assertiveness                18                     115
    Situational Context               Threat or provocation                 13                      38
                                      Accountability                        13                      29

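To make the tallies in Table 3 concrete, the sketch below (in Python) shows one hypothetical way such counts could be produced: scan each transcript for text segments matching a predetermined code and count both the matching segments and the interviews in which the code appears at least once. The keyword lists and transcripts are invented stand-ins; the actual study coded the transcripts in QSR NVivo 8 rather than with custom code.

    # Hypothetical sketch of directed content analysis tallies: count text
    # segments matching predetermined codes and the interviews containing
    # them. Keywords and transcripts are invented for illustration.
    codes = {
        "distrust": ["insincere", "hostile", "cannot trust"],
        "military_assertiveness": ["use of force", "strike back"],
    }
    interviews = {
        "officer_01": ["We cannot trust attribution claims.",
                       "We should strike back quickly."],
        "officer_02": ["The adversary is implacably hostile."],
        "officer_03": ["Policy constraints matter more than the use of force."],
    }

    for code, keywords in codes.items():
        segments = 0
        interviews_with_code = 0
        for transcript in interviews.values():
            hits = sum(
                1 for segment in transcript
                if any(kw in segment.lower() for kw in keywords)
            )
            segments += hits
            interviews_with_code += 1 if hits else 0
        print(code, "-", interviews_with_code, "interviews,", segments, "segments")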
6.1 Leadership styles with political orientation Leadership content was prevalent throughout the source interview data. The importance of leadership with respect to making effective cyber warfare decisions was directly stated in 15 of 21 interviews. An evaluation of Caudle's (2010) phenomenological reduction of the source data indicated the average number of interviews and coded text segments for a particular invariant constituent was 12 and 51 respectively. Therefore, those averages were used as the benchmarks for describing the relative importance of the new thematic content found by the directed content analysis shown in Table 3. Overall, a weighted average of 15 interviews (71%) containing 62 text segments contained descriptions of the four coding categories related to leadership styles with political orientation. Of the four politically oriented categories, the "need for power" was predominant. According to Keller and Yang (2008: 691), the need for power "involves the desire to influence, control, or dominate other people and groups . . . [by] taking a more zero-sum approach to bargaining situations." This finding is consistent with Kinne's (2005) research regarding autocratic leadership styles associated with many senior military officers. The "belief in ability to control events" ranked second among the politically oriented categories. Keller and Yang (2008) described this category as a worldview in which individuals have confidence they can control situations by dictating and implementing policy directly rather than through delegation to subordinates. This controlling type of leadership style is prevalent among senior officials and military officers who make critical decisions under stress (DeRouen 2000; Keinan 1987; Nutt and Wilson 2010).

The remaining two politically oriented categories shown in Table 3 accounted for subjectively less thematic content. The senior military officers interviewed described the "task versus interpersonal emphasis" and "self-monitoring" leadership styles and referential characteristics less frequently. The task versus interpersonal emphasis style refers to "the extent to which one is relatively more concerned with getting the task accomplished [versus] attending to the feelings and needs of others" (Keller and Yang 2008: 691). The self-monitoring style refers to "the degree to which one closely monitors one's behavior to appear favorably to others" (Keller and Yang 2008: 692).

6.2 Leadership styles with use of force orientation All but two of the senior military officers interviewed described or referenced leadership characteristics consistent with the use of force leadership style orientation. A weighted average of 19 interviews (90%) containing 124 text segments contained descriptions of the two coding categories related to leadership styles with use of force orientation. Both the "distrust" and the "military assertiveness" categories were found to have substantially more thematic content than the categories associated with political orientation. The distrust leadership style refers to "the belief that others' statements and actions are often insincere . . . [and that the] world is threatening . . . with adversaries that are implacably hostile" (Keller and Yang 2008: 693). The military assertiveness leadership style involves the "general attitude toward the use of force as a policy instrument" (Keller and Yang 2008: 693). This finding is consistent with and supported by a review of the literature regarding the leadership characteristics, decision-making processes, and conflict resolution methods senior military officers exhibit with respect to the use of force (Keller and Yang 2008). Further, this directed content analysis showed that generational and organizational cultural effects, ethical considerations, and anticipatory skills influence leadership style and decision-making.

6.3 Situational context The importance of situational context and awareness with respect to making cyber warfare decisions was described by 13 of the 21 senior military officers. Specifically, a weighted average of 13 interviews (62%) containing 34 text segments contained descriptions of the two coding categories related to situational context. The "threat or provocation" category refers to the degree to which a "violent crisis or grave threat to basic values . . . triggers a matching level of force" (Keller and Yang 2008: 694). The "accountability" category refers to the extent to which "pacifying constraints prevent leaders from using military force irresponsibly and imprudently" (Keller and Yang 2008: 694). The density of thematic content in the source interview data for these two categories is consistent with Caudle's (2010) research. Caudle (2010: 265) found that "decision-makers seldom have complete situational awareness or a full understanding of the higher order effects and consequences of their decisions; a problem confounded by ambiguous leadership, ineffective oversight, and weak accountability measures."

7. Conclusions
The results of the directed content analysis suggest that leadership styles and situational context are important themes that affect the decision-making uncertainty senior military officers experience when determining the appropriate response to a cyber attack. Leadership styles and situational context should therefore be an inherent part of any governing theory used to fully predict and explain the response decision-making process. Caudle’s (2010) research indicated that the response decision-making process was best described by poliheuristic theory. Accordingly, the findings of this directed content research study validate and triangulate Keller and Yang’s (2008) assertion that leadership styles and situational context are missing from poliheuristic theory as originally formulated (Mintz 1993; Mintz 2004b) and are necessary considerations for a more complete theory. Furthermore, by conducting the directed content analysis on the interview data collected by Caudle (2010), this research study supported Keller and Yang’s (2008: 708) recommendation that “confronting military officers with accountability-related decision-making scenarios from their domains of expertise would confirm the general validity of these findings.” Finally, additional research studies are recommended that explore how generational and organizational cultural effects, ethical considerations, and anticipatory skills influence senior military officers’ leadership styles when making cyber warfare decisions.

References
Brule, D. J. (2005). Explaining and forecasting leaders' decisions: A poliheuristic analysis of the Iran hostage rescue decision. International Studies Perspectives, Vol. 6, No. 1, pp. 99-113.



Bryman, A., Bresnen, M., Beardsworth, A. and Keil, T. (1988). Qualitative research and the study of leadership. Human Relations, Vol. 41, No. 1, pp. 13-30.
Bumiller, E. and Shanker, T. (2012). Panetta warns of dire threat of cyberattack on U.S. The New York Times, October 11, 2012.
Bush, G. W. (2003). The national strategy to secure cyberspace. Washington, DC: The White House.
Cartwright, J. E., Pace, P. and Rumsfeld, D. H. (2006). Deterrence operations: Joint operating concepts (v. 2.0). Washington, DC: Department of Defense.
Caudle, D. L. (2010). Decision-making uncertainty and the use of force in cyberspace: A phenomenological study of military officers. Doctor of Management Dissertation, School of Advanced Studies, University of Phoenix, ProQuest UMI 3438389.
Cohen, M., Etner, J. and Jeleva, M. (2008). Dynamic decision-making when risk perception depends on past experience. Theory and Decision, Vol. 62, No. 2-3, pp. 173-192.
Conger, J. A. (1998). Qualitative research as the cornerstone methodology for understanding leadership. Leadership Quarterly, Vol. 9, No. 1, pp. 107-121.
Denzin, N. K. (1978). The research act: A theoretical introduction to sociological methods, New York, Praeger.
DeRouen, K. (2000). Presidents and the diversionary use of force: A research note. International Studies Quarterly, Vol. 44, No. 2, pp. 317-328.
DeRouen, K. and Sprecher, C. (2004). Initial crisis and poliheuristic theory. Journal of Conflict Resolution, Vol. 48, No. 1, pp. 56-68.
Fox, J., Arena, D. and Bailenson, J. N. (2009). Virtual reality: A survival guide for the social scientist. Journal of Media Psychology, Vol. 21, No. 3, pp. 95-113.
Geers, K. (2011). Sun Tzu and cyber war. Tallinn, Estonia: Cooperative Cyber Defence Centre of Excellence (CCD COE).
Gortney, W. E. (2012). Joint Publication 1-02: Department of Defense dictionary of military and associated terms. Washington, DC: Joint Chiefs of Staff.
Holosko, M. J. (2006). Primer for critiquing social research: A student guide, Florence, KY, Cengage Learning.
Hsieh, H. F. and Shannon, S. E. (2005). Three approaches to qualitative content analysis. Qualitative Health Research, Vol. 15, No. 9, pp. 1277-1288.
Humble, A. M. (2009). Technique triangulation for validation in directed content analysis. International Journal of Qualitative Methods, Vol. 8, No. 3, pp. 34-51.
Jackson, W. (2010). U.S. understanding of cyber war still immature, says former NSA director. Government Computer News [Online].
Keinan, G. (1987). Decision-making under stress: Scanning of alternatives under controllable and uncontrollable threats. Journal of Personality and Social Psychology, Vol. 52, No. 3, pp. 639-644.
Keller, J. W. and Yang, Y. E. (2008). Leadership style, decision context, and the poliheuristic theory of decision-making: An experimental analysis. Journal of Conflict Resolution, Vol. 52, No. 5, pp. 687-712.
Keller, J. W. and Yang, Y. E. (2009). Empathy and strategic interaction in crises: A poliheuristic perspective. Foreign Policy Analysis, Vol. 5, No. 2, pp. 169-189.
Kinne, B. J. (2005). Decision-making in autocratic regimes: A poliheuristic perspective. International Studies Perspectives, Vol. 6, No. 1, pp. 114-128.
Kugler, R. L. (2009). Deterrence of cyber attacks. In: Kramer, F. D., Starr, S. H. and Wentz, L. K. (eds.) Cyberpower and national security. Washington, DC: Potomac Books, Inc.
Leavitt, H. J. (1965). Applied organizational change in industry. In: March, J. G. (ed.) Handbook of organizations. Chicago, IL: Rand McNally.
Leech, N. L. and Onwuegbuzie, A. J. (2007). An array of qualitative data analysis tools: A call for data analysis triangulation. School Psychology Quarterly, Vol. 22, No. 4, pp. 557-584.
Leedy, P. D. and Ormrod, J. E. (2010). Practical research: Planning and design (9th ed.), Upper Saddle River, NJ, Pearson.
Lincoln, Y. S. and Guba, E. G. (1985). Naturalistic inquiry, Beverly Hills, CA, Sage.
Mazanec, B. M. (2009). The art of (cyber) war. The Journal of International Security Affairs, Spring 2009, No. 16, pp. 84-98.
Michael, J. B., Wingfield, T. C. and Wijesekera, D. (2003). Measured responses to cyberattacks using Schmitt analysis: A case study of attack scenarios for a software-intensive system. 27th Annual International Computer Software and Applications Conference, Dallas, TX. IEEE.
Mintz, A. (1993). The decision to attack Iraq: A noncompensatory theory of decision-making. Journal of Conflict Resolution, Vol. 37, No. 4, pp. 595-618.
Mintz, A. (2004a). Foreign policy decision-making in familiar and unfamiliar settings: An experimental study of high-ranking military officers. Journal of Conflict Resolution, Vol. 48, No. 1, pp. 91-104.
Mintz, A. (2004b). How do leaders make decisions? A poliheuristic perspective. Journal of Conflict Resolution, Vol. 48, No. 1, pp. 3-13.
Mintz, A. and Geva, N. (1997). The poliheuristic theory of foreign policy decision-making. In: Geva, N. and Mintz, A. (eds.) Decision-making on war and peace. Boulder, CO: Lynne Rienner Publishers.
Moustakas, C. (1994). Phenomenological research methods, Thousand Oaks, CA, Sage.
Nincic, M. (1997). Loss aversion and the domestic context of military intervention. Political Research Quarterly, Vol. 50, No. 1, pp. 97-120.



Nutt, P. C. and Wilson, D. C. (2010). Crucial trends and issues in strategic decision-making. In: Nutt, P. C. and Wilson, D. C. (eds.) Handbook of Decision-Making. West Sussex, United Kingdom: John Wiley & Sons, Ltd.
Ostrom, C. W. and Job, B. L. (1986). The president and the political use of force. American Political Science Review, Vol. 80, No. 2, pp. 541-566.
Owens, W. A., Dam, K. W. and Lin, H. S. (2009). Technology, policy, law, and ethics regarding U.S. acquisition and use of cyberattack capabilities, Washington, DC, The National Academies Press.
Peng, L., Wingfield, T. C., Wijesekera, D., Frye, E., Jackson, R. and Michael, J. B. (2006). Making decisions about legal responses to cyber attacks. In: Pollitt, M. and Shenoi, S. (eds.) Advances in Digital Forensics. Boston, MA: Springer.
Radnor, Z. J. (1999). Lean working practices: The effect on the organization, Manchester, England, Manchester School of Management.
Redd, S. B. (2002). The influence of advisers on foreign policy decision-making: An experimental study. Journal of Conflict Resolution, Vol. 46, No. 3, pp. 335-364.
Schmitt, M. N. (1999). Computer network attack and the use of force in international law: Thoughts on a normative framework. The Columbia Journal of Transnational Law, Vol. 37, pp. 885-937.
Tubbs, D., Luzwick, P. G. and Sharp, W. G. (2002). Technology and law: The evolution of digital warfare. In: Schmitt, M. N. and O'Donnell, B. T. (eds.) Computer network attack and international law. Newport, RI: Naval War College.
Van Kaam, A. L. (1959). Phenomenal analysis: Exemplified by a study of the experience of “feeling really understood”. Journal of Individual Psychology, Vol. 15, pp. 66-72.
Waters, G., Ball, D. and Dudgeon, I. (2008). Australia and cyber warfare. Canberra, Australia: Australian National University E Press.
Wilson, C. (2003). Computer attack and cyber terrorism: Vulnerabilities and policy issues for congress. Washington, DC: CRS Report for Congress.
Wilson, C. (2007). Information operations, electronic warfare, and cyberwar: Capabilities and related policy issues. Washington, DC: CRS Report for Congress.





Work in Progress Papers





Attack-Aware Supervisory Control and Data Acquisition (SCADA)
Otis Alexander, Sam Chung and Barbara Endicott-Popovsky
Institute of Technology – University of Washington, Tacoma, USA
Center for Information Assurance & Cybersecurity – University of Washington, Seattle, USA
otisa@uw.edu
chungsa@uw.edu
endicott@uw.edu
Abstract: SCADA systems are used for geographically distributed process control. They are deployed in national critical infrastructure such as transportation, the power grid and water facilities. Malfunctions in these systems can be catastrophic and can potentially cause harm to the environment and even human beings. As a result, SCADA systems are high-value targets for attackers and need to be able to detect intruders within the system before they can exploit vulnerabilities. In this paper, we propose an architecture that makes SCADA systems attack-aware. The architecture builds on several existing methodologies and works to improve application-level intrusion detection. It accomplishes this goal by helping to make sense of the vast amounts of log data that SCADA systems produce daily and, over time, by refining the logging process itself.
Keywords: attack-aware, SCADA, log mining, snapshots, baselining, anomaly detection

1. Introduction
Supervisory Control and Data Acquisition (SCADA) systems can be found in critical infrastructures such as power plants and power grid systems; water, oil and gas distribution systems; transportation systems; and production systems for food, cars, ships and other products (Hadžiosmanović et al., 2012). Even though SCADA systems are the backbone of many mission-critical facilities, they are not sufficiently protected against attacks. Hadžiosmanović et al. (2012) state that approximately 2,700 organizations dealing with critical infrastructures in the U.S. experienced about 150,000 hours of system downtime in 2005. Availability is crucial in the vast majority of SCADA systems, and its loss can be catastrophic for the environment and human lives.
There are various reasons why security incidents are on the rise in facilities using these systems. In the past, SCADA systems were mainly conceived as isolated systems, but because of the ever-growing demand for popular computing services and remote access to resources, they are increasingly being connected to other IT systems and often to the Internet (Cheminod et al., 2011). Traditional isolation was often sufficient to shield SCADA systems from the serious security problems that affect more open systems; accessibility and openness, however, have exposed them to the same security threats that traditional IT systems experience. The growing complexity of SCADA systems has also exposed them to an increasing number of security threats (Ten et al., 2010), as have interdependencies among computers, communication, and critical infrastructure hardware. This ever-increasing number of threats makes it harder and harder to manage the security of SCADA systems efficiently.
There is a need for security solutions that go beyond protecting network perimeters and instead focus on making the system and relevant parties aware of intruders already within the system (Watson et al., 2011). The timely and correct execution of industrial processes is of the utmost importance in SCADA systems, and the longer an attacker remains on a system undetected, the longer they can disrupt these processes. Intrusion detection is a daunting task within a SCADA system: due to the sheer amount of log data generated daily, it is extremely hard to pinpoint anything of real significance (Hadžiosmanović et al., 2012). By turning our attention away from system-related threats such as vulnerability exploits and focusing instead on process-related threats at the application level that involve the legitimate execution of SCADA commands, we can vastly reduce the search space. We propose an architecture that seeks to greatly improve intrusion detection by combining learning, anomaly detection, human analysis and the addition of custom detection points within application code to detect anomalous process-related events that may be signs of intrusion. Our architecture (Figure 1) forms a cycle in which the output (anomalous events that turn out to be genuine threats) is used to alert the SCADA system and relevant parties of potential intrusion and also serves to refine the overall logging process.




2. Previous work
Traditional methodologies for addressing security issues have mainly focused on security at the network level, though some do address security issues at the application level. Hadžiosmanović et al. (2012) present a methodology to systematically identify potential process-related threats with a tool called MELISSA (Mining Event Logs for Intrusion in SCADA Systems). Its strength lies in its ability to detect anomalous behaviour at the application level. A drawback of MELISSA is that it does not directly help make SCADA controller applications more secure from within; it relies solely on system logs, which may be unreliable due to corruption. We take a similar approach to MELISSA by focusing on process-related threats and using data mining techniques for anomaly detection.
AppSensor defines a conceptual architecture that offers guidance for implementing intrusion detection in new or existing applications by means of sensors, embedded within the application's code, that are capable of detecting malicious events. AppSensor does not rely on system logs, but instead generates its own. Although it uses pre-defined threats to detect malicious activity, it can also detect unknown threats when coupled with an analysis engine. We adopt AppSensor's placement of embedded sensors within the application's code in order to achieve the benefits of a security-by-design paradigm and to generate custom logs.
SCADAHawk (William et al., 2011) is a system that incrementally learns the normal behaviours of a SCADA system and then continuously watches for the occurrence of abnormal behaviours. It focuses on monitoring non-functional, system-related events. Our architecture also uses a baselining approach to find anomalous events, but focuses instead on process-related ones. (A baseline is a known state by which something is measured or compared.)

3. Attack-aware SCADA architecture
Our proposed architecture (Figure 1) has the sole purpose of improving intrusion detection by increasing its accuracy, efficiency and speed. It represents a continuous cycle that feeds on its own output to accomplish this goal. SCADA systems generate large amounts of system information daily, and this architecture reduces the complexity of log analysis by refining the logging process. The main aspects of the architecture are explained in the following paragraphs.

Figure 1: Attack‐aware SCADA Architecture

3.1 Learning phase
The learning phase combines focus group input, threat analysis and baselining to produce valuable information that aids anomaly detection.



Focus group input: Industrial processes in various domains differ greatly in detail, so it is essential to assemble a focus group to gather the relevant details about the specific SCADA system being analysed. The focus group should be composed of key stakeholders and process engineers. Receiving input from both groups, especially process engineers, is essential to the learning phase of this architecture. Process engineers are especially important because they are powerful system users who write the scripts that define process automation, and are therefore aware of the semantic implications of specifications. Input from this group should yield deeper insights about the SCADA system processes.
Threat analysis: During this step of the learning phase, we focus on generating process-related threats that exploit weak process controls and that imply an attacker has obtained user access rights and is issuing legitimate SCADA commands to disrupt the industrial process. We focus on threats that leverage vulnerabilities of the SCADA control application, including scenarios where attackers perform legitimate user actions that can have negative impacts on the production process or devices.
Baselining: To complement the other activities in the learning phase, system baselining is used to provide a more complete picture of the SCADA control application processes. To baseline the SCADA control application, a monitoring process generates an initial known state of the system over a period of time by recording system behaviour during real-time operation to form an event sequence called a snapshot. A minimal sketch of this step appears below.
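As a rough illustration of the baselining step (a sketch, not the implementation used in this work), the following Python fragment accumulates command n-grams observed during normal operation into a snapshot; the event names and log format are hypothetical:

```python
from collections import Counter

def build_snapshot(event_stream, window=3):
    """Count n-grams of SCADA commands seen during normal operation;
    the resulting counts form the baseline snapshot."""
    snapshot, history = Counter(), []
    for event in event_stream:            # e.g. ("open_valve", "v2")
        history.append(event)
        if len(history) >= window:
            snapshot[tuple(history[-window:])] += 1
    return snapshot

# Built once from a (hypothetical) log of known-good operation:
normal_log = [("set_point", "pump1"), ("open_valve", "v2"),
              ("start", "pump1")] * 50
baseline = build_snapshot(normal_log)
```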

3.2 Anomaly detection
To automate the tedious task of log review and to find events that are of interest to operators, anomaly detection is used in two forms: data mining and outlier detection. Both forms are sketched below.
Data mining: For this type of anomaly detection, we perform data aggregation and transformation on the logs gathered from the SCADA control application to obtain a data format suitable for pattern mining. A pattern engine then runs a frequent-pattern mining algorithm and outputs a list of patterns ordered by frequency of occurrence. Low-frequency events represent anomalies that need to be analysed.
Find outliers: At this point, the snapshot obtained from baselining the system during the learning phase can be compared with real-time operations running on the SCADA control application. This serves to detect outliers, which correspond to anomalous system process events.
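Both forms can be illustrated with short Python sketches (log field names are hypothetical; `baseline` is the snapshot built in the learning-phase sketch above):

```python
from collections import Counter

def rank_patterns(log_records):
    """Aggregate records into (user, action, object) patterns and rank
    them by frequency; the rarest patterns surface first for analysis."""
    patterns = Counter((r["user"], r["action"], r["object"])
                       for r in log_records)
    return sorted(patterns.items(), key=lambda kv: kv[1])

def find_outliers(realtime_events, baseline, window=3):
    """Yield command n-grams never observed in the baseline snapshot."""
    history = []
    for event in realtime_events:
        history.append(event)
        if len(history) >= window:
            gram = tuple(history[-window:])
            if gram not in baseline:
                yield gram
```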

3.3 Human analysis
Large portions of the learning phase and anomaly detection are automated. To ensure that the outputs of these activities are in fact genuine threats, human analysis is leveraged as a final verification step. Human analysis can result in the addition of sensors to the SCADA control application's code, the addition of threats to a known list of threats, or no action at all if the anomalies are not found to be threats.
Addition of sensors: Once credible threats are identified from human analysis of the anomalies found during anomaly detection, additional sensors can be added to the SCADA control application to improve the logging process. Once the detection points have been determined, thresholds are set for each point, or group of points, and the sensors can then be deployed within the application code. In this way each sensor acts like a tripwire within the SCADA control application, making it increasingly attack-aware (a minimal sketch follows this section).
Additions to known threats list: Maintaining an up-to-date list of known threats is essential to keeping the SCADA control application attack-aware. Upon inspection, some anomalies are inevitably going to be labelled as threats, and they subsequently need to be catalogued for later signature-based detection. They also serve as a helpful addition to the learning phase of the architecture.
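A minimal sketch of such a detection point (the threshold and alert plumbing are hypothetical, not part of the proposed architecture's specification):

```python
class Sensor:
    """A detection point embedded at a sensitive code path; it trips an
    alert once its event count crosses the configured threshold."""
    def __init__(self, name, threshold, alert):
        self.name, self.threshold, self.alert = name, threshold, alert
        self.count = 0

    def record(self, detail):
        self.count += 1
        if self.count >= self.threshold:
            self.alert(self.name, detail)

def alert(name, detail):
    print(f"ALERT {name}: {detail}")      # notify operators, log event

# e.g. trip after three out-of-band setpoint writes
setpoint_sensor = Sensor("setpoint_out_of_band", 3, alert)
```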

4. Conclusion and future work
Operators of SCADA systems have seen a recent increase in attacks against their systems. This is partially due to the growing trend of connecting SCADA systems to IT infrastructures and the Internet. The increased complexity of SCADA due to the rapid change of technology is also increasing the number of threats and



making it increasingly difficult to manage security. Security solutions already exist that protect the boundaries of the network, but additional solutions are needed to detect malicious activities at the application level. In this article, we propose an architecture that improves application-level intrusion detection by helping to make sense of the large amounts of log data that SCADA systems produce daily. Future work will include implementing the architecture, developing use cases and conducting testing within a testbed SCADA environment. After this we hope to address the sensitive subject of automated threat mitigation within a SCADA system. Innovative solutions will have to be developed to avoid disrupting the availability of production systems.

References
Cheminod, M., Durante, L. and Valenzano, A. (2012) Review of Security Issues in Industrial Networks. IEEE Transactions on Industrial Informatics, pp. 1-17.
Hadžiosmanović, D., Bolzoni, D. and Hartel, P. H. (2012) A Log Mining Approach for Process Monitoring in SCADA. International Journal of Information Security, Vol. 11, pp. 231-251.
Ten, C., Manimaran, G. and Liu, C. (2010) Cybersecurity for Critical Infrastructures: Attack and Defense Modeling. IEEE Transactions on Systems, Man, and Cybernetics, Vol. 40, No. 4, pp. 853-865.
Watson, C., Coates, M., Melton, J. and Groves, D. (2011) Creating Attack-Aware Software Applications with Real-Time Defenses. CrossTalk, September/October, pp. 14-18.
William, S., Gandhi, R., Zhu, Q. and Mahoney, W. (2011) Using Anomalous Event Patterns in Control Systems for Tamper Detection. In: ACM Proceedings of the Seventh Annual Workshop on Cyber Security and Information Intelligence Research. New York: ACM. 26.



Cyber Disarmament Treaties and the Failure to Consider Adequately Zero-Day Threats
Merritt Baer
Harvard Law School, Cambridge, USA
mbaer@post.harvard.edu
Abstract: Because the Internet carries a borderless aspect, it is unsurprising that international solutions to cybersecurity problems have become an increasingly insistent area for debate. The notion of a cyber disarmament treaty is appealing as we begin to wrap our minds around the destructive possibilities in cyber, including the potential for civilian casualties of cyber dimensions of recognized armed conflicts, a traditional arena for treaty-making. In this paper, I argue that calls for cyberwarfare treaties miss the mark because they conflate traditional forms of force with the avenues that nation-state cyber actors are exploiting. In this, I differ from existing cyber treaty skeptics because my rationale hinges on the substantive nature of the cyber threat as one ill-suited to treaties. I agree with others' critiques that definitions in the cyber world remain vague, often haphazard or poor adjustments of kinetic-world definitions. Yet the search for terminology to best address cyber threats and behaviors is not a problem particular to cyber disarmament. Another common critique of cyber disarmament is that we have no enforcing body. Similarly, I find this critique true but not necessarily unique: international law of all sorts faces problems in enforcement. I contend that the reason a cyber disarmament treaty is not an appropriate tool to address the threat of cyberwarfare is that it fails to recognize that the most threatening cyber warfare concerns involve quiet but lucrative zero-day threats. A zero-day threat is a foundational "hole" in software or hardware that can be exploited before its existence is even known. Emerging research shows that zero-day exploitations last longer and the payload is significantly higher than that of traditional hacking. Because a cyber disarmament treaty could only effectively bind countries to behavior that is known to the other players, it would not bind zero-day hacks or the deliberate installation of zero-day vulnerabilities in products. The notion of a treaty derives from a sense that violence is knowable at the moment when a defector acts. Cyber warfare simply acts outside of that assumption much of the time.
Keywords: zero-day threat, cyber disarmament, supply chain integrity

1. Introduction
Because the Internet carries a borderless aspect, it is unsurprising that international solutions to cybersecurity problems have become an increasingly insistent arena for debate. Many countries agree that in law, policy and diplomacy certain activities are beyond the pale, regardless of situational factors. These have included online crimes such as child pornography, human trafficking, and other uses of the Internet that offend human rights, and governments have worked together to eradicate these forms of online crime (see, e.g., Baines 2008).
As cyber warfare becomes more real, the notion of a cyber disarmament treaty sounds appealing. The destructive possibilities of cyber weapons include the potential for civilian casualties in the cyber dimensions of armed conflicts, as violent acts may be a hybrid of kinetic and cyber attacks, as in the 2008 Russo-Georgian conflict (see generally Bachmann 2012). Both the more kinetic manifestations of cyber weapons, like Stuxnet, and the more Internet-specific violence, such as escalating Internet vigilantism (see Wehmhoener 2010), have provoked a sense that law enforcement may need to assert better control over Internet violence. International cooperation provides resources and commitment.

2. Existing lines of scepticism
Despite its promise in the abstract, most critics who have examined the possibility of international cyber disarmament have taken a critical eye. Some argue that there is a problem with definitions. This is certainly true: as in other areas of cyber law and policy, many of the basic terms have not been coherently defined in US or international law. There is no clear definition of what "cyber" is limited to, let alone what cyber warfare does or does not entail. The use of the term "war" for security concerns that lack a geographic border has a controversial past in the United States, including the "War on Drugs" and the "Global War on Terror" (PBS 2012; see also Goldsmith 2011: 7 on the lack of consensus in definitions).
Other critics have opined that a treaty would be toothless because enforcement is not feasible (Goldsmith 2011: 6). Indeed, without remedies these treaties might be more embarrassing than productive, if they are established only to be ignored. They might also punish the rule-abiding countries while renegade countries



continue to develop a cyber arsenal, mirroring the ways in which countries like Iran and North Korea persist in pursuing nuclear weapon development despite international outcry and economic sanctions. Moreover, as Jack Goldsmith and James Lewis point out, cyberattacks often originate, technically or functionally, from the United States; exhortations to rogue countries to fall in line thus seem hollow (Goldsmith 2011: 8, citing Lewis 2010).
I recognize the value in many of these criticisms, but I also put forth one that has not formed the hinge of previous cyber disarmament conversations. I contend that calls for cyberwarfare treaties miss the mark because they conflate traditional forms of force with the avenues that nation-state cyber actors are exploiting.
It is true that definitions in the cyber world remain vague. Sometimes they are haphazard or poor adjustments of kinetic-world definitions, and other times they create categories that are immediately outdated. Lack of definitions is a recurrent problem and to some extent a moving target, as relevant technologies shift and the words we use to describe them move accordingly. But lawmakers have found (some imperfect) ways to approximate the ideas we seek to affect in making and enforcing laws on cyber crime. We have, for instance, found ways to reach the conduct of employees, students and library users online. See, e.g., United States v. American Library Association, 539 U.S. 194 (2003) (Congress has the authority to require public schools and libraries that receive federal funding to install Internet filtering software; such filtering does not violate patrons' First Amendment rights). The search for ways to best address cyber threats and behaviors is not a problem particular to cyber disarmament.
The second argument, that we have no enforcing body, is similarly true but not necessarily unique: international law in general faces problems of enforcement and jurisdiction; these problems are not unique to cyber. Nuclear disarmament may be easier to track in theory because of the relatively limited supply of enriched uranium, whereas cyber weapons depend on no comparably limited natural resource. In practice, however, the consolidation of resources and expertise required for a sophisticated, nation-state-level cyber threat exists only in a handful of nations and groups. A cyber disarmament treaty would by definition seek to address nation-state-level conduct, not the 16-year-old who is hacking from his garage for lulz.

3. Zero-day threats defy diplomatic approaches to cyber disarmament
I contend that one compelling reason a cyber disarmament treaty is not an appropriate tool for addressing the threat of cyberwarfare is that diplomacy fails to recognize that many of the most threatening cyber warfare concerns are not acts of aggression against security mechanisms. Rather, they are quiet but lucrative zero-day threats embedded in the technology, with no need to break down or blow up virtual doors or walls.
A zero-day threat is a foundational "hole" in software or hardware that those who are using the technology do not realize exists. Zero-day vulnerabilities can be deliberately built into the software or hardware, or left accidentally during development and later discovered and exploited.
In a recent study, a pair of Symantec researchers outlined some of the zero-day threat landscape using data from 2008-2011 (Bilge and Dumitras 2012). They extracted their detection data from the Worldwide Intelligence Network Environment (WINE) platform, using a signature/reputation-based malware detection model and comparing the results against Symantec's own threat database. Their findings demonstrate that zero-day attacks are even more dangerous and prevalent than previously thought, with the average zero-day attack lasting 312 days and some lasting up to 2.5 years (ibid: 7).
The Symantec report revealed that the same hackers who attacked Google were active in at least eight additional zero-day attacks in the past three years (Zetter 2012 and Schwartz 2012). Many of these are hybrid zero-day attacks that use supply chain vulnerabilities to access companies, riding on vulnerabilities in companies that provide electrical or mechanical parts to targeted industries until they breach the target company, then installing a backdoor Trojan that gives the attackers control over the victim's machine (ibid). These attacks are by nature different from hacking. For instance, one attack structure they have used is the "watering hole" attack: the predator waits at a virtual watering hole for the prey. The attacker has infected a legitimate server, so that when a user browses to the site (the watering hole), the returned Web



pages point to the server hosting the exploit kit (O'Gorman and McDonald 2012: 3). The Google hackers, whom researchers dubbed the "Elderwood gang," have utilized so many zero-day vulnerabilities that some analysts speculate they have access to source code (O'Gorman and McDonald 2012: 2).
Zero-day attacks embed mining capabilities yet remain elusive to security software looking for data traffic going in or out; they also give hackers the flexibility to access data over time and to refine their extraction to be increasingly targeted based on data that was previously mined. These characteristics make zero-day hacks valuable; an Adobe Reader exploit sold in November 2012 for $50,000 (Krebs 2012).
Because a cyber disarmament treaty would only effectively bind countries to behavior that is known, it would not bind zero-day hacks or the deliberate installation of zero-day vulnerabilities in products. The notion of a treaty derives from a sense that violence is knowable at the moment when a defector acts. Cyber warfare simply acts outside of that assumption much of the time.

4. Internet presents a new form of violence, not just a new forum
Because international manufacturers produce a good number of the parts in US hardware and software, and because zero-day vulnerabilities are extremely difficult to detect, supply chain integrity is a challenge that goes hand-in-hand with zero-day threats (see, e.g., Chang 2012). A defector country could sign onto a cyber disarmament treaty while simultaneously exploiting zero-day vulnerabilities embedded in parts used in manufacturing everything from hardware and software to cellular phones and satellites. Moreover, a country could do so while remaining effectively undetectable for some time, preserving plausible deniability and continuing to refine its technology to make the holes even less detectable while the treaty-abiding countries back away from technology tailored to cyber aggression.
As I have argued before, we need to adjust categories of Internet crime and violence to fit the landscape we inhabit (Baer 2010 and 2011). That landscape encompasses an Internet life for each of us, and those lives come with associated risks, some of which are particular to Internet violence. Diplomatic approaches must take into account the technical characteristics of emerging threats. Conversation about mitigating cyber violence must recognize that war-level harm need not come in the form of bombs, nor should defense strategy model bunkers.

References
Baer, M. (2010) "Cyberstalking and the Internet Landscape We Have Constructed," Virginia Journal of Law and Technology, Vol. 15, No. 2.
Baer, M. (2011) "The Uses and Limits of Game Theory in Conceptualizing Cyberwarfare," International Conference on Information Warfare and Security, presented at George Washington University, Washington, DC.
Bachmann, S. (2012) "Hybrid Threats, Cyber Warfare and NATO's Comprehensive Approach for Countering 21st Century Threats—mapping the new frontier of global risk and security management," [online], Amicus Curiae 88, http://www.academia.edu/1324010/HYBRID_THREATS_CYBER_WARFARE_AND_NATOS_COMPREHENSIVE_APPROACH_FOR_COUNTERING_21st_CENTURY_THREATS_-_MAPPING_THE_NEW_FRONTIER_OF_GLOBAL_RISK_AND_SECURITY_MANAGEMENT.
Baines, V. (2008) "Online Child Sexual Abuse: The Law Enforcement Response," [online], Virtual Global Taskforce as presented by ECPAT International, http://www.ecpat.net/worldcongressIII/PDF/Publications/ICT_Law/Thematic_Paper_ICTLAW_ENG.pdf.
Bilge, L. and Dumitras, T. (2012) "Before We Knew It: An Empirical Study of Zero-Day Attacks in the Real World," [online], Symantec Research Labs, http://users.ece.cmu.edu/~tdumitra/public_documents/bilge12_zero_day.pdf.
Chang, A. (2012) "'Made in USA' Nexus Q Teardown Reveals Many Overseas Parts," [online], Wired, http://www.wired.com/gadgetlab/2012/07/made-in-usa-nexus-q-teardown-reveals-many-overseas-parts/.
Goldsmith, J. (2011) "Cybersecurity Treaties: a Skeptical View," [online], The Hoover Institution, Stanford University, http://media.hoover.org/sites/default/files/documents/FutureChallenges_Goldsmith.pdf.
Goodin, D. (2012) "Zero-Day Attacks are Meaner, More Rampant than We Ever Thought," [online], Ars Technica, http://arstechnica.com/security/2012/10/zero-day-attacks-are-meaner-and-more-plentiful-than-thought/.
Krebs, B. (2012) "Experts Warn of Zero-Day Exploit for Adobe Reader," [online], Krebs on Security, https://krebsonsecurity.com/2012/11/experts-warn-of-zero-day-exploit-for-adobe-reader/.
Lawson, S. (2012) "Cyberwarfare Treaty Would Be Premature, Unnecessary, and Ineffective," [online], US News & World Report, http://www.usnews.com/debate-club/should-there-be-an-international-treaty-on-cyberwarfare/cyberwarfare-treaty-would-be-premature-unnecessary-and-ineffective.
Lewis, J. A. (2010) "Multilateral Agreements to Constrain Cyberconflict," [online], Arms Control Today 40, under "Obstacles to Agreement," www.armscontrol.org/act/2010_06/Lewis.



O'Gorman, G. and McDonald, G. (2012) "The Elderwood Project," [online], http://www.cs.cornell.edu/courses/CS6410/2012fa/slides/Symantec_ElderwoodProject_2012.pdf.
PBS (2012) "Thirty Years of America's Drug War: a Chronology," [online], http://www.pbs.org/wgbh/pages/frontline/shows/drugs/cron/.
Schwartz, M. (2012) "Google Aurora Attackers Still On Loose, Symantec Says," [online], Information Week, http://www.informationweek.com/security/attacks/google-aurora-attackers-still-on-loose-s/240006930.
Wehmhoener, K. A. (2010) "Social Norm or Social Harm: An Exploratory Study of Internet Vigilantism," [online], Iowa State University Graduate Theses and Dissertations, Paper 11572, http://lib.dr.iastate.edu/cgi/viewcontent.cgi?article=2561&context=etd.
Zetter, K. (2012) "Sleuths Trace New Zero-Day Attacks to Hackers Who Hit Google," [online], Wired, http://www.wired.com/threatlevel/2012/09/google-hacker-gang-returns/.



Evaluation of a Cryptographic Security Scheme for Air Traffic Control's Next Generation Upgrade
Cindy Finke, Jonathan Butts, Robert Mills and Michael Grimaila
Air Force Institute of Technology, Wright Patterson AFB, USA
Cindy.Finke@AFIT.edu
Jonathan.Butts@AFIT.edu
Robert.Mills@AFIT.edu
Michael.Grimaila@AFIT.edu
Abstract: The United States' national airspace system (NAS) is reliant on legacy systems and technology that stems from the 1970s. Indeed, the Air Traffic Control (ATC) radar surveillance system is antiquated, the controller communication protocol is unreliable, and the traffic management process is heavily dependent on human perception of conflict. As a result, the safety margin incorporated into the ATC separation standards artificially limits air traffic capacity. With the demand for air transportation increasing each year, the Federal Aviation Administration (FAA) has introduced the Next Generation (NextGen) upgrade to modernize ATC capabilities. Automatic Dependent Surveillance-Broadcast (ADS-B), a key component of the NextGen upgrade, enables an aircraft to generate and broadcast digital messages that contain the aircraft's Global Positioning System (GPS) coordinates. Ground stations and other aircraft within range will be able to interpret the sender's location, trajectory, and identification. The incorporation of ADS-B surveillance is intended to provide enhanced surveillance accuracy, efficiency, and safety. The open design of the system, however, introduces inherent security concerns. Currently, ADS-B messages can be received and decoded by any receiver within range that is tuned to the operating frequency. Additionally, recent research has demonstrated that aircraft positions and tracks are surprisingly simple to fabricate, making it possible for malicious spoofing to flood the ATC system. In each instance, the ability to intercept and fabricate authentic ADS-B messages was accomplished using inexpensive, store-bought equipment. Such key vulnerabilities could be mitigated using encryption, providing message confidentiality and aircraft authentication. However, implementation of a large-scale, distributed encryption scheme is nontrivial. This paper evaluates limitations of the legacy systems currently associated with the ATC and explores the feasibility of employing Format Preserving Encryption (FPE), specifically the FFX algorithm, within ADS-B communication. Based on the analysis, recommendations are provided that highlight areas that should be examined for inclusion in the ADS-B upgrade plan.
Keywords: format preserving encryption, FFX, ADS-B, FAA NextGen security

1. Introduction
Currently, the national airspace system (NAS) relies on infrastructure and technology that dates to the 1970s (McCallie, Butts, Mills, 2011). In an attempt to handle the expected increase in demand and improve flight safety, the FAA has proposed a fundamental overhaul of operations and infrastructure known as NextGen. A key component of the NextGen upgrade plan is the automatic dependent surveillance-broadcast (ADS-B) system for air traffic management. ADS-B generates a precise air picture for air traffic management by requiring aircraft to continually broadcast their position, identity, velocity and other information over unencrypted data links, as illustrated in Figure 1. Although the FAA claims unencrypted data links are necessary for operational requirements, there are concerns regarding the confidentiality, integrity and availability of transmitted aircraft data. This paper focuses on protecting the confidentiality of transmitted ADS-B messages. Specifically, the FFX encryption algorithm is examined within the context of ADS-B operating parameters. Challenges and recommendations are also provided for exploring potential solutions.

2. Air traffic control
Presently, air traffic controllers monitor radar displays and provide positive control over the movement of associated aircraft to ensure safe separation. If a controller detects a potential conflict, navigational direction is provided to the aircrew to restore safe separation. Note that directions and clearances are given via voice transmission over line-of-sight radio channels.
Although currently capable, the ATC system is not optimal. Three main factors bound the capacity of the current air traffic network. First, aircraft tracking and identification rely on outdated and unreliable surveillance radars; the primary and secondary radar systems have supported aircraft surveillance since 1959.



Second, ATC controllers depend on restricted communication channels to disseminate control messages. The voice communication protocol requires three transmissions (i.e., request, response, and acknowledgement) for every controller-pilot interaction, often saturating the frequency. Finally, human controllers are subject to natural limitations (e.g., fatigue and information overload) and are susceptible to errors.

Figure 1: ADS-B overview (Drouilhet, Knittel, Orlando 1996)
While ADS-B surveillance will alleviate the first two limiting factors of the ATC, it will do nothing to ameliorate the third. Once ADS-B is fully employed, each aircraft will use GPS and onboard flight systems, as detailed in Figure 2, to generate and broadcast pertinent position reports via datalink channels at a rate of two per second. This significantly enhances surveillance accuracy and nearly eliminates the need for voice transmissions. However, the centralized control policy will continue to place full responsibility with ATC. This controller-centric approach ensures one universal resolution to all conflicts, but places a large strain on the controllers. Ironically, given the controller's reliance on the generated visual display for situational awareness, the security vulnerabilities surrounding the ADS-B system may increase the impact of human factors on air traffic management.

Figure 2: System details (McCallie, Butts, Mills 2011)



The use of plain text (i.e., unencrypted) broadcasts makes it possible to replicate ADS-B messages. Recent independent research has demonstrated the ability to generate and broadcast false messages with relative ease using inexpensive equipment (Magazu, 2011) (Costin, 2012) (Thurber, 2012). Consider what may happen if a controller's display were to become besieged by fabricated aircraft: injecting ghost targets would at best create confusion and costly delays, but at worst could lead to aircraft mishaps (McCallie, Butts, Mills, 2011). In response to the recent ADS-B hacker demonstrations (e.g., Black Hat, DefCon 20), the FAA claims to have developed a comprehensive security action plan; however, the pertinent details are security sensitive and withheld from the public (Henn, 2012). Findings indicate that encrypting message transmissions would reduce the likelihood of false message broadcasts, ameliorate the resulting controller confusion, and assist in system-wide authentication. Additionally, encryption would ensure message confidentiality, preventing unauthorized aircraft surveillance.

3. Format preserving encryption (FPE)
The unique and predetermined format of ADS-B broadcast messages makes them a perfect test-bed for FPE. Such algorithms are designed to encrypt fixed-length messages that do not conform to standard block sizes (e.g., 64 or 128 bit blocks). FPE algorithms are best adapted from conventional symmetric block ciphers and are provably as secure as the underlying cipher (Black, Rogaway 2002).
The FFX algorithm was proposed to NIST in 2010 and is expected to be ratified. FFX stands for format-preserving, Feistel-based encryption, with the 'X' indicating multiple implementation variances (Bellare, Rogaway, Stegers, 2010). As demonstrated for binary strings in Table 1, the algorithm includes user-defined parameters: radix (i.e., character alphabet), message length, key, tweaks, the number of Feistel rounds to be completed, and the split number that delineates the Feistel pairs. This particular set of suggested parameters is known as A2 (Bellare, Rogaway, Stegers, 2010). Using the A2 parameters, the FFX algorithm could readily encrypt ADS-B messages.
Table 1: FFX-A2 parameters

Parameter | Value | Comment
Radix | 2 | Alphabet is 0, 1
Lengths | [minlen = 8 … maxlen = 128] | Permissible message lengths
Keys | {0,1}^128 | 128-bit AES key
Tweaks | BYTE^≤M, M = 2^64 − 1 | Tweaks are arbitrary byte strings
Addition | 0 | Character-wise addition (XOR)
Method | 2 | Alternating Feistel
split(n) | ⌊n/2⌋ | Maximally balanced Feistel
rnds(n) | 12 | From entropy-based heuristic
F | AES CBC-MAC | Defined in Figure 4

Figure 3 portrays three rounds of the FFX algorithm using the method = 2 alternating Feistel behaviour. During each round, part of the input, determined by split(n), is altered by the function F and then added to the remaining part. This process is repeated a total of rnds(n) times. The most influential parameter in this algorithm is the function F, which invokes the user-selected symmetric block cipher in order to produce a replicable hash-like value. Bellare et al. (2010) recommend that F employ the CBC-MAC or CMAC mode of AES, allowing the algorithm to boast the proven security of AES (Black, Rogaway, 2002). Figure 4 provides the pseudo-code for the function F. A simplified sketch of the round structure follows.
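As a rough structural illustration (not the FFX specification, and not the exact round function of Figure 4, which builds F as a two-block CBC-MAC over parameter blocks P and Q), the Python sketch below, assuming the PyCryptodome package, runs an alternating Feistel over a bit string with a CBC-MAC-based round function; all helper names are ours:

```python
from Crypto.Cipher import AES  # PyCryptodome

def _pack(bits):
    """Pack a list of 0/1 ints into bytes (zero-padded at the end)."""
    out = bytearray((len(bits) + 7) // 8)
    for i, b in enumerate(bits):
        out[i // 8] |= b << (7 - i % 8)
    return bytes(out)

def _unpack(data, n):
    """Unpack the first n bits of a byte string into a list of 0/1 ints."""
    return [(data[i // 8] >> (7 - i % 8)) & 1 for i in range(n)]

def cbc_mac(key, data):
    """AES CBC-MAC with a zero IV over zero-padded data (the role of F)."""
    data += bytes((-len(data)) % 16)
    return AES.new(key, AES.MODE_CBC, iv=bytes(16)).encrypt(data)[-16:]

def feistel_encrypt(key, tweak, bits, rounds=12):
    """Alternating Feistel (FFX method 2) over a bit string with a
    maximally balanced split; a structural sketch, not the FFX spec."""
    split = len(bits) // 2                       # split(n) = floor(n/2)
    A, B = list(bits[:split]), list(bits[split:])
    for r in range(rounds):
        digest = cbc_mac(key, tweak + bytes([r]) + _pack(B))
        mask = _unpack(digest, len(A))
        A = [a ^ m for a, m in zip(A, mask)]     # character-wise XOR
        A, B = B, A                              # alternate the halves
    return A + B
```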

4. FFX applied to ADS-B
While ADS-B messages exist in many formats, the one used to transmit an aircraft's airborne position via the 1090 MHz channel includes 112 bits of data. As Figure 5 illustrates, these bits are divided into five fields: Downlink Format (DF), Capabilities (CA), Address Announced (AA), Message Extended Squitter (ME), and Parity/Interrogator Identity (PI) (Funkwerk Avionics, 2010). The DF field describes the message format and is coded to '10001' in binary. These bits must remain unencrypted so each message may be identified as a position report. The other fields, consisting of the remaining 107 bits, may be encrypted. However, the CA field, used to



describe the aircraft's transponder capabilities, could be left unencrypted for the ease of providing byte-aligned input. The suggested Feistel split is outlined in Figure 5, and a sketch of the field split appears below.
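To make the field boundaries concrete, a small illustrative sketch (reusing the _unpack helper from the previous sketch; the field widths follow Figure 5):

```python
ADSB_FIELDS = (("DF", 5), ("CA", 3), ("AA", 24), ("ME", 56), ("PI", 24))

def split_adsb(frame):
    """Split a 112-bit extended squitter (14 bytes) into its fields."""
    bits = _unpack(frame, 112)
    fields, pos = {}, 0
    for name, width in ADSB_FIELDS:
        fields[name] = bits[pos:pos + width]
        pos += width
    return fields

# DF stays in the clear so the message remains recognisable as a
# position report; the remaining 107 bits (or 104, if CA is also left
# clear for byte alignment) feed the Feistel cipher sketched above.
```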

Figure 3: FFX functionality (Bellare, Rogaway, Stegers, 2010)

Figure 4: Function F pseudo‐code

Figure 5: ADS‐B message format (adapted from (McCallie, Butts, Mills, 2011))



As described in Figure 4, the function requires 24 rounds of AES encryption for each message broadcast. However, this may be reduced, as Bellare et al. (2010) suggest. Once the A2 and ADS-B-specific parameter values are applied to P within the function, the sole variable depends on the tweak: if the size of the tweak parameter is static, P is constant, allowing the value of AES_K(P XOR 0) to be pre-computed and stored as P'. This makes AES_K(Q XOR P') the only calculation required per round, ultimately halving the number of AES invocations per message (a minimal sketch of this optimization follows). According to Magazu (2011), the aircraft depicted in Figure 6 would generate the ADS-B message pictured at the top of Figure 7 (represented in hexadecimal format). Using the FFX-A2 encryption described above with a given key and a one-byte tweak, the secure message would be transmitted as depicted at the bottom of Figure 7. Future tests will be conducted to confirm that the added encryption computation does not hinder the required transmission rate for operational use.
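A minimal sketch of the pre-computation (PyCryptodome again; P and Q denote the two 16-byte CBC-MAC input blocks of Figure 4, and the class name is ours). It relies on the identity that a zero-IV CBC-MAC over P followed by Q equals AES_K(Q XOR AES_K(P)):

```python
from Crypto.Cipher import AES

def xor16(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

class PrecomputedF:
    """With a static tweak, P is constant, so AES_K(P XOR 0) can be
    computed once as P'; each round then costs one AES call,
    AES_K(Q XOR P'), instead of a two-block CBC-MAC."""
    def __init__(self, key, P):
        self._aes = AES.new(key, AES.MODE_ECB)
        self.P_prime = self._aes.encrypt(P)   # computed once, stored

    def __call__(self, Q):
        return self._aes.encrypt(xor16(Q, self.P_prime))
```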

Figure 6: Example aircraft (Magazu, 2011)

Figure 7: Example message




5. Discussion
This paper provided initial discussion of ongoing research efforts relating to encryption techniques for ensuring the confidentiality of ADS-B messages. Although possible, simply encrypting ADS-B messages is nontrivial. Key management within a symmetric cryptosystem is a difficult problem; indeed, this is a major hurdle that must be overcome before a symmetric cipher can be considered a feasible solution to the surrounding ADS-B security concerns. Future research efforts involve implementing a secure controller-pilot datalink communication (CPDLC) link to enable out-of-band key transmissions. Similarly, the advantages of incorporating dynamic tweaks are being evaluated. ADS-B surveillance will be fully operational within the NAS by 2020, with a similar world-wide standard expected to follow shortly; FFX encryption could bolster the system's security. Indeed, key management and the highly distributed and cooperative nature of the NAS present challenging problems for implementing encryption schemes without impacting safety. Still, the FFX algorithm provides a promising platform for such encryption.

References
Bellare, M., Rogaway, P. and Stegers, T. (2010) The FFX mode of operation for format-preserving encryption.
Black, J. and Rogaway, P. (2002) Ciphers with arbitrary finite domains. In: B. Preneel, ed. Topics in Cryptology - CT-RSA 2002: Springer Berlin, pp. 185-203.
Costin, A. (2012) "Ghost in air(traffic): On insecurity of ADS-B protocol and practical attacks on ADS-B devices", paper presented at Black Hat conference, Las Vegas, NV, 24-26 July.
Drouilhet, P., Knittel, G. and Orlando, V. (1996) Automatic Dependent Surveillance Air Navigation System. United States, Patent No. 5,570,095.
Funkwerk Avionics (2010) RTH60 ADS-B/Mode S Receiver Operation and Installation (Document-No. 03.231.010.71e). Waal, Germany.
Henn, S. (2012) "Could the new air traffic control system be hacked", [online], National Public Radio, http://m.npr.org/news/Technology/158758161?page=0.
Magazu, D. (2011) "Exploiting the automatic dependent surveillance-broadcast system via false target injection," Master's thesis, Dept. of Electrical and Computer Engineering, Air Force Institute of Technology.
McCallie, D., Butts, J. and Mills, R. (2011) Security analysis of the ADS-B implementation in the Next Generation air transportation system. International Journal of Critical Infrastructure Protection, Vol. 4, No. 2, pp. 78-87.
Thurber, M. (2012) "Hackers, FAA disagree over ADS-B vulnerabilities", [online], Aviation International News, http://www.ainonline.com/aviation-news/ainalerts/2012-08-21/hackers-faa-disagree-over-ads-b-vulnerability.



Attack Mitigation Through Memory Encryption of Security-Enhanced Commodity Processors
Michael Henson and Stephen Taylor
Thayer School of Engineering at Dartmouth College, USA
Michael.Henson@dartmouth.edu
Stephen.Taylor@dartmouth.edu
Abstract: Modern computer systems exhibit a major weakness in that code and data are stored in the clear, unencrypted, within random access memory. As a result, numerous vulnerabilities exist at every level of the software stack. These vulnerabilities have been exploited to gather confidential information and inject malicious code into device drivers, operating system kernels, and user processes. Encrypting memory would mitigate the vulnerabilities, but the CPU-memory bottleneck presents a significant challenge to designing a usable system with acceptable overheads. Recently, security hardware, including encryption engines, has been integrated on-chip within commodity processors such as the Intel i7, AMD Bulldozer, and multiple ARM variants. This paper describes ongoing work to develop a clean-slate operating system, Bear, that leverages on-chip encryption to provide confidentiality of code and data. The system currently operates on Intel X86-based multi-core blade servers and ARM M3, A8, and A9 processors. Work on memory encryption is focused on the Freescale i.MX53 using its integrated encryption engine.
Keywords: memory encryption, mobile platform security, security-enhanced commodity processors, secure microkernel

1. Background
This material is based on research sponsored by the Defense Advanced Research Projects Agency (DARPA) under agreement number FA8750-09-1-0213.
Full disk encryption (FDE) is a relatively recent innovation in commodity computer systems intended to provide confidentiality of everything stored on disk (Brink 2009). Unfortunately, FDE-protected systems exhibit a major weakness since code and data are stored in the clear, unencrypted, within memory, as shown in Figure 1. This weakness has been exploited to gather confidential information, including encryption keys, passwords, and passphrases, diminishing the value of FDE (Halderman et al. 2008, Boileau 2006, Steil 2005, Henson and Taylor 2012). Unfortunately, memory vulnerabilities extend to every level of the software stack, and the opportunities for exploitation extend beyond physical attack to include software attacks over the Internet. Consequently, various techniques have evolved that allow malicious code to be injected into system flash, device drivers, operating system kernels, and user processes.

Figure 1: Vulnerable code and data in memory
Although the concept of memory encryption has been actively researched for over three decades, it has yet to be used at the core of operating system designs to provide confidentiality of code and data. Recently, security



hardware, including encryption engines, has been integrated within commodity processors such as the Intel i7, AMD Bulldozer, and multiple ARM variants; however, systems developers have yet to embrace these specialized, often vendor-specific, features. Little practical experimentation has been conducted, and the improvements in security and performance have yet to be quantified (Henson and Taylor 2012).

2. Memory encryption approach
Our approach is to produce a clean-slate operating system design, Bear, that leverages security-enhanced commodity processors to ensure that code and data never appear in the clear outside the processor chip boundary. This approach confines the surface available to physical attack to the processor itself, presenting a barrier that, in most cases, cannot be defeated without sophisticated equipment and/or destruction of the device (Anderson and Kuhn 1996, Kocher et al. 1999, Suh et al. 2007). This approach to security is intended to increase the attacker workload associated with physical attacks, crafting exploits, and stealing sensitive information, rather than to detect attacks. Although primarily concerned with maintaining confidentiality of data and code during execution, encryption also hampers code injection, which is generally assumed to require memory authentication: an adversary lacking the encryption key would be unable to successfully patch an encrypted binary, since decryption would produce corrupt code that is unlikely to execute correctly (Barrantes et al. 2003).
The Bear system currently operates on Intel X86-based multi-core blade servers and ARM M3, A8, and A9 processors. The exploration of memory encryption is focused on the Freescale i.MX53 using its integrated encryption engine. This device is a variant of the ARM Cortex A8 processors common to many smart phones and tablets, including Apple's iPhone 3GS and 4, the first-generation iPad, the third- and fourth-generation iPod touch, and the Samsung Galaxy Tablet. ARM processors typically include internal RAM; the Freescale device includes 128 KB of RAM and internal Flash. Two initial approaches to memory encryption are being explored: one using internal RAM as a cache and the other as a pre-decryption buffer with the cache enabled.
An initial proof-of-concept, demonstrating use of the Freescale encryption engine, has already been developed. This prototype loads the encrypted binary image for user processes from external memory into internal memory. It then decrypts the code inside the processor boundary and schedules the process for execution. All code and data from this point forward remain in internal memory. This technique, which we refer to as static encrypted processes, performs decryption only once and is relevant to embedded systems where processes fit entirely within internal memory. Other than the one-time initial decryption cost, there is little evidence of overhead using this method. Since embedded processors are continually increasing on-chip memory, this technique represents an increasingly practical, low-overhead approach to memory encryption.
A more general case, dynamic encrypted processes, occurs where there is sufficient memory pressure to force processes back to external memory during execution. An initial prototype that allows swapping of encrypted processes to external RAM was recently completed and is described in Figure 2. In this case, both code and data may eventually be swapped out. Data may reside on the stack (registers, local variables in functions), the heap (dynamically allocated memory), and in global/static variables. The current prototype encrypts process stacks, keeping global data and code resident in internal memory. Process code and data are stored in external memory in encrypted form and brought into internal memory, decrypted and executed on demand. Processes are re-encrypted before being sent back to external memory.
In this initial proof-of-concept, the cache is disabled, as shown in Figure 2. The encryption-decryption unit (EDU) is controlled via a descriptor chain consisting of six 32-bit words. Each bit or group of bits (generally 2-3) is carefully chosen to select the hardware module (e.g., encryption, authentication, random number generation), algorithm (e.g., RSA, DES), mode of operation (e.g., electronic codebook, cipher block chaining), and other details. The prototype uses a security API, which we have developed to hide proprietary Freescale encryption details and which is responsible for building the appropriate descriptor chain. For example, the following function call:

EDU('E', 0x000001A0, 0xF8000000, 0x70000000, 0xF801FFFD);

causes the encryption unit to encrypt (E=encrypt, D=decrypt) a process block of 416 bytes – the current size of a process descriptor and stack – from internal memory at location 0xF8000000, placing the result at external memory location 0x70000000. Initially, only one key is used, which is pointed to by the final parameter. Padding of the basic process block size was required to satisfy the AES requirement for 128-bit blocks.
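Concretely, a wrapper of this shape could build the chain as sketched below. The six-word layout follows the description above, but every bit assignment, the field order, and the MMIO base address are invented stand-ins for the proprietary Freescale details the API deliberately hides.

#include <stdint.h>

/* Hypothetical field layout for the six 32-bit descriptor words. */
typedef struct {
    uint32_t control;  /* module select, algorithm (AES), mode (ECB), E/D */
    uint32_t length;   /* block length in bytes, padded to 16 */
    uint32_t src;      /* source address */
    uint32_t dst;      /* destination address */
    uint32_t key;      /* pointer to key material */
    uint32_t status;   /* completion/error flags written by the engine */
} edu_chain_t;

static volatile edu_chain_t *const chain =
    (volatile edu_chain_t *)0x63F18000;   /* assumed MMIO base, illustrative */

void EDU(char op, uint32_t len, uint32_t src, uint32_t dst, uint32_t key)
{
    chain->control = (op == 'E') ? 0x1u : 0x2u;  /* direction bits assumed */
    chain->length  = (len + 15u) & ~15u;         /* pad to 128-bit AES blocks */
    chain->src     = src;
    chain->dst     = dst;
    chain->key     = key;
    while (chain->status == 0u)
        ;                                        /* poll until the engine is done */
}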

Figure 2: Dynamic encrypted processes

A central operating system concern is the saving of registers to external memory during context switching. Since Bear is a clean-slate design, we are able to address this issue directly by modifying the process-switching routine, swprocs(). The modifications add a simple form of virtual memory management, in which external memory maps to one internal block, with the addition of the encryption and decryption steps. A process's stack is saved and restored in internal memory only. Currently, the prototype uses AES encryption in electronic codebook (ECB) mode. We may choose another mode in the future to counter some of the statistical weaknesses apparent in ECB. For example, when the same 16 bytes of plaintext appear at the same location, they produce the same (although encrypted) sequence externally. Such sequences do appear frequently in common binary code, as shown in Figure 3. One such sequence does not offer the adversary much information, but multiple sequences, combined with knowledge of the predefined structure of processes, may. To prevent this, seeding/tweaking, or the methods of memory permutation that have been explored in the literature, could be used.
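A minimal sketch of the modified switch follows, assuming the EDU() wrapper above and today's single system-wide key; proc_t, dispatch(), and the constants are hypothetical, since the real Bear structures are not public.

#include <stdint.h>

void EDU(char op, uint32_t len, uint32_t src, uint32_t dst, uint32_t key);

/* Hypothetical process record. */
typedef struct proc {
    uint32_t ext_addr;   /* encrypted block's home in external RAM */
    uint32_t key;        /* key pointer (currently one key system-wide) */
} proc_t;

extern void dispatch(proc_t *p);   /* architecture-specific resume, assumed */

#define INTERNAL_BUF 0xF8000000u   /* the single internal working block */
#define PROC_BLOCK   416u          /* descriptor + stack, padded to 16 B */

/* Modified context switch: external memory maps onto one internal block,
 * with encryption added on the way out and decryption on the way in. */
void swprocs(proc_t *out, proc_t *in)
{
    EDU('E', PROC_BLOCK, INTERNAL_BUF, out->ext_addr, out->key); /* seal outgoing stack */
    EDU('D', PROC_BLOCK, in->ext_addr, INTERNAL_BUF, in->key);   /* open incoming stack */
    dispatch(in);                  /* registers/stack now live on-chip only */
}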

3. Future work

With the cache disabled, the prototype forgoes mechanisms developed over the past several decades to improve performance based on locality of reference, such as caching and out-of-order execution. However, this initial proof-of-concept is straightforward to implement and understand (one internal buffer), and it usefully provides an upper bound on the overhead required to provide memory encryption. The next step in this work is to enable dynamic protection of process code and global data. Once complete, we plan to conduct a thorough performance assessment using the AIM9 benchmark suite to quantify the impact on performance and code size. These results will be compared with simulation results from the memory encryption literature and used as a yardstick against which we can measure future improvements and optimizations aimed at incorporating the cache. The A8 architecture includes a built-in preload engine that can be used to move data to and from the L2 cache under programmer control. We plan to utilize this engine to load the cache with decrypted data and instructions, with internal memory acting as a pre-decryption buffer.

We currently use only one encryption key. However, we anticipate assigning each process its own key, allowing mutually distrusting processes, such as those downloaded from an application store, to safely execute on the same architecture. Additionally, breaking up the memory space into sections with multiple keys reduces the overall material available to a single brute-force attack. Encryption of memory can be viewed as a form of synthetic diversity. Future goals of this work include increasing that diversity by re-encrypting processes at certain intervals. This technique invalidates prior surveillance and introduces non-determinism, increasing attacker workload and rendering brute-force attacks on memory ineffective. Although difficult, such brute-force attacks have been demonstrated on an 8-bit memory-encrypting processor used for ATMs (Kuhn 1998).
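The interval re-keying described above as a future goal could reuse the same single-buffer path. The sketch below again assumes the hypothetical EDU(), process record, and constants from the earlier fragments, plus an invented keygen() source of fresh key material.

#include <stdint.h>

void EDU(char op, uint32_t len, uint32_t src, uint32_t dst, uint32_t key);
extern uint32_t keygen(void);   /* fresh per-process key material, assumed */

typedef struct proc { uint32_t ext_addr; uint32_t key; } proc_t;

#define INTERNAL_BUF 0xF8000000u
#define PROC_BLOCK   416u

/* Periodic re-encryption: decrypt with the old key inside the chip and
 * re-seal with a new one, so the externally visible ciphertext changes
 * even when the plaintext does not. */
void rekey_process(proc_t *p)
{
    uint32_t fresh = keygen();
    EDU('D', PROC_BLOCK, p->ext_addr, INTERNAL_BUF, p->key);
    EDU('E', PROC_BLOCK, INTERNAL_BUF, p->ext_addr, fresh);
    p->key = fresh;   /* prior surveillance of external RAM is now stale */
}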

Figure 3: Redundancies in 128‐bit sections of program binary code

4. Conclusion

This paper has described our ongoing research in memory encryption, along with some background on the associated research challenges. While the concept of memory encryption has existed for over three decades, there are still no general-purpose, commercial-off-the-shelf solutions integrated with secure operating systems. However, there is clearly a growing need for privacy and intellectual property protection today, as evidenced by the increasing use of full disk encryption. Unfortunately, while full disk encryption seems to be the state of the art, it is clearly insufficient to protect mobile platforms.

References

Anderson, R., and Kuhn, M. (1996) Tamper resistance – a cautionary note. In Proceedings of the Second USENIX Workshop on Electronic Commerce. 2, pp 1-11.
Barrantes, E., Ackley, D., Forrest, S., Palmer, T., Stefanovic, D., and Zovi, D. (2003) Randomized Instruction Set Emulation to Disrupt Binary Code Injection Attacks. In Proceedings of the 10th ACM Conference on Computer and Communications Security (CCS '03). October, pp 281-289.
Boileau, A. (2006) Hit by a Bus: Physical Access Attacks with Firewire. Presented at Ruxcon.
Brink, D. (2009) Full-disk encryption on the rise. Aberdeen Research Group Report. September.
Halderman, J., Schoen, S., Heninger, N., Clarkson, W., Paul, W., Calandrino, J., Feldman, A., Appelbaum, J., and Felten, E. (2008) Lest we remember: cold boot attacks on encryption keys. In Proceedings of the USENIX Security Symposium. February.
Henson, M. and Taylor, S. (2012) Memory Encryption: a Survey of Existing Techniques. Submitted to ACM Computing Surveys, July.
Kocher, P., Jaffe, J., and Jun, B. (1999) Differential power analysis. In Proceedings of the 19th Annual International Cryptology Conference (CRYPTO). pp 388-397.
Kuhn, M. (1998) Cipher instruction search attack on the bus-encryption security microcontroller DS5002FP. In IEEE Transactions on Computers. 47, October, pp 1153-1157.
Steil, M. (2005) 17 mistakes Microsoft made in the Xbox security system. In Proceedings of the 22nd Chaos Communication Congress.
Suh, G., O'Donnell, C., and Devadas, S. (2007) Aegis: a single-chip secure processor. In IEEE Design and Test of Computers. 24(6), pp 570-580.
Wang, Z., and Stavrou, A. (2010) Exploiting smart-phone USB connectivity for fun and profit. In Proceedings of the Annual Computer Security and Applications Conference (ACSAC).



Action and Reaction: Strategies and Tactics of the Current Political Cyberwarfare in Russia

Volodymyr Lysenko and Barbara Endicott-Popovsky
University of Washington, Seattle, USA
vlysenko@uw.edu
endicott@uw.edu

Abstract: In this work in progress we investigate what tactics and strategies are employed by the main opposing stakeholders in the current Russian politically-motivated local cyberwar. In particular, we found further evidence that can indicate active Kremlin involvement in cyberattacks against its political opponents. Our results suggest that modern cyber-arms can be an effective means of conducting contentious politics in (semi)authoritarian countries. We show that technically well-prepared protesters can successfully withstand even highly sophisticated and powerful cyberattacks, and are able to retaliate accordingly.

Keywords: hacktivism, cyberwar, Russia, “patriotic” hackers, anonymous, e-democracy

1. Introduction

Recent national elections in Russia – parliamentary in December 2011 and presidential in March 2012 – demonstrated once again that the incumbent regime can control, using various manipulative techniques, the legal means of transition of power. Under these circumstances, the only peaceful way to true democratic change is massive non-violent protest action, like the “color revolutions” which already occurred in such post-Soviet countries as Georgia and Ukraine. ICTs, especially Internet-based ones, play an important role in organizing, mobilizing and operating such protests. Accordingly, Russian pro-regime forces have recently made significant efforts to disrupt the respective communications by attacking oppositional online tools. In turn, protesters have responded by hacking and DDoSing pro-governmental Internet-based resources. This research is devoted to the investigation of the main strategies and tactics used by the two opposing camps in this cyberwar during 2011-2012. In this work in progress we plan to examine the following in particular: 1) break-ins into the email of the oppositional leader Alexei Navalny in summer 2011 and summer 2012; 2) the cyber-protestors' counter-attack in winter 2011/2012 (break-ins into the email accounts of various pro-Kremlin activists and organizations); 3) major DDoS attacks against Russia's leading oppositional online resources in December 2011 and again in March, May and June 2012; 4) the May 2012 DDoSing of the official websites of the Russian President and Prime Minister by hacktivists; 5) the August 2012 break-in and defacing of the website of the Moscow district court where members of the oppositional punk band “Pussy Riot” were sentenced; and 6) cyberattacks against the oppositional e-democracy resource http://www.cvk2012.org/ in the second half of October 2012. Our initial findings (see below) already reveal that hackers play an important role on both sides of the current protest activity in Russia: “patriotic” ones cooperate with the regime, while Russian Anonymous sides with the protesters.

2. Methodology

In this study we explore primary sources in the original Russian language to ensure we ‘hear’ the unvarnished messages coming from the main participants of this cyberwar. Our methodology consists of building logically sound case narratives of the cyberwar events using the process-tracing approach, and then categorizing findings into concepts that form “building blocks” for the inductive construction of the resulting theoretical framework describing the dynamic system of modern political cyberconflict in a semi-authoritarian state. Triangulation of the data from various sources representing all sides involved in the confrontation ensures additional reliability of our findings.

3. Initial findings

After May 2012, when the authorities started a massive offensive against the opposition in Russia, it became obvious that some improvements were necessary in the organization of the protesters, who represent a whole spectrum of political views and whose main uniting theme is that Putin and his regime must go. One of the solutions proposed was the election of a coordinating body – a Coordination Council – which would
unite representatives from all of the main oppositional forces – left, right, center, and everything in between. To take into account the views of the maximum number of activists and the necessity of operating within a very limited budget, and considering the high Internet penetration rate in Russia in general and within the opposition specifically, it was decided to conduct this election completely online. The opposition already had e-democracy experience with the platform http://democratia2.ru/, which was created by activist Leonid Volkov and his team. He also took charge of development of the new election website (Albats, 2012a). The online election was scheduled for October 20-21, 2012.

The Russian authorities were greatly aggravated by the opposition's attempt to improve its organizational capacities. Many provocations were attempted to disrupt both the elections and the future efficiency of the Coordination Council. Specifically, such an infamous far-right extremist figure as Maxim Martsinkevich (nicknamed Tesak) attempted to infiltrate the election process (Kravtsova, 2012). Tesak had already surfaced in December 2011, when the opposition conducted an online vote on who should speak at the mass protest rally on December 24. Then, as a result of vote falsification during just one night, Tesak emerged as one of the poll's leaders (Murtazin, 2011). Immediately the pro-Kremlin (Atwal & Bacon, 2012) movement Nashi started an anti-rally information campaign claiming that if such people as Tesak would speak at the meeting, there was no reason for any rational human being to attend such events. Therefore suppositions emerged that there was a direct connection between Nashi (read: the Kremlin) and this Tesak-related vote falsification, aimed at decreasing the number of people who would attend the rally (Murtazin, 2011).

In September 2012, further evidence of such a connection was obtained, made possible after the pro-oppositional Russian chapter of Anonymous broke into the mailboxes of different Nashi activists several months earlier. The first related tweet by Russian Anonymous appeared on January 29, 2012, at https://twitter.com/Op_Russia/status/163882706994864128. Email messages of various pro-Kremlin groups and activists, broken into by Russian Anonymous, were placed by the latter into a searchable database at http://slivmail.com/. There, of specific interest, it can be found that the email address 8584398@gmail.com belongs to one of Nashi's activists. Further, this same address was used to register Tesak for the Coordination Council's election in autumn 2012, while the related fee was transferred directly from the activist's bank account. At the same time, an information campaign around Tesak's run for the Council was conducted by various pro-Kremlin media, while his former colleagues, who now take part in the opposition's political right wing, effectively claimed that Tesak was a provocateur who works for the authorities (Volkov, 2012a). Eventually, he was banned from the election. But his name was also used in an active cyberwarfare campaign against the Coordination Council's election.

The election's portal http://www.cvk2012.org/ was created over the course of three months by a team of six developers led by Leonid Volkov (Albats, 2012b).
Cyberattacks against the portal started one week before the election, using JavaScript LOIC-type DDoS attack software from the website hellotesak.narod.ru, and reached a dangerous magnitude after three days of relentless attack, on Thursday, October 18th (Highload Lab, 2012). Within several hours the attack was neutralized by filtering out high-loading attacking queries to the election site's database. The next effective attack started on Saturday, October 20 – this time using a special LOIC-type software robot specifically tailored to bring down the election website (Albats 2012b, Highload Lab 2012). The attack's source was the Armenian website cvkhello.do.am, which was registered from an IP address belonging to a major Ukrainian provider (Highload Lab, 2012). (Here we may be seeing an example of the possible coordinated work of an “Authoritarian International” of state hackers and/or the secret services behind them. For other examples see, e.g., Lysenko & Desouza (2012a).) It took 20 hours of trial and error by Volkov and his colleagues to mitigate this attack through another type of intelligent filter (Albats 2012b, Highload Lab 2012). During that entire time the online election process was rendered impossible. As a result, the election was prolonged for one more day – until Monday, October 22.

After the tailored robot attack was effectively neutralized early on the morning of Sunday, October 21, the state attackers finally resorted to a full-blown botnet DDoS attack using more than 130,000 bot computers with Asian and European IP addresses. The attack was successfully mitigated by filtering out suspicious IP addresses (Highload Lab, 2012). During the attack's peaks, cloud technology helped Volkov and his team to maintain registration and verification of voters by using a computerized network of approximately 60 distributed local election commissions. Later, verified voters could vote after attacks on the main election system were alleviated. On Monday, October 22nd – the last day of voting – the attacks continued, but never reached sufficient magnitude to threaten the voting process, and the online voting system worked practically without interruption.

Previously, on October 5, the server of the TV channel Dozhd had also come under attack during a live Internet broadcast of the Coordination Council candidates' debate. That interruption lasted approximately one hour (NEWSru, 2012). Of interest, on Saturday, October 20, during the most serious DDoS attack on the online election system, the infamous commissar of Nashi, Konstantin Goloskokov, who earlier admitted his active participation in the cyberattacks against Estonia in 2007 and Georgia in 2008 (see Lysenko & Endicott-Popovsky, 2012), arrived “for inspection” at Volkov's election committee headquarters, but was “kicked out of the house” (Volkov, 2012b).

Sophisticated attacks, very similar to the ones suffered by the Russian Coordination Council election portal, were also conducted a week later against Ukrainian oppositional and observers' websites, during the parliamentary election in that country (Ukrainska Pravda, 2012a). In particular, specially customized automated script queries, coming from abroad, were used against the online databases of the observers' network Opora (Ukrainska Pravda, 2012b). As a result, many oppositional informational resources were inaccessible, while the observers experienced difficulties with monitoring and reporting on the election process. Had the successful experience of the Russian opposition, analysed above, been transferred to their Ukrainian colleagues in time, such harm could probably have been avoided. This is one more piece of evidence of the necessity of well-timed transfer of knowledge about effectively withstanding politically motivated cyberattacks against pro-democratic entities – a transfer that, for example, was effectively realized in summer 2008 between Estonia and Georgia (see Lysenko & Endicott-Popovsky, 2012).

4. Conclusion

Eventually, the October 2012 election of the Russian opposition's Coordination Council ended successfully. Almost 82,000 verified activists voted, which was within the range (from 50,000 to 100,000 (Albats, 2012a)) predicted by oppositional leaders in early September. All of these verified people, who are now in the oppositional database, can later be engaged readily in further oppositional activities – similarly to what was observed in Moldova in 2009 (Lysenko & Desouza, 2012a). Members of the Coordination Council already plan to grow the database with other recruited activists (Parkhomenko, 2012). Thus, from the example of this online election, we see that e-democracy can be an effective way to conduct strategic oppositional activities under such a harsh political regime as we observe today in Russia. Moreover, this case proved that while a local politically-motivated cyberwar is currently occurring in Russia, with the direct involvement of pro-Kremlin organizations that employ various sophisticated tactics of modern cyberwarfare, protesters are eventually able to withstand such attacks successfully and to react to them effectively, conducting their own tactical counter-attacks, as we saw in the example of the local Anonymous chapter. We conclude that the resulting cyber-arms race can supplement the model of ICT-facilitated modern contentious political process outlined in Lysenko & Desouza (2012b).

5. Future work

The above initial findings of this work in progress will eventually be complemented with additional cases of contemporary political cyberwarfare in Russia, and then generalized to other non-democratic states of the former Soviet Union, as well as to other authoritarian countries globally. It is anticipated that the resulting theoretical framework will be useful both to scholars researching the relatively new phenomenon of game-changing cyber events and to policymakers dealing with these issues. The resulting practical recommendations could be of use, as well, to pro-democratic activists struggling against (semi)authoritarian regimes worldwide.

References

Albats, Y. (2012a). Opposition: what this autumn prepares for us. [In Russian] Radio Moscow Echo, September 10, 2012. Retrieved from: http://echo.msk.ru/programs/albac/928540-echo/#element-text
Albats, Y. (2012b). Electronic democracy: pluses and minuses. [In Russian] Radio Moscow Echo, October 22, 2012. Retrieved from: http://echo.msk.ru/programs/albac/942943-echo/
Atwal, M., & Bacon, E. (2012). The youth movement Nashi: contentious politics, civil society, and party politics. East European Politics, 28(3), 256-266.
Highload Lab. (2012). Clouds against Tesak, or Chronicle of DDoS-attacks on cvk2012.org. [In Russian] Highload Lab blog, October 22, 2012. Retrieved from: http://habrahabr.ru/company/highloadlab/blog/155667/
Kravtsova, Y. (2012). Cyberattacks disrupt opposition's election. The Moscow Times, October 22, 2012. Retrieved from: http://www.themoscowtimes.com/news/article/cyberattacks-disrupt-oppositions-election/470119.html
Lysenko, V., & Desouza, K. (2012a). Moldova's Internet Revolution: analyzing the role of technologies in various phases of the confrontation. Technological Forecasting & Social Change, vol. 79, issue 2 (February 2012), pp. 341-361.
Lysenko, V., & Desouza, K. (2012b). Charting the coevolution of cyberprotest and counteraction: The case of former Soviet Union states from 1997 to 2011. Convergence: The International Journal of Research into New Media Technologies, 1354856512459716, first published online before print on October 24, 2012, doi:10.1177/1354856512459716
Lysenko, V., & Endicott-Popovsky, B. (2012). Hackers at the state service: Cyberwars against Estonia and Georgia. Paper presented at the 7th International Conference on Information Warfare and Security ICIW-2012, March 2012, Seattle, WA.
Murtazin, I. (2011). Tesak and Rogozin go to the aid of Putin. [In Russian] Novaya Gazeta, December 25, 2011. Retrieved from: http://www.novayagazeta.ru/comments/50283.html?print=1
NEWSru. (2012). TV channel Dozhd interrupted live Internet-broadcasting of the opposition debates: DDoS-attack. [In Russian] NEWSru, October 5, 2012. Retrieved from: http://www.newsru.com/russia/05oct2012/dozhd_sboi_print.html
Parkhomenko, S. (2012). The essence of the events. [In Russian] Radio Moscow Echo, October 26, 2012. Retrieved from: http://echo.msk.ru/programs/sut/944384-echo/#element-text
Volkov, L. (2012a). On the special case of Martsinkevich. [In Russian] Leonid Volkov's blog, September 10, 2012. Retrieved from: http://leonwolf.livejournal.com/427078.html
Volkov, L. (2012b). What is going on here. [In Russian] Leonid Volkov's blog, October 20, 2012. Retrieved from: http://leonwolf.livejournal.com/446122.html
Ukrainska Pravda. (2012a). Oppositional websites are “knocked down”. [In Ukrainian] Ukrainska Pravda, October 28, 2012. Retrieved from: http://www.pravda.com.ua/news/2012/10/28/6975751/view_print/
Ukrainska Pravda. (2012b). Opora could not count because of DDoS attacks. [In Ukrainian] Ukrainska Pravda, October 29, 2012. Retrieved from: http://www.pravda.com.ua/news/2012/10/29/6976022/view_print/



Non Academic Papers





The Adam and Eve Paradox

Michael Kraft, David Rohret, Michael Vella and Jonathan Holston
Computer Sciences Corporation, Inc., San Antonio, USA
mkraft5@csc.com
drohret@ieee.org
mvella3@csc.com
jholston@csc.com

Abstract: Individuals working in the Information Technology (IT) industry are familiar with Moore's Law and its guiding principle: exponential improvement every 18-24 months where computer technology is concerned (Brock, 2006). This principle has proven generally accurate and is routinely used for long-term planning by the computer industry, which has led to an explosion in computing power and technologies that have catapulted computing into every aspect of human life in the 21st century. However, while new technologies increase the quality of life for the current generation, they also provide avenues for nefarious individuals to take advantage of others using those same technologies. To help counter this, the IT industry has made great strides in its efforts to protect users by developing security appliances, including firewalls, intrusion detection systems, encryption, passwords, two-factor authentication methods, and a layered approach to security, to name just a few. It is because of this effort by the IT industry to help protect users that the authors have identified unique cyber attack trends that could be referred to as a new “Moore's Law” as it pertains to cyber security. As computer technologies become more sophisticated and robust, malicious actions have become less sophisticated, and in many instances cyber exploitation and attacks occur without the use of technology. The authors have penned this concept as the “Adam & Eve Paradox”. The paradox construct is that, as technologies improve and network perimeters are hardened to prevent direct attacks against systems, users and systems are exploited at an exponentially increased rate by methods contrary to the technological improvements. Cyber criminals and hackers will always first attempt attacks against the easiest targets, known as the low-hanging forbidden fruit described in the biblical Adam & Eve story. While the IT industry continues to spend billions of dollars (US) annually to create appliances and develop software to protect its resources, data, and users, attackers are increasingly focusing their attention on the lowest hanging fruit, whether an unsuspecting user who clicks a link in an email or a helpful administrator who provides information to a false authority. As the IT industry moves in the direction of complex defensive tactics, attackers are moving towards less complex, softer targets that are more difficult to detect, block, and mitigate. It is the authors' intention to define and substantiate the “Adam & Eve Paradox”.

Keywords: malware, social engineering, Moore's law, cyber-attack, perimeter defense, Adam & Eve

1. Cyber-attack defined

Cyber-attacks can be defined as “an attempt by hackers to damage or destroy a computer network or system” (Lindberg, 2010). The manner in which an attack happens varies, and there is a voluminous number of tools and techniques attackers have at their disposal to carry out an attack. The sophistication of the attack will vary based on the least secure target in the hacker's scope of attack and the desired effect. Types of cyber attacks include:

Malware, spyware, Trojans, viruses
Phishing, spamming, spoofing
Social engineering
Denial of service
Web defacement (private and public)
Advanced Persistent Threats (APTs)
Physical (penetration, theft, breach of access)
Worms, botnets

There are several factors that must exist in order for a cyber-attack to be successfully executed: threat, vulnerability, and risk. A threat can be categorized as the person or organization conducting the attack. Threats include groups such as organized crime, state-sponsored hackers, hacktivists, novice hackers, disgruntled employees, and self-motivated hackers. The initial attack vector will often be the same for all threats: from the self-motivated hacker to the third-world adversary, the lowest hanging fruit in the scope of attack will be compromised first. In order for a threat to be legitimate, the following four conditions must be met:

There must be an available target

An opportunity must be present

There must be intent (a will to attack)

The aggressor must have the capability (resources and skill sets) to attack



Vulnerabilities can be described as flaws or security weaknesses in hardware, software or processes that expose a system to compromise and exploitation; this includes the human factor, in the form of social engineering and human-introduced vulnerabilities in existing systems. A hardened network can easily become a target of interest as soon as a user decides to install peer-to-peer file-sharing software on a work laptop, which then becomes a point of entry into a larger target. Risk is the potential that a given threat will exploit a known vulnerability. This includes the will to attack, even if that will is simply boredom, the capability to conduct a successful attack, and the resources to accomplish an attack successfully. Figure 1 below illustrates how the three factors are interrelated and together represent an ideal condition for a cyber attack to occur.

Figure 1: Threat‐vulnerability‐risk association

2. The Adam and Eve paradox

The Adam and Eve Paradox helps explain the continued increase in cyber attacks despite advances in the sophistication of computer and network security software and hardware. Cyber-attacks fluctuate in the types of attacks executed and the frequency with which they occur, but cyber attacks continually increase as a whole. As new advanced network defensive technologies and processes are developed, hackers and cyber criminals adapt and continue to attack the lowest hanging fruit, which often consists of exploits aimed at newly developed productivity or recreational programs, or at the same vulnerable programming language being used to create new software. The lowest hanging fruit concept, as it relates to cyber security, can also be described as the path of least resistance, a target of opportunity, the weakest link in the chain, or the least secure target in a secure environment.

Figure 2 illustrates how one type of attack methodology can decline due to better security appliances and coding standards. In the case of web defacement statistics for government web sites, better web development practices and increased network security have made it more difficult for those wanting to disrupt a web site or exfiltrate data from the web server. This downward trend does little to alter the increased number of annual (reported) cyber threats, shown in Figure 3, which rose by just under 300 percent in 2011 from the previous reporting year. Although the two examples provided seem counterintuitive, the resulting increase has been documented by IBM's X-Force Research and Development Team (Cross, 2012) as being due to the latest Internet technology: smart mobile devices. Figure 4 displays IBM's trend report identifying mobile devices as the latest attack trend, and the lowest hanging fruit.

Figure 2: Annual website defacements (Osisecurity.com.au, 2012)




Figure 3: Annual number of new threats (Dinan, 2009)

Figure 4: Increase of mobile exploits from 2006 – 2011 (Cross, 2012)

The examples above illustrate two points. First, as an attack trend matures, security-device and software developers will build capabilities to minimize its effects. Second, as network security professionals make it more difficult to attack using once-proven methods, hackers and cyber criminals migrate to other, less risky or less difficult methods, and the overall number of threats continues to rise.

3. The new attack surface

To better understand why hackers are now targeting the lowest hanging fruit, one must first understand how we got to this point: how new threats and vulnerabilities changed the way the industry responded with new appliances, policies, and skilled personnel.

New Threats: In 2000 the first major worm, known as the “I Love You” virus, was released. This worm affected more than 50 million computers, including those of the Pentagon, the CIA, and the British Parliament. It was a simple self-replicating worm that sent copies of itself to all entries in the target's Microsoft Outlook address book. In 2010 the Stuxnet worm, a sophisticated polymorphic worm, targeted Siemens industrial software and equipment. While it is not the first time hackers have targeted industrial systems, it is the first discovered malware that spies on and subverts industrial systems, and the first to include a programmable logic controller (PLC) rootkit.



New Vulnerabilities: There are many ways to classify vulnerabilities: hardware, software, network, personnel, sites, and organizational, to name a few. Each of these categories has its own laundry list of items that can be vulnerable; the lists are exhaustive and would go well beyond the scope of this paper. As new defensive mechanisms are put into place to mitigate these vulnerabilities, attackers continuously identify and/or develop new tools and techniques to exploit existing and new vulnerabilities. Because of these new threats and vulnerabilities, the following has been the industry response to better protect organizational networks.

New Security Appliances: As attacks have become part of everyday life, companies have fought back by designing security appliances to help protect networks from outside attacks. These appliances can be categorized by both type and focus of attack. The types include active, passive, preventative, and unified. Specific focuses for these appliances include web services, email services, firewalls, VPNs, proxies, intrusion detection/prevention systems, and even heuristic (intelligent) systems.

Better Security Policies: Because computers and the data they provide have become a necessity for conducting business, security policies have become mandatory, both by law and by necessity. Security policies have been developed in an attempt to address specific issues, but at the expense of productivity and personal usage. Many of the policies in place today were mandated by law, such as those dealing with financial transactions and with personal data protection, such as the Health Insurance Portability and Accountability Act of 1996 (HIPAA). More areas could be included in this list, as any available means of communication that is vulnerable to attack could, and should, have a security policy.

Security Professionals: As hackers increase their capability through automated tools and technologies, network security personnel have become more sophisticated. In the early days of information technology (IT), the system administrator was usually responsible for maintaining computers and network resources, and also for ensuring the security of each device. In today's IT realm, companies hire security experts who are solely responsible for the security of computers and networking devices. Network compartmentalization has become a necessity to combat a hacking community comprised of novice to expert practitioners.

Takeaway: The attack surface has been hardened using defense-in-depth, forcing organizations to make changes such as new policies, new appliances, and certified professionals. Attacking head-on can be compared to the characters in the story of David and Goliath. Attackers are forced to keep their distance and find the weaknesses of the target before attacking. These weaknesses are the ideology and foundation of the Adam and Eve Paradox: identifying and attacking the low hanging fruit.

4. Low hanging fruit defined

The lowest hanging fruit concept refers to targets or goals that are easily achievable because of the low level of effort and risk required to attain them. In terms of cyber security, the authors have selected the following examples of “low hanging fruit” as they relate to the overall cyber security landscape. Based upon the evolution of security and the costs associated with the cyber realm, these items are the low hanging fruit (lowest-level-of-effort items) on which attackers are focusing their efforts as organizations implement defense-in-depth and as complex technical safeguards become much harder to defeat.



Again, the following examples are what the authors deem “low hanging fruit” because of how attackers leverage these tools/techniques rather than attacking more advanced appliances and computer policies head-on.

Social Engineering: Social engineering is a non-technical method of deceiving a person or persons into divulging sensitive information that would otherwise be unavailable or difficult to obtain through technical means. “While similar to a confidence trick or simple fraud, the term typically applies to trickery or deception for the purpose of information gathering, fraud, or computer system access; in most cases the attacker never comes face-to-face with the victim” (Hadnagy, 2011). Organizational employees will often be threatened by a show of authority or an insistence on absolute compliance by the attacker, and providing seemingly innocent information is easier than facing possible termination for non-compliance. A September 2011 report by Check Point demonstrates just how prevalent social engineering has become. The report reveals that 48% of large companies, and 32% of companies of all sizes surveyed, have been victims of social engineering, experiencing 25 or more attacks in the past two years and costing businesses anywhere from $25,000 to over $100,000 per security incident (Research, 2011). Social engineering is a prime example of the Adam and Eve Paradox concept of the lowest hanging fruit because, as history has proven, people are one of the greatest security risks in an organization. Examples of hackers calling system administrators to retrieve user passwords, or gaining physical access to an organization's server room by dressing like maintenance personnel, are too numerous to list. Social engineering provides attackers with the ability to gain access to areas and systems considered impenetrable, without using sophisticated tools.

Phishing/Spear Phishing: According to bestselling author and internationally renowned security technologist Bruce Schneier, “Phishing is when an attacker sends you an e-mail falsely claiming to be a legitimate business in order to trick you into giving away your account info—passwords, mostly” (Schneier, 2008). Phishing takes social engineering to the level of electronic communications, targeting personnel via e-mail (typically) while posing as the local administrator, security manager, or someone of importance who needs their credentials to assist with “something”. Spear phishing is similar to phishing; however, the target of interest is elevated to a person of importance, such as a CEO or an accounting department manager. According to an InformationWeek report, “More than 500 million phishing emails show up in our inboxes every day. While this number pales in comparison to spam, which accounts for almost 70% of all email traffic, spam is mainly a nuisance, whereas phishing can lead to costly security breaches...This is just the tip of the iceberg, as more-targeted ‘spear phishing’ attacks can lead to potentially devastating security breaches, loss of sensitive data, and significant financial losses” (Sadeh, 2012).

Social Networks: Social networks are an extension of social engineering, arguably providing an adversary exponentially more opportunities to convince targets they are someone they really aren't, or to acquire the persona of a victim in order to escalate an attack.
It is through the use of social networks (e.g., Twitter, Facebook, and LinkedIn) that attackers acquire the necessary information and data on a target for a successful attack. Social networks have become an invaluable resource for attackers.

Pre-Texting: Pretexting is said to be “just a story or lie that you will act out during a social engineering engagement, but that definition is very limiting. Pretexting is better defined as the background story, dress, grooming, personality, and attitude that make up the character you will be for the social engineering” (Hadnagy, 2011). Pretexting works hand-in-hand with social networking and social engineering. When using pretexting in combination with social networks, attackers can gather an enormous amount of information about a target and then use social engineering combined with pre-texting to exploit the target.

Theft: Theft is an attacker's “attack of convenience”. Although this is another non-technical method of obtaining potentially valuable information, access credentials, software, and/or hardware, it is also an example of an attacker taking advantage of the least secured target. A laptop in a secured environment is considered mostly secure, but once the laptop is removed from that environment, its level of exposure and possibility of compromise increase. One event demonstrating this, which made headlines in 2006, was the theft of a Department of Veterans Affairs employee laptop that contained the sensitive information of 26.5 million veterans and military personnel (Howard & Prince, 2011). Once an attacker has physical access to a machine, unless the hard drive is encrypted to a high standard, the attacker will have access to everything on the computer. This information could include personal information, passwords, bank account information, and business proprietary information, to say the least.

Malware: Malware is an interesting insecurity to consider as “low hanging fruit” because of its complex technical nature. First the authors will define what malware is, and then why it is considered low hanging fruit. An IRMI definition provides this lengthy but insightful explanation of malware: “Malware is short for malicious software and means any software or code developed or used for compromising or harming information assets without the owner's informed consent. Malware enables or prolongs access, captures data, and/or furthers the attack. The most common means of infection for malware is installation or injection by a remote attacker, constituting 81 percent of malware infections” (Krasnow & Dorsey & Whitney LLP, 2012). More simply put, “Malware can be defined as any unintended and unsolicited installation of software on a system without the user knowing or wanting it” (Harper, et al., 2011). Malware can have a prolonged effect on an unknowing target, as it can persist well beyond the initial phase of the compromise. This persistence often allows for the possibility of compromised data, access to trusted network resources, and a backdoor instantiated for remote access by an attacker. As already pointed out, social engineering, phishing, social networks, and pretexting are all methods attackers use against targets; the question is what all these methods attempt to do. They attempt to exploit the target, and malware is the payload of choice once a target is exploited. The functions malware provides an attacker are invaluable, offering a backdoor or a continuous stream of uploaded data. If attackers can get a user to open an infected email attachment, or to click a link that takes them to an infected website (all forms of social engineering and phishing), then they can deliver the malware payload and wreak havoc on a target. Malware is also considered low hanging fruit because it can be passive: if an attacker sets up a website loaded with malware, the website will simply sit there until someone visits and receives the “drive-by” download. No further action (other than the phishing and social engineering that bring in website visitors) is required.
The attacker doesn't have to take on the network perimeter of an organization head-on. There are no firewalls to break through, no intrusion detection systems to thwart, and no time spent trying to defeat the costly appliances put in place by organizations. Attackers just create the website with the malware and get the targets to come to them.

Demilitarized Zone (DMZ) Devices: Organizational DMZ devices will consistently be considered items of interest for attackers because of their accessibility. These devices are usually placed in the DMZ (outside of the local area network) to provide services to the outside world. They range from email servers, web servers, and FTP servers to VoIP servers and DNS servers. All are available to outside users and are therefore subject to head-on attack with limited in-line security, as these devices are intentionally forward-facing to allow public access. Each type of server has a variety of vulnerabilities attackers will try to exploit. For the purposes of this paper, the authors will focus on web servers with regard to the lowest hanging fruit as part of the Adam and Eve Paradox.

The lowest hanging fruit as it pertains to web servers is SQL injection. SQL injection defined: “It is the vulnerability that results when you give an attacker the ability to influence the Structured Query Language (SQL) queries that an application passes to a back-end database. By being able to influence what is passed to the database, the attacker can leverage the syntax and capabilities of SQL itself, as well as the power and flexibility of supporting database functionality and operating system functionality available to the database. SQL injection is not a vulnerability that exclusively affects Web applications; any code that accepts input from an untrusted source and then uses that input to form dynamic SQL statements could be vulnerable” (Clarke, et al., 2012). As reported by the Web Hacking Incident Database 2007 annual report (Shezaf & Teams, 2007), SQL injection accounted for 20% of all attacks against web servers. Figure 5 below shows the top 10 attack types recorded in that report.

Figure 5: Website attack types 2007

According to the same annual report for 2008 (Barnett, 2008), SQL injection jumped to 30%, as depicted in Figure 6 below.

Figure 6: Website attack types 2008

This is significant because most web sites that offer services have some kind of input field linking the front-end user interface to a back-end database server. The back-end database server is more secure and often unreachable from external communications; however, there is a trusted relationship that allows communications to occur between the front-end web site input fields and the database. It is also significant because organizations that want to make a profit or provide a service to the public need to have these resources available on the Internet. Attackers know this and will always target these resources rather than trying to get inside the perimeter, as discussed earlier. To further demonstrate the widespread use of SQL injection, an InformationWeek report breaks down web site attacks and likewise highlights SQL injection as the most used attack type. Figure 7 from this report demonstrates this fact (Prince, 2012).
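To illustrate the mechanics, the following C fragment contrasts an injectable query with a parameterized one. It uses SQLite purely for brevity, and the table, column, and function names are invented for this example.

#include <stdio.h>
#include <sqlite3.h>

/* VULNERABLE: attacker-controlled 'user' is spliced into the SQL text.
 * Input such as   ' OR '1'='1   changes the meaning of the query. */
int check_login_bad(sqlite3 *db, const char *user)
{
    char sql[256];
    snprintf(sql, sizeof sql,
             "SELECT id FROM users WHERE name = '%s';", user);
    return sqlite3_exec(db, sql, NULL, NULL, NULL);
}

/* SAFER: the input is bound as data and is never parsed as SQL syntax. */
int check_login_ok(sqlite3 *db, const char *user)
{
    sqlite3_stmt *stmt;
    int rc = sqlite3_prepare_v2(db,
             "SELECT id FROM users WHERE name = ?;", -1, &stmt, NULL);
    if (rc != SQLITE_OK)
        return rc;
    sqlite3_bind_text(stmt, 1, user, -1, SQLITE_TRANSIENT);
    rc = sqlite3_step(stmt);
    sqlite3_finalize(stmt);
    return rc;
}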

5. Why attackers choose the low hanging fruit

The low hanging fruit identified in this paper will always be the path of least resistance. Attackers will not often waste time, energy, or resources taking on the expensive, highly technical appliances being put in place by organizations; it is human nature to find the path of least resistance to reach one's goals. Advanced security appliances are being used to deter and reject direct attacks against an organization's critical infrastructure, but attackers do not face these appliances when less security-conscious employees are freely opening malware-filled e-mails, clicking on malicious links on random web pages, and downloading third-party software without approval.




Figure 7: Website attack methods as reported by InformationWeek

6. Summary

As organizations fight to protect their cyber assets, they continue to spend a large portion of their IT budgets on security appliances, out-sourced security professionals, and liabilities. The process of defending network assets and the data they contain has led the IT market to produce highly specialized and capable appliances that have made it difficult for attackers to remotely exploit and compromise networks. These appliances, and the resources required to maintain an experienced IT security work force, are a necessary component of the layered security approach. Organizations must continue to invest in emerging security technologies to remain protected against future waves of innovative attacks by cyber criminals and hackers. One result of this aggressive defense is that cyber criminals and hackers are resorting to less technical avenues, using the human factor or low-risk web-based attacks (the lowest hanging fruit) to accomplish their goals. These vectors of attack include social engineering, social network manipulation, phishing/spear phishing, self-propagating malware, and web server SQL attacks. As computer technologies become more sophisticated, malicious actions become less technical, and in many instances cyber exploitation occurs using only social engineering methods. Therefore, as network security expenditures on security appliances and out-sourced consulting requirements increase, the cost of a network attack has decreased, creating what the authors have coined “the Adam and Eve Paradox”.

References

Barnett, R., 2008. The Web Application Security Consortium / Web Hacking Incident Database 2008 Annual Report. [Online] Available at: http://projects.webappsec.org/w/page/27087349/Web%20Hacking%20Incident%20Database%202008%20Annual%20Report [Accessed 30 September 2012].
Brock, D., 2006. Understanding Moore's Law: Four Decades of Innovation. 1st ed. Philadelphia: Chemical Heritage Foundation.
Clarke, J. et al., 2012. SQL Injection Attacks and Defense, Second Edition. 2nd ed. Waltham: Syngress Publishing.
Cross, T., 2012. IBM X-Force Trend & Risk Report Shows Progress Against Security Threats But Attackers Adapt. [Online] Available at: http://asmarterplanet.com/blog/2012/03/ibm-x-force-trend-risk-report-shows-progress-against-security-threats-but-attackers-adapt.html [Accessed 26 October 2012].
Dinan, M., 14 April 2009. Taxpayers Beware: Cyber-Criminals Seek to Intercept IRS Filings. [Online] Available at: http://sip-trunking.tmcnet.com/topics/security/articles/54168-taxpayers-beware-cyber-criminals-seek-intercept-irs-filings.htm [Accessed 15 November 2012].
Hadnagy, C., 2011. Social Engineering: The Art of Human Hacking. 1st ed. Indianapolis: Wiley Publishing Inc.
Harper, A. et al., 2011. Gray Hat Hacking: The Ethical Hacker's Handbook, Third Edition. 3rd ed. s.l.: McGraw-Hill Companies.
Howard, D. & Prince, K., 2011. Security 2020—Reduce Security Risks This Decade. Indianapolis: Wiley Publishing, Inc.
Krasnow, M. J. & Dorsey & Whitney LLP, 2012. IRMI.com: Cyber Threats Contributing to Breaches. [Online] Available at: http://www.irmi.com/expert/articles/2012/krasnow01-cyber-privacy-risk-insurance.aspx [Accessed 30 September 2012].



Lindberg, C. A., 2010. New Oxford American Dictionary. 3rd ed. USA: Oxford University Press.
Osisecurity.com.au, 2012. Web Application Security Testing | OSI Security. [Online] Available at: http://www.osisecurity.com.au/solutions/web-app-security-testing [Accessed November 2012].
Prince, B., 2012. InformationWeek Reports :: Strategy: How Attackers Find and Exploit Database Vulnerabilities. [Online] Available at: http://reports.informationweek.com/abstract/21/8851/Security/strategy-how-attackers-find-and-exploit-database-vulnerabilities.html [Accessed 30 September 2012].
Research, D., 2011. Social Engineering Survey. [Online] Available at: http://www.checkpoint.com/press/downloads/social-engineering-survey.pdf [Accessed September 2012].
Sadeh, N. M. a. P., 2012. Why Phish Should Not Be Treated as Spam | Dr Dobb's. [Online] Available at: http://www.drdobbs.com/security/why-phish-should-not-be-treated-as-spam/240001777 [Accessed 30 September 2012].
Schneier, B., 2008. Schneier on Security. Indianapolis: Wiley Publishing Inc.
Shezaf, O. & Teams, B. S. L., 2007. The Web Hacking Incidents Database Annual Report 2007. [Online] Available at: http://projects.webappsec.org/w/page/13246990/Web%20Hacking%20Incident%20Database%202007%20Annual%20Report [Accessed 30 September 2012].



Offensive Cyber Initiative Framework (OCIF) Raid and Re-Spawn Project

David Rohret, Michael Vella, and Michael Kraft
Computer Sciences Corporation, Inc., San Antonio, USA
drohret@ieee.org
mvella3@csc.com
mkraft5@csc.com

Abstract: During the 2010 European Conference on Information Warfare (ECIW) the authors unveiled their Offensive Cyber Initiative Framework (OCIF), which outlined an approach for an offensive cyber initiative that included cyber intelligence, trends and predictive analysis, in-line exploit tool development, integration of supporting technologies, continual reconnaissance, and implementation plans, integrated into a single comprehensive offensive cyber framework (Rohret, 2010). The unique aspect of the OCIF project was the autonomy with which, once released, the program would complete a mission or set of goals without user intervention or support through traceable proxy services. One requirement that was not addressed, but that the authors identified as necessary for such a system to be successful, is the ability to recover from a counter attack or adverse effect and re-spawn as a viable system. In the following paper the authors review the OCIF framework and its requirements for resiliency, and outline their solution for this additional capability, which will provide autonomous functional resiliency for an OCIF-based tool.

Advancements in cyber security, including the ability of network security companies and government agencies to identify and mitigate new or legacy attacks, have significantly decreased the life span of exploits and attack tools. Coupled with more robust operating systems and a generation of better-educated system administrators, the longevity of any attack tool can be measured in hours rather than days. If a unique attack exploit or methodology is developed, it is often withheld, for once it is released it will rapidly be countered or defeated. Furthermore, a releasing organization engaging in cyber warfare will wish to remain anonymous in order to prevent political fallout or a retaliatory attack. Polymorphism and code mutation have been key components in successful attacks, but for a tool to continue a specific mission, resiliency must be developed into the tool's framework. Often touted as a system requirement, resiliency is rarely achieved in software projects due to the cost in time and other resources. The least complicated and least expensive method of resiliency is achieved through redundancy of systems, providing immediate failsafe measures should an adverse condition arise at an alternate operating location (AOL). Unfortunately, an autonomous system with the requirement to remain anonymous cannot afford to be duplicated throughout the World Wide Web or be linked to multiple AOLs, as multiple instances (especially static instances) would amplify the system's exposure to an adversary.

The authors' solution is a cloud-based, non-traditional redundant array of independent disks (RAID) that will allow an autonomous tool to replicate itself using concatenation of a large span of disks. The necessary disk space is acquired through easily exploited systems throughout the cloud. Each span or node is separated from other nodes and wholly contained, allowing one node to be corrupted or discovered without affecting other nodes. In this way an OCIF-based system will survive counter attacks, reconstitute, and continue the mission with the new instance.
The authors will discuss the various methods of RAID and demonstrate their solution and results through a simulated cloud environment.

Keywords: cloud RAID, RACS, RAIC, re-spawn, functional resiliency, autonomous cyber warfare

1. Offensive cyber initiative framework overview

An approach for an autonomous offensive cyber initiative which includes cyber intelligence, trends and predictive analysis, tool development, functional resilience and resonance, reconnaissance, and implementation plans (integrated into a self-healing framework) will provide decision makers and planners the ability to incorporate cyber warfare actions into an overall concept of operations. This capability allows action officers to implement cyber warfare in support of mission goals in the same manner that air, sea, and ground actions are currently implemented. Central to this approach is a flexible and interchangeable offensive operations framework used to organize, coordinate, and integrate offensive technology, while reducing human error and providing the ability to align cyber actions with kinetic mission objectives. An operational prototype will implement a framework using technology from publicly available and open-source tools that includes intelligence collection, threat/target determination, custom and open-source weaponized exploits, covert infrastructures, and code obfuscation (anonymity). Figure 1 displays a duality system that provides control and feedback from both operational and decision (or mission criteria) events. The offensive cyber framework, developed within expert systems, supports all facets and levels of cyber offensive, defensive, and covert actions. Minimum requirements for an OCIF system include:

Long‐term reconnaissance and target enumeration


Stealth and deception

Slow, low‐bandwidth and time sensitive attacks

Emulating a mutual adversary’s capabilities

Remote proxy attacks (anonymity through foreign proxy servers)

Utilizing an adversary’s SATCOM resources for launching cyber operations

Code obfuscation

Randomization of tool sets to deter signature attack mapping

Covert and overt communications (data and analog)

Social engineering and psychological operations

Isolating and securing military IT assets (bandwidth manipulation)

Pre‐emptive attack requirements and rules of engagement (mission data)

Manipulating information, altering content, disinformation

Target‐specific malicious logic attacks

Battle damage assessment and forensics analysis

Noticeably missing from the OCIF requirements list is the ability to reconstitute following a counter attack or any other type of adverse action that would cause the system to fail or become non‐operational.

Figure 1: Duality controls via mission set data

2. Overview of OCIF RAID requirements

In their 1989 groundbreaking paper, Patterson, Gibson, and Katz were the first to discuss using a redundant array of inexpensive disks to create a level of reliability in massive computing systems (Patterson, Gibson, and Katz, 1989). Although their paper was primarily concerned with determining accurate disk failure rates and with applying one of five RAID architectures to overcome the specific problems discovered in each, their research provided the foundation for the resiliency of today's massive enterprise networks that constitute the Internet and current cloud architectures. RAID systems, including those touted to be Cloud RAID systems, rely on a fixed number of static systems under the control of the parent systems.
RAIDs are not meant to be hidden from their owners, nor are they required to reconstitute or spawn a new system without the owning network's approval or knowledge. RAID systems are uniform and provide the same data storage and encryption algorithms for each drive; in this way they are predictable, manageable, and maintainable. Unfortunately, the OCIF architecture does not satisfy several of the requirements of a standard RAID architecture. An OCIF RAID requires non-uniform disks of varying sizes. The OCIF RAID alters the number of storage disks based on availability, and re-spawns a system at random intervals based on last contact with a parent system. Finally, the OCIF RAID is not time restricted in data storage and recovery: to remain anonymous, even to its parent systems, storage and recovery are scheduled around international time zones and may take days, not hours or minutes, to accomplish.

These requirements are a result of OCIF's function as an autonomous attack tool rather than a productivity or data tool. A sophisticated cyber attack may take weeks of passive and active enumeration before a viable target is confirmed, and just as long to conduct. Therefore, system survivability is paramount, and data storage or recovery times are a lower priority. By system survivability the authors mean the ability to re-spawn via the RAID and continue a mission, not the survivability of the OCIF cyber tool itself, which can be reconstituted when necessary. Figure 2 provides a view of how each system component interacts during a mission while remaining autonomous from the others. Within an OCIF Cloud Node there are four major components:

The discovery program: an independent program used for proxy discovery

The Offensive Cyber Framework: the action tool (cyber weapon)

Data RAID: receives and stores data associated with the re‐spawn of the discovery program and non‐OCIF data

Action RAID: receives and stores data associated with the OCIF polymorphic worm

Autonomous and wholly independent Ping Utilities are the coordination tools that link components and system information as necessary. Ping Utilities are outside the scope of this paper, but are mentioned several times to help define and clarify OCIF RAID requirements. As displayed in Figure 2, the RAID systems were designed to mimic an enterprise RAID, and each should be viewed as a separate entity or as a server located at an AOL. Each RAID is connected to multiple nodes, but will terminate a connection once triple redundancy is achieved, coordinate with a Ping Utility, and spawn another node.

A proxy represents a compromised system being used for data storage or processing time, depending on its component connection. Once a proxy accomplishes a job, it is terminated and no longer used; its data and logs are erased to further promote anonymity and to prevent positive attribution. Figure 2 displays five proxies being used simultaneously by the OCIF component. This allows the OCIF program to accomplish an action, retrieve the data or return values, kill the proxy, and start a new job with a new proxy, continuing until the mission is completed. New proxies are continually acquired and put into sleeper status by the Discovery program until they are needed.

Like the OCIF component, Action RAIDs (ARs) and Data RAIDs (DRs) have multiple proxy sets that store multiple RAIDs. ARs and DRs are not required to work in pairs, although they maintain a versioning algorithm to ensure compatibility. In this way a DR can be recruited to supply mission data to an AR that has lost its data to defensive actions, as long as it is in one of the three connecting nodes. Out-of-node DRs and ARs can be utilized if necessary, but their effectiveness may be limited by the incompatibility issues that will arise. An analogy would be running 32-bit software developed for an older Windows operating system on a 64-bit system; depending on the tools, they may or may not function as intended.

The Discovery program is independent of all other components, as illustrated in Figure 2. Its sole purpose is to identify new proxies that meet the criteria for the OCIF system. Once a proxy candidate is identified, it is linked to one of the polymorphic worms, which compromises the system and establishes the necessary connections, disk space, and internal security to create a proxy type. Each proxy type is associated with a specific component and provides immediate access by that component when required. Discovery programs are linked to Cloud Nodes by Ping Utilities but are not associated with a specific Cloud Node.
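The interaction between a Ping Utility and the components it tracks can be made concrete with a short sketch. The Python below is purely illustrative: the class name, the respawn_from_raid callback, and the six-hour to three-day deadline bounds are assumptions made for the example, not details of the OCIF implementation; the only behavior taken from the text is that a re-spawn fires at a randomized interval measured from last contact with a parent system.

    import random
    import time

    RESPAWN_MIN_S = 6 * 3600    # assumed lower bound: six hours since last contact
    RESPAWN_MAX_S = 72 * 3600   # assumed upper bound: three days

    class PingUtility:
        def __init__(self):
            self.last_contact = {}   # component id -> epoch seconds of last ping
            self.deadline = {}       # component id -> randomized re-spawn deadline

        def record_ping(self, component_id):
            # A component has checked in; reset its randomized deadline.
            now = time.time()
            self.last_contact[component_id] = now
            self.deadline[component_id] = now + random.uniform(RESPAWN_MIN_S, RESPAWN_MAX_S)

        def sweep(self, respawn_from_raid):
            # Re-spawn any component whose randomized deadline has lapsed,
            # then treat the re-spawn itself as fresh contact.
            now = time.time()
            for cid, due in list(self.deadline.items()):
                if now > due:
                    respawn_from_raid(cid)
                    self.record_ping(cid)

    pu = PingUtility()
    pu.record_ping("OCIF-component-1")
    pu.sweep(lambda cid: print("re-spawning", cid))   # nothing due yet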


Figure 2: OCIF cloud node architecture

The management of the entire OCIF cloud structure is provided by multiple administration nodes, referred to by the authors as Mother Ships (MS), as described in Figure 3. These systems coordinate processes and mission actions throughout the entire cloud. There are only three active MSs at any one time, although there can be up to nine, as redundancy is also needed in the case of an adverse defensive action. Each MS controls Cloud Nodes in eight pre-defined time zones; daylight saving time is not followed, to prevent the inclusion of a 25th time zone. Time zones were chosen as divisions for the MS nodes for two reasons:

Time zones provide an appropriate geographical divisor and are quantitative (measurable)

Geographical and cultural effects on bandwidth usage (Internet use) can be easily managed

On average, home residents and small businesses tend to utilize their systems and networks less between 2300 and 0600 local time, making these periods optimal for an OCIF system to accomplish a mission or task. Furthermore, weekends and holidays see a jump in late evening and early morning use, while daytime use declines slightly. Table 1 displays Internet traffic bandwidth and usage tests for different countries at 1245 CST on August 16, 2012 (ITR, August 16 2012). From the data provided it is possible to conclude that evening Internet activity in Europe, mid-day activity in North America, and early morning use in Asia would make proxy activity less attractive, due to high Internet use in those regions, while Australia and the Pacific region provide ample bandwidth to conduct operations. Table 2 further details which routers in a region (in this case Europe) are operational, and their current workloads (ITR, August 16 2012). This data is valuable for avoiding regions or countries that have poor infrastructure in place or are experiencing adverse conditions. Underutilized systems that provide more processing time and reduce the risk of being identified or discovered are cataloged and maintained in a Discovery Program's database. By scheduling missions during these times, MSs can pass a job to another time zone in order to continue the job non-stop.
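The zone-to-zone handoff this enables can be illustrated with a short sketch. Only the 2300-0600 window is taken from the usage figures above; the function names and the UTC-offset zone model are assumptions made for the example.

    from datetime import datetime, timedelta, timezone

    QUIET_START, QUIET_END = 23, 6   # local hours bracketing the low-usage window

    def in_quiet_window(utc_offset_hours, now_utc=None):
        # True if the proxy's local clock falls inside the 2300-0600 window.
        now_utc = now_utc or datetime.now(timezone.utc)
        local = now_utc + timedelta(hours=utc_offset_hours)
        return local.hour >= QUIET_START or local.hour < QUIET_END

    def zones_open_for_work(utc_offsets, now_utc=None):
        # Zones currently inside their quiet window: candidates for an MS
        # to hand the running job to so it continues non-stop.
        return [off for off in utc_offsets if in_quiet_window(off, now_utc)]

    # At 1845 UTC (1245 CST), offsets +8 and +10 are inside the window.
    probe = datetime(2012, 8, 16, 18, 45, tzinfo=timezone.utc)
    print(zones_open_for_work((-6, 0, 2, 8, 10), probe))   # [8, 10]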


Figure 3: Multiple MSs and PLs provide administration for each cloud node

Table 1: Traffic tests at 1245 CST (internettrafficreport.com, August 16 2012)

Table 2: Router health (http://www.internettrafficreport.com/europe.htm, August 16 2012)

As illustrated in Figure 3, MS nodes do not communicate directly, to prevent a defensive action from affecting the entire cloud; rather, they communicate using Ping Life (PL) nodes. PL nodes maintain direct communication with each time zone's Ping Utilities and coordinate mission data between MSs. As with the previously discussed Cloud Node components, each MS and PL has multiple proxies for processing and RAID requirements. PLs are also a means to disable or terminate operations throughout the cloud, but that architecture is outside the scope of this paper.
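A toy sketch of this indirection follows; every name in it is assumed for illustration. The point it demonstrates is simply that Mother Ships never address one another, and a PL node relays mission data between them.

    class PingLife:
        def __init__(self):
            self.mailboxes = {}   # MS id -> queued mission-data messages

        def relay(self, from_ms, to_ms, payload):
            # Queue payload for to_ms; from_ms never addresses to_ms directly.
            self.mailboxes.setdefault(to_ms, []).append((from_ms, payload))

        def collect(self, ms_id):
            # An MS polls its PL for anything relayed to it.
            return self.mailboxes.pop(ms_id, [])

    pl = PingLife()
    pl.relay("MS-1", "MS-2", {"zone": "UTC+8", "job": "continue-recon"})
    print(pl.collect("MS-2"))   # [('MS-1', {'zone': 'UTC+8', 'job': 'continue-recon'})]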


3. Determination of RAID type

The overview of the OCIF system was necessary to detail the requirements of a RAID system that will provide a robust resiliency capability to an autonomous system. Research into available cloud RAID systems identified multiple solutions for a Redundant Array of Independent Clouds (RAIC). Diego Righi's system, which uses a series of available cloud storage services, provides an inexpensive solution (Righi, June 2012). Righi's method provides a global storage solution that does not require the exploitation of non-participating systems. Similarly, Least Authority Enterprise (LAE) builds its RAIC storage solution on existing cloud storage services that support an HTTP API, with the advantage that it can sync with services that do not provide an API (LAE, June 2012). Although both development efforts are attractive, each has several issues to overcome before it could be considered for an OCIF system:

The storage is out of the OCIF system’s control, and if storage systems fail there is no recourse

The use of available storage does not account for time zones or regions

Governments and law enforcement have the ability to search and analyze data in commercial or client-based storage systems

OCIF systems cannot use the storage services as a processing service if necessary

Only HTTP APIs are supported

Abu-Libdeh, Princehouse, and Weatherspoon, in their paper RACS: A Case for Cloud Storage Diversity (June 2010), solve most of these issues by developing a Redundant Array of Cloud Storage (RACS). Their implementation is based on a traditional RAID architecture rather than on simplified available-storage solutions, as illustrated in Figure 4. The RACS solution coordinates multiple RACS proxies via a 'Zookeeper'. Abu-Libdeh, Princehouse, and Weatherspoon also greatly increase resiliency by striping data across multiple vendors, providing the redundancy and hot-swap capability of a RAID 5, but in a virtual sense; a minimal sketch of the idea appears below. The basic architecture in Figure 4 closely resembles the OCIF cloud node in Figure 2, and provided the direction for further testing.
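As a rough illustration of the RAID 5-style redundancy RACS borrows, the sketch below stripes two data blocks and one XOR parity block across three stores; losing any one store leaves enough information to rebuild it. This is a textbook parity example, not RACS code, and its equal-size blocks are exactly the assumption the OCIF design cannot make.

    def xor(a, b):
        # Byte-wise XOR of two equal-length blocks.
        return bytes(x ^ y for x, y in zip(a, b))

    def write_stripe(d0, d1):
        # Two data blocks plus one XOR parity block, one per store.
        return [d0, d1, xor(d0, d1)]

    def rebuild(stripe, lost):
        # Recover the block held by a failed store from the two survivors.
        survivors = [blk for i, blk in enumerate(stripe) if i != lost]
        return xor(survivors[0], survivors[1])

    stripe = write_stripe(b"AAAA", b"BBBB")
    assert rebuild(stripe, 0) == b"AAAA"                # lose store 0, rebuild it
    assert rebuild(stripe, 2) == xor(b"AAAA", b"BBBB")  # parity is rebuildable too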

Figure 4: RACS architecture (http://www.cs.cornell.edu/projects/racs/, June 2012)

Abu-Libdeh, Princehouse, and Weatherspoon's RACS solution still posed challenges for the OCIF development team, such as the use of vendor-supplied storage for RAID creation, and the necessity of having disk space of equal size for consistent reconstitution of a lost drive (on an OCIF system). In order to create an OCIF-compatible RAID architecture, the authors reviewed traditional RAID types to identify a robust solution that could be efficiently manipulated to meet all of the OCIF system requirements, including non-standard disk-size reconstitution for use on unspecified systems. The following provides a quick synopsis of the authors' review of specific RAID types:

RAID 0 and RAID 1: RAID 0 uses block‐level striping without parity or mirroring and does not provide redundancy. RAID 1 simply mirrors the data without parity or striping and requires each storage system to be of equal or larger size than the data requirements. Both solutions were deemed unsatisfactory to support the OCIF system.


RAID 2 and RAID 3: both require all disk spindle rotations to be synchronized for bit-level parity. Because synchronization between the proxies used as storage devices is impossible, these RAID types do not meet OCIF storage requirements.

RAID 4-6: although these RAID types provide for data reconstruction (RAID 6 in particular uses double-parity fault reconstruction, allowing up to two failed drives to be reconstituted), they require access to at least one drive (or OCIF proxy virtual drive) at all times. This is an unacceptable requirement, as access to a virtual drive during specific time-zone job transfers may not be possible.

Non-standard RAID solutions, specifically JBOD (Just a Bunch of Drives) and concatenation, or spanning, of disks, were also reviewed to determine whether they were compatible with the OCIF Cloud requirements. In each case a drive (a virtual drive for an OCIF component) would be recreated within a span of disks, the virtual drives being based on OCIF proxies. Because JBOD and similar concepts do not require a standard disk size or amount of free space to create a redundant disk, this method of storage gives OCIF the flexibility to create multiple redundant disks for the reconstitution of a component, or for the re-spawning of an entirely new cloud node; a sketch of the underlying block mapping follows Figure 5.

Figure 5 displays a diagram in which an unspecified number of OCIF proxies would be used to create a resilient block of data that could be used to reconstitute an OCIF component. In each case, the OCIF Discovery Program identifies acceptable proxies based on a number of criteria, including available disk space, uptime, and operating system. These three criteria help determine the amount of disk space available for surreptitious use, how reliable the system will be, and whether the system's disk space will remain relatively stable (on a newer system the user may rapidly occupy available space, whereas on an older system the available disk space is more likely to remain static). Figure 5 also shows that no more than 30% of the available disk space was used for testing. In an actual deployment, it would be more practical to reduce the amount of disk space occupied by OCIF components to less than 10%. Although a larger number of proxies would be required, this would also provide greater security: the risk of interfering with user operations would be largely diminished, and a hidden partition of 10% or less of the user's available disk space would, in most cases, go unnoticed.

Figure 5: The OCIF Cloud RAID utilizes a portion of unused disk space on each proxy
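The sketch below illustrates the block mapping such a span implies: logical 64 KB blocks laid end-to-end across proxies of unequal usable capacity. The proxy names and sizes are invented for the example; only the 64 KB block size and the spanning concept come from the text.

    BLOCK = 64 * 1024   # the standard 64 KB RAID block size used in the tests

    def build_span(proxies):
        # proxies: list of (name, usable_bytes), in span order.
        # Returns (name, first_block, last_block_exclusive) per proxy.
        span, start = [], 0
        for name, usable in proxies:
            nblocks = usable // BLOCK
            span.append((name, start, start + nblocks))
            start += nblocks
        return span

    def locate(span, logical_block):
        # Map a logical block number to (proxy, block offset on that proxy).
        for name, lo, hi in span:
            if lo <= logical_block < hi:
                return name, logical_block - lo
        raise ValueError("block beyond end of span")

    span = build_span([("proxy-a", 512 * 1024**2), ("proxy-b", 10 * 1024**3)])
    print(locate(span, 9000))   # past proxy-a's 8192 blocks, so lands on proxy-b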

4. Testing results

In order to create a Cloud Node for RAID testing, the authors used five systems simulating proxies with different operating systems, amounts of available disk space, and processor types. Processors were included to expose a latency component in processing data. Table 3 lists the systems used for each test.


Table 3: OCIF RAID test systems

Operating System                  Available Disk Space / Available OCIF Space   Processor
Windows XP, SP2                   1.5 GB / 0.5 GB                               2.6 GHz, Intel
Windows Vista SP1                 32 GB / 10.6 GB                               2.6 GHz, Intel
Windows 2000 SP3                  155 GB / 51.6 GB                              1.4 GHz, Intel
Linux, Kernel 2.6.2               5 GB / 1.6 GB                                 800 MHz, Intel
Solaris 8 (no patches applied)    450 MB / 150 MB                               300 MHz, RISC

Using the parameters outlined previously, the largest amount of disk space available for use as an OCIF component proxy RAID is 51.6 GB. In order to determine the effectiveness of using concatenation, both in storage and recovery, the authors used an Audio Video Interleave (AVI) file 64.4 GB in size, forcing the OCIF RAID to occupy space on all five test systems. Using the AVI file also allowed for immediate feedback on the OCIF RAID's ability to correctly reconstitute the file for use. Although a RAID data block can consist of a single byte, the authors chose the standard RAID block size of 64 kilobytes; in future research this will be modified to equal a multiple of the hard disk sector size of 512 bytes.

In order to simulate the virtual environment, two systems were configured to emulate an OCIF component and a Ping Utility. To initiate each test, the AVI file located on the virtual OCIF component was deleted or overwritten with a corrupt file. The virtual OCIF component would receive an error message and notify the Ping Utility, which would then run the OCIF RAID routine and rebuild the AVI file on the virtual OCIF component. A total of 50 tests were performed in order to verify the accuracy and efficiency of the system's ability to reconstitute the AVI file.

First results identified a problem with the Linux block driver routine used for read/write functions, ll_rw_blk. The standard routine:

    ll_rw_blk (blocks) {
        OCIF-checks (PL_*);
        for-each PL in blocks {
            make_request (PL_block);
        }
    }

was modified as follows to support striping across non-standard disk sizes:

    ll_rw_blk (blocks) {
        OCIF-checks (PL_*);
        for-each PL in blocks {
            if (block is-in pl-device)
                md_map (block)
        }
        for-each PL in blocks {
            make_request (PL_blocks);
        }
    }

The altered block routine allowed the system to store each block successfully by verifying the data within each system, or proxy. Anomalies did occur and at this time remain unexplained: in multiple tests using the same parameters and variables, different results were observed. In these instances the OCIF RAID would reconstitute the AVI file, but with varying sizes; each simulated proxy was being used for the same number of blocks, yet the file varied in size and functionality once reconstituted. More research will be conducted to identify the cause(s) and to increase the sample size in both numbers and system types.
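The reconstitute-and-verify loop used in these tests can be mocked in a few lines of Python. Everything below (the dictionary-backed proxies, the round-robin placement, the hash check) is a test-bench assumption rather than the OCIF implementation, but it shows how a size or checksum comparison exposes the reconstitution anomaly described above.

    import hashlib

    BLOCK = 64 * 1024

    def store(data, proxies):
        # Place each 64 KB block on the next proxy in turn (round-robin for
        # brevity; the OCIF RAID spans rather than stripes).
        for i in range(0, len(data), BLOCK):
            proxies[(i // BLOCK) % len(proxies)][i // BLOCK] = data[i:i + BLOCK]

    def reconstitute(proxies, nblocks):
        # Pull the blocks back in order and re-join them.
        return b"".join(proxies[b % len(proxies)][b] for b in range(nblocks))

    original = bytes(range(256)) * 2048     # ~512 KB stand-in for the AVI file
    proxies = [{}, {}, {}, {}, {}]          # five simulated proxy stores
    store(original, proxies)

    rebuilt = reconstitute(proxies, -(-len(original) // BLOCK))
    assert len(rebuilt) == len(original)    # the size-anomaly check
    assert hashlib.sha256(rebuilt).digest() == hashlib.sha256(original).digest()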


5. Summary

Cyber warfare capabilities are increasing exponentially, and recent attacks on infrastructures have identified autonomous tools as the culprit. Simultaneously, cyber security firms, governments, and militaries have greatly improved their ability to identify and eradicate new malware and other malicious logic in minutes or hours rather than days or weeks. To create an autonomous cyber weapon with operational resiliency that is able to defeat attribution through anonymity, it will be necessary to provide a cloud-based RAID for reconstitution when required. This capability must be self-sustaining while using non-standard architectures and resources. This research is a continuation of the OCIF development project presented at the ECIW 2010 conference. The results demonstrate that a cloud-based RAID system can be developed to support non-standard RAID requirements by using concatenation, or spanning, of a large number of disks. Testing identified several issues and anomalies that remain to be explained and corrected, but the results were conclusive that disparate systems can be used remotely to reconstitute complex file structures.

References
Abu-Libdeh, Hussam, Princehouse, Lonnie, and Weatherspoon, Hakim. RACS: A Case for Cloud Storage Diversity. SoCC '10, June 2010, Indianapolis, Indiana. ACM 978-1.
CS.Cornell.edu. RACS: Redundant Array of Cloud Storage. http://www.cs.cornell.edu/projects/racs/. June 2012.
DEFCON 2012. Call for Autonomous Cyber Attack. https://forum.defcon.org/blog.php. June 2012.
ITR. Internet Traffic Report. http://internettrafficreport.com/. August 16 2012.
Least Authority Enterprise (LAE). Redundant Array of Independent Clouds. https://tahoe-lafs.org/~marlowe/TWN31.html. June 2012.
Patterson, D. A., Chen, P., Gibson, G., and Katz, R. H. A Case for Redundant Arrays of Inexpensive Disks (RAID). IEEE COMPCON 89, San Francisco, Feb-Mar 1989.
Prince, Brian. Stuxnet Worm: Nine Facts Every IT Security Pro Should Know. eWeek. September 30, 2010.
Righi, Diego. Redundant Array of Independent Clouds. https://tahoe-lafs.org/~marlowe/TWN31.html. June 2012.
Sale, Richard. Stuxnet Loaded by Iran Double Agents. ISS Source. http://www.isssource.com/stuxnet-loaded-by-iran-double-agents. April 11 2012.


