
Volume 1, 2020 Conference Proceedings of the Indian Conference on Artificial Intelligence and Law, 2020 (IndoCon 2020)

© Indian Society of Artificial Intelligence and Law, 2020


AI & Glocalization in Law, Volume 1 (2020)

Volume: 1
Year: 2020
Date of Publication: October 17, 2020
ISBN: 978-81-947131-1-1 (Online)
ISBN: 979-86-963116-9-2 (Paperback)
Editors: Abhivardhan, Akash Manwani, Sameer Samal, Aditi Sharma, Kshitij Naik.

All rights reserved. No part of this publication may be reproduced, distributed, or transmitted in any form or by any means, including photocopying, recording, or other electronic or mechanical methods, without the prior written permission of the publisher and the authors of the respective manuscripts published as papers, except in the case of brief quotations embodied in critical reviews and certain other noncommercial uses permitted by copyright law. For permission requests, write to the publisher, addressed “Attention: Permissions Coordinator,” at the address below.

Printed and distributed online by the Indian Society of Artificial Intelligence and Law in the Republic of India. First edition, Volume 1, 2020.

Price (Online): 250 INR
Price (Paperback): 10.8 USD

Indian Society of Artificial Intelligence and Law, 8/12, Patrika Marg, Civil Lines, Prayagraj, Uttar Pradesh, India – 211001

The publishing rights of the papers published in the book are reserved with the respective authors of the papers and the publisher of the book. For the purpose of citation, please follow the format for the list of references as follows: 2020. AI and Glocalization in Law. Prayagraj: Indian Society of Artificial Intelligence and Law, 2020. 978-81-947131-1-1, 979-86-963116-9-2. You can also cite the book through (recommended).

For Online Correspondence purposes, please mail us at: | For Physical Correspondence purposes, please send us letters at: 8/12, Patrika Marg, Civil Lines, Allahabad, Uttar Pradesh, India - 211001


Preface

The Indian Conference on Artificial Intelligence and Law, 2020 (IndoCon 2020) was the flagship conference organized by the Indian Society of Artificial Intelligence and Law from October 1, 2020 to October 4, 2020. Amidst the COVID-19 pandemic, the Conference was organized in a virtual (online) capacity. The Conference saw the participation of 300+ viewers, 40+ delegates in the AI General Assembly, approximately 10-20 presenters from the academic community, and a diverse community of experts and eminent personalities in the fields of AI Ethics, Technology Diplomacy, International Law and Relations, and Fintech. The Conference Proceedings of IndoCon 2020 cover the research papers presented in the Track Presentations, the Resolutions, Position Statements and Reports presented in the AI General Assembly, and the Reports that emerged from the Panel Discussions of the Conference. I must honestly state that I am indebted to the Core Team of the Conference that made this event successful, comprising Baldeep Singh Gill, Vice President of the Conference; Sameer Samal, Convenor, Innovation; Akash Manwani, Convenor, Academics; Aditi Sharma, Convenor, Partnerships; Kshitij Naik, Convenor, Publicity; Prof Suman Kalani, Chief Research Expert, ISAIL; and Trishla Parihar, for their utmost support and motivation.

Abhivardhan
President, Indian Conference on Artificial Intelligence and Law, 2020



About the Indian Conference on Artificial Intelligence and Law, 2020

The Indian Society of Artificial Intelligence and Law organized a 4-day-long virtual conference from October 1 to October 4, 2020, focused on discussing the enormity of the challenges our society faces in technology law and ethics in India. The Indian Conference on Artificial Intelligence and Law 2020 (INDOCON) is the flagship event of ISAIL. This conference was the forum where ISAIL provided a platform to AI experts and students from various countries, who discussed ways to offer solutions to various normative and unconventional issues related to Artificial Intelligence, Law and Ethics. The conference was held in three different parts, i.e. the Artificial Intelligence General Assembly (AIGA), Track Presentations and Research Panels. The highlights of these are as follows:


Artificial Intelligence General Assembly

The AI General Assembly is the democratic forum within the Indian Conference on AI and Law, 2020. The forum invited students, professionals, and institutions/organizations interested and experienced in Artificial Intelligence and Policy, and was organized online across the 4 days of the Conference. From the selected members, Mr. Sanjay Notani, Partner at Economic Laws Practice, was the elected President, and Dr. (H.C.) Sandeep Bhagat, IBM India, was the elected Vice President. Some of the key agendas discussed during the General Assembly were:

1. The Scope of Splinternet & 5G Governance in Multilateral Governance & Data Sovereignty Policy.
2. The Legal and Political Repercussions of Privatization of Autonomous and Augmented Systems in Space and Conflict Activities.
3. Assessing the Scope, Liability and Interpretability of the Paralytic Nature of AI Ethics Boards in Corporeal Entities.
4. The Role and Framework of Plurilateralism in AI-enabled Crimes and Judicial Governance.

The Assembly passed four recommendation reports on these agendas and finally a resolution, which is available at the official website of the conference. The reports and the resolution have been featured in this book as well.


Track Presentations

The Track Presentations were based on innovative ideas related to Artificial Intelligence and Law. We witnessed participation from across the globe. The three-day track presentations were chaired by Mr. Sebastien Lafrance, Crown Counsel for the Public Prosecution Service of Canada; Abhivardhan, Founder and Chairperson of ISAIL and Internationalism; and Prof. Suman Kalani, Professor of Law at SVKM's Pravin Gandhi College of Law, Mumbai. The themes of these tracks were beautifully expressed by the presenters, who portrayed various dimensions of AI and Law.


Research Panels

The series of 4 panel discussions in the conference endorsed and opened conversations on the convergences between AI and law & policy. We invited AI experts as guests from various countries. They discussed the following:

1. Decrypting AI Regulations and its Policy Dynamics

Prof. Christoph Lütge, Director, Institute for Ethics in Artificial Intelligence; Ms. Luna De Lange, Partner and Data Protection Officer at KARM Legal; Mr. Sebastien Lafrance, Crown Counsel for the Public Prosecution Service of Canada; and Mr. Sushanth Samudrala, Cyber Law Expert, were the esteemed panellists.

2. Algorithmic Diplomacy, Geopolitics, and International Law: A New Era

Mr. Eugenio V. Garcia, Senior Advisor to the President of the UN General Assembly; Mr. Emmanuel Goffi, Director of the Observatory on Ethics & AI, Institut Sapiens; Mr. Roger Spitz, Founder of Techistential; Ms. Nicol Turner Lee, Senior Fellow, The Brookings Institution; Mr. Bogdan Grigorescu, AI Platform Manager, Combined Intelligence; and Mr. Abhivardhan, Chairperson, ISAIL and Internationalism, were the esteemed panellists.

3. Algorithmic Trading and Monetization: Policy Constraints for Disruptive Technologies

Dr. Raul Villamarin Rodriguez, Dean of the School of Business, Woxsen University; Ms. Akshata Namjoshi, Lead of Fintech, Blockchain and Emerging Tech at KARM Legal Consultants; Ms. Arletta Gorecka, Researcher at the University of Strathclyde; Ms. Pooja Terwad, Founder of Pooja Terwad and Associates; and Mr. Ratul Roshan, Associate at IKIGAI Law, were the esteemed panellists.

4. Artificial Intelligence and its Synchronous Implications to Ecological Solutions

Mr. Rodney D. Ryder, Founding Partner, Scriboard; Mr. Pinaki Laskar, Founder, Fisheyebox; and Mr. Raghav Mendiratta, Researcher at Columbia Global Freedom of Expression and the Stanford Center for Internet and Society, were the esteemed panellists.




Core Team of IndoCon 2020

• Abhivardhan, President of the Conference
• Baldeep Singh Gill, Vice-President of the Conference
• Prof Suman Kalani, Chief Research Expert, ISAIL
• Akash Manwani, Convenor, Academics
• Kshitij Naik, Convenor, Publicity
• Aditi Sharma, Convenor, Partnerships
• Sameer Samal, Convenor, Innovation
• Aryakumari Sailendraja, Chief Operations Officer, ISAIL
• Prafulla Sahu, Chief Futuring Officer, ISAIL
• Vrushali Marchande, Rapporteur for the Research Panels 1 & 2
• Mayank Narang, Rapporteur for the Research Panels 3 & 4

Executive Board and Delegates, AI General Assembly, 1st Session, 2020

• Mr Sanjay Notani, President, AI General Assembly, 1st Session, 2020
• Dr (H.C.) Sandeep Bhagat, Vice President, AI General Assembly, 1st Session, 2020

Delegates present in the Assembly, 1st Session, 2020: Anurati Bukanam, Arundhati Kale, Dev Tejnani, Diksha Mehta, Emmanuel Goffi, Kashvi Shetty, Taying Nega, Ankit Shripatwar, Nagbhushan Hanagandi, Samar Singh Rajput, Kumar Shubham, Prakhar Prakash, Prantik Mukherjee, Sanad Arora, Shruti Somya, Swatilina Barik, Mridutpal Bhattacharya



Table of Contents

AI General Assembly, 1st Session 2020: Statements, Reports and Resolutions
1. Explanation Report on the Agendas of the AI General Assembly, 1st Session (2020)
2. Separate Position Statement by Swatilina Barik
3. Recommendations Report approved by the AI General Assembly, 2020, 1st Session on October 1, 2020
4. Recommendations Report approved by the AI General Assembly, 2020, 1st Session on October 2, 2020
5. Recommendations Report approved by the AI General Assembly, 2020, 1st Session on October 3, 2020
6. Recommendations Report approved by the AI General Assembly, 2020, 1st Session on October 4, 2020
7. Resolution adopted by the AI General Assembly, 2020, 1st Session on October 4, 2020

Policy Reports on the Research Panels of the Conference
8. Decrypting AI Regulations & Its Policy Dynamics – Panel 1, October 1, 2020
9. Algorithmic Diplomacy, Geopolitics & International Law: A New Era – Panel 2, October 2, 2020
10. Algorithmic Trading & Monetization: Policy Constraints for Disruptive Technologies – Panel 3, October 3, 2020
11. Artificial Intelligence and its Synchronous Implications to Ecological Data Solutions – Panel 4, October 4, 2020

Papers Presented in the Track Presentations of the Conference
12. Dr Robot went crazy – Liability issues arising from a medical device’s hackability – Tomás Gabriel García-Micó
13. “Alexa, I will see you in Court” – AI and Freedom of Speech and Expression – Ritwik Prakash Srivastava
14. Laws Regulating Facial Recognition in India – Paranjay Sharma
15. Digital Diplomacy and The Role of Governance in International ‘CyberSphere’ – Aditi Sharma
16. AI & Cybersecurity in India: A Critical Review – Ankita Malik


17. Impact of Fake News, Misinformation and Computational Propaganda on Electoral Politics: Some Global Perspectives – Soundarya Rajagopal
18. Manoeuvring AI in International Arbitration – Ankita Bhailot
19. Regulation of algorithmic trading, scams and flash crashes – Sameeksha Shetty
20. The Age of AI and the New Global Order – Dyuti Pandya
21. Artificial Intelligence and Law: Competition Law and Algorithmic Pricing – Rishabh Arora & Vasundhara Mahajan
22. Steering Towards a Digitalized Education System – Kapil Naresh, Ezhil Kaviya, Aarathi Manoj, Madhumidha, Ooviya Sekaran

Transcripts of the Conference
23. Transcripts of the Research Panels of the Conference



AI General Assembly, 1st Session 2020: Statements, Reports and Resolutions


1. Explanation Report on the Agendas of the AI General Assembly, 1st Session (2020)

Sameer Samal, Editor
Convenor, Innovation, Indian Conference on Artificial Intelligence and Law, 2020

Report received by:
Abhivardhan, President of the Conference
Mr Sanjay Notani, President, AI General Assembly, 1st Session, 2020
Dr (H.C.) Sandeep Bhagat, Vice President, AI General Assembly, 1st Session, 2020

Abstract. This is a General Report drafted by the Convenor, Innovation of the Indian Conference on Artificial Intelligence and Law, 2020 to elaborate and brief the Executive Board on the agendas of the AI General Assembly, 1st Session. The President of the Conference gives assent to disseminate this report, and acknowledges that the contributions to it by Anurati Bukanam, Prantik Mukherjee, Shruti Somya, Prakhar Prakash, Nagbhushan Hanagandi and Dev Tejnani, the nominated delegates of the Assembly for the year 2020, have been exemplary.


Agenda 1: The Scope of Splinternet & 5G Governance in Multilateral Governance & Data Sovereignty Policy

The phenomenon of the Splinternet is also referred to as the ‘balkanisation of the internet’; essentially, it denotes the division of the internet on a regional basis according to each region’s social and political factors. The practice of the ‘Splinternet’ is gaining popularity among many countries, and different countries have different motives for implementing it. The Chinese Government, which has always been fixated on the idea of a surveillance state, has long been a supporter of this policy: even its social media apps are different from those used by the rest of the world, and the ruling government exercises heavy control over them to police its citizens and quell any voices of dissent against the government. Many other countries like Ethiopia, Saudi Arabia, Vietnam and Myanmar also follow strict internet regulation policies, stamping down on individual data privacy rights and on any mediums on the internet that critique the ruling government, under the garb of national interests. Even Russia has followed in China’s footsteps and drastically changed its internet surveillance policy to a more rigorous one: all foreign sites are labelled as foreign agents, social media apps with encrypted texts have been banned, and the country has even set up its own private DNS. The Splinternet has become a well-accommodated phenomenon in the European Union too, where the EU GDPR indirectly impresses upon the creation of a separate internet for the EU. Many American sites are banned because of the EU GDPR, and much of the content on the internet cannot be accessed because of the GDPR’s strong advocacy for copyright laws; with laws like the “right to be forgotten”, search engines like Google often provide the citizens of the EU with inaccurate searches at the cost of protecting their privacy. When it comes to 5G governance, the millimetre waves require the data towers to be situated not more than 1,000 feet away from the 5G user; hence, adequate 5G use requires a high-density allocation of data towers in a particular region. As cited by Washington, these data towers could also be used as spying equipment by the Chinese Government. Articles 7 and 14 of the Chinese National Intelligence Law, passed in 2017, mandate that the state can demand the support of an individual or an organisation to “support, co-operate with and collaborate in national intelligence work”. The Chinese Government can thus demand that Huawei submit all the data it has gathered on its customers, which poses a major privacy risk to all its users.
With governments worrying about foreign companies spying on their citizens and the phenomenon of the Splinternet taking over the world, it can be argued that there is not much scope for multilateral governance left in these two areas; it is almost non-existent. Data sovereignty as practiced by local governments would take precedence in the name of national security, and the internet as we have it right now would stand divided, which might not necessarily be a bad thing, unless the entire world decides to follow an iron-veil policy with regard to the internet, like China and Russia, in order to police its citizens. A splinter is a fragment of a more massive object, and a splinternet is a fragmented internet. Fragmentation depends on multiple factors. Cyberbalkanization is a synonymous term, a hybrid of the internet and the Balkans, a part of Europe that was historically subdivided by languages, religions, and cultures. It could be a threat to democracy because it displays controlled information to users, presented according to criteria like government policies and business motives. The “worldwide web” as a free and open entity is under threat, and the Splinternet is no longer just a concept. This reality is starker in the context of China and Russia, which are the most prominent internet disruptors. In 2016, the UN declared “online freedom” to be a fundamental human right that must be protected. This was not legally binding; the motion passed with consensus, and therefore the UN was given only limited power to endorse an open internet system. By selectively applying pressure on governments that are not compliant, the


UN can now enforce digital human rights standards through transparent monitoring, ranking countries on that basis, and aligning more closely with internet companies to access the vast information they hold. European Union regulations under the GDPR prioritize privacy over freedom of expression. The General Data Protection Regulation (GDPR) is the latest splintering factor. Europe’s “right to be forgotten” laws will make search engines like Google inaccurate and non-representative of what is actually on the internet. Thus, the web is not worldwide anymore, and it will never be. The Splinternet is the result of national governments extending their rule to cyberspace. The next-generation mobile network standard 5G would allow a large amount of data transfer with minimum lag. It would enable faster and more reliable digital communication and video conferencing facilities in the remotest corners of the globe. 5G will further add to the scope of the Splinternet. Facebook’s attempt through “Free Basics” and a similar effort by Reliance were highly debated and ultimately rejected by Indian policymakers, on the grounds that the internet shall not be regulated and must not be discriminatory, and that internet surveillance and internet censorship shall not be present. However, the Splinternet is slowly making its way in under the Indian government’s compulsion as well, through attempts to regulate content, more so when the government faces opposition on social media, which remains the only avenue of protest amidst the present COVID pandemic. Nations that have gone the furthest to isolate their national internets from the international internet are in a far better position to survive all-out cyberwar than the countries (like the US) that are naively pushing for a single global internet. Thus, the Splinternet should be accepted as a reality, and along with the apprehensions it poses, the focus should lie on how it can be productively used for better governance.
Countries differ in morality and in their sense of right and wrong; an easy example is the difference between censorship laws under a conservative and a liberal government. The Splinternet can protect and promote national interest and restrict anti-social views and, say, fake news on the internet. However, the biggest challenge lies in who shall have the responsibility and the authority to do that. The tech giants are aiming at IoT for a better future and for the benefit and advancement of people and their lifestyles. With this, the world is heading toward 5G connectivity, where predictions are that 5G will offer up to 600 times faster internet speed compared to 4G and will advance the digital front while keeping it cost-efficient. But, surprisingly, in the era of the 5G network the world is adopting the concept of the splinternet. The “splinternet,” where cyberspace is controlled and regulated by different countries, is no longer just a concept but now a dangerous reality. The splinternet splits the internet into different regions or nations, as when the government of a particular country bans apps and websites either for political profit or for national security. India recently banning 118 apps for national security is one of the best examples of the Splinternet. The USA banning Huawei, citing security concerns, and China not using WhatsApp, Facebook or Instagram but its own applications instead, are the most prominent cases of the splinternet. Last year China issued two new censorship rules, identifying 100 new categories of banned content and implementing mandatory reviews of all content posted on short video platforms.



The result is likely to bifurcate the IoT, dividing the world between countries willing to use Chinese telecoms gear and those that share America’s concerns over security. If information cannot be exchanged easily between different networks, we will have to go back to paperwork, printing documents and re-entering them into each system, which will defeat the essence of the internet. Vincent Peng, a board member at Huawei, warned that this could result in a disastrous “digital Berlin Wall”. Apart from China and the USA, Russia has been another big name on the topic of the splinternet: earlier last year, Russia announced a plan to disconnect the entire country from the internet to simulate an all-out cyberwar. Slowly and gradually, every country is engaging in substantial politically-motivated filtering. The results of the splinternet and digital authoritarianism stretch a long way beyond the populations of these individual nations. As the web becomes more splintered and data more controlled across the globe, we risk the decay of democratic frameworks, the corruption of free markets, and further digital deception campaigns. A new set of global guidelines is genuinely necessary, one that combines the advantages of openness with the desire to ensure online platforms work as a public stage. The idea of the free and open web is both a value, akin to basic human rights, and a commodity. A political settlement should be arranged between countries, and a memorandum of understanding must be signed between them guaranteeing the free flow of the internet to citizens. An international organization must be set up to ensure a low-barrier internet and to see that the memorandum signed between the countries is followed. With the assistance of western internet organizations, track must be kept of every administration's endeavours toward censorship creep and government overreach. Like other ranking lists, countries must be ranked on their achievement of a free flow of the internet.
This might help give all citizens full information and save the ‘World Wide Web’ from distortion. The borderless nature of the Internet has enabled the development of the digital economy and revolutionary technical innovations. It has become easy for companies (large and small) to achieve a global reach. However, in the wake of the Edward Snowden whistle-blowing leaks that revealed mass foreign surveillance wiretaps of the Internet, a number of governments are considering implementing data localisation laws. In practice, data localisation laws are highly likely to fragment the Internet, resulting in inefficiencies and greater costs. Many recent technological advances may be affected or impeded by such laws. The US was in favour of a global internet infrastructure, but recently it has been following China's path in enhancing its own splinternet policies. Meanwhile, several countries have told Apple and Google to exclude certain apps from their stores: in total, in the one-year period covered by Apple’s transparency reports, no fewer than 15 nations requested the removal of a total of 1,311 apps in 150 separate petitions, alleging violations of local laws. Apple challenged all or part of 12 of these requests, and ended up making 851 apps unavailable in several countries. The company does not specify which applications were removed, but provides some details on the reasons alleged, ranging from dissemination of illegal content to gambling, violation of privacy laws or operating outside government policies. Many of these requests


came from totalitarian states such as China, which requested the vast majority of withdrawals, along with Saudi Arabia, Pakistan and Russia, as well as from capitalist democracies such as India and Norway. Increasingly, national sovereignty is the main restriction on the scope of a truly global internet, meaning we now face a future of regional networks. This is merely a symptom of deeper problems, such as the impossibility of global solutions to the pandemic or the climate emergency. Supranational institutions and treaties such as the World Health Organization, the United Nations and the Paris Agreement see their role, their operations and their commitments progressively restricted, while governments give much more importance to holding on to power. If data localisation becomes commonplace, determining what data must be stored in which country may be easier said than done. If Company X is headquartered in India but has subsidiaries around the world, does that mean that under data localisation laws all of Company X’s data would have to be stored in India by the cloud service providers it uses to manage its online operations? Alternatively, should localised data be stored in the country of each Company X subsidiary? Will the country of incorporation matter, or just the country where the relevant office or employee is located? There are no clear answers as matters currently stand. Much will depend on how each jurisdiction drafts its data localisation laws on the issue. Data localisation laws create a significant barrier for companies seeking to expand their international presence. Widespread data localisation will require global service providers to build or rent physical infrastructure in each jurisdiction that requires data localisation. The associated costs and administrative burdens will likely make the provision of many global services currently taken for granted by Internet users impractical.
So, in future, if AI is integrated into telecommunication technology and the algorithms are set in a manner that enforces data localisation, then there is a high chance that the splinternet will become a reality. The onslaught of recent developments demonstrates that India will shape cyber policy debates. Among rising economies, India is uniquely positioned to exercise leverage over international tech firms because of its sheer population size, combined with a fast surge in users coming online and the country’s massive gross domestic product. India occupies a key seat at the data governance table alongside other players like the EU, China, Russia and the United States, a position the country ought to use to push its interests and those of other similarly placed rising economies. For many years, the Indian population has served as an economic resource for foreign, largely U.S.-based technological giants. However, India is moving toward a regulatory strategy that reduces the autonomy of those firms so as to pivot away from a system that has recently been termed “data colonialism”, wherein western technology companies use data-driven revenue, bolstered by data extracted from customers in the global South, to consolidate their international market power. The policy thinking underpinning India’s new grand vision still has some gaps. Data localisation has become a reality, and every nation is framing laws for it. Data localisation laws mandate that businesses process and store data within the jurisdiction where they operate, and if a violation is found, they may be fined heavily. Localisation laws require organizations to store data at the place where they perform their business. Hence, this impacts corporate houses, as it creates a barrier to expanding their business internationally. Further, in order to expand their operations, they have to build that infrastructure in every place where they want to conduct their business and additionally comply with these laws. Hence, the costs of administration differ from place to place and become considerable. The splinternet is defined as the fragmentation of the internet into smaller sections due to a variety of factors; it is literally the splintering of the internet. What one must keep in mind here is that the splinternet has already started taking shape and will result in a global division requiring our immediate attention at an international level. The important question that comes up here is what must be done in the global technology domain to address these issues. Firstly, cross-country governance will be the first step that must be taken. Take a cue from history: the period after World War II saw the emergence of multilateral treaties and agreements which have successfully addressed many issues to date. Therefore, a framework of multilateral governance that binds countries together to address the international technology domain and upcoming fields like AI will be the most essential step, even though there have been instances where such agreements have not worked in favour of developing nations, which have been forced to accept conditions that the developed nations set forth or vetoed. This makes it important, on sensitive topics like cyber security, that countries feel secure enough to share information and put aside their individual interests for global security.
Compromise and careful diplomacy will have to be exercised for the sake of respect for privacy and personal data. Additionally, these multilateral agreements must also give the power of establishing policies and regulations to the developing countries, so as to make sure they are not sidetracked in the negotiations. It is also essential that implementation of these policies and regulations be done with immediate and full effect in order to benefit from them. When we talk about the splinternet, one must not forget that apart from multilateral governance, the data sovereignty of countries will also be affected at home. Businesses and individuals will feel the effects of the change strongly, and data protection policy will also see changes. The most recent example is China setting up the ‘Great Firewall’, which regulates outside traffic within the boundaries of the country. This act of trying to safeguard the internet, and thereby control the people accessing such information, is becoming highly preferred. Such a scenario is bound to raise questions on humanitarian grounds and also to challenge the sovereignty of states. One may say that the internet gives the state the right to safeguard and impose its sovereignty, but what will this mean for the world? A splintered global dynamic? 5G is the next important issue we must address in the wake of technology. 5G can be considered as the division of the cellular network into small geographical areas called cells, with every cell connected via radio waves through a local antenna in the cell. The major advantage of this is the greater bandwidth and higher download speeds on all cellular networks. It is important to note that multilateral agreements can prove very beneficial in developing the technology via products which provide the best results. A recent example is that of Telefónica, a Spain-based company, which signed multilateral agreements with Telefónica I+D, the Spanish Ministry of Industry, Energy and Tourism, the Regional Chair for Education, Youth and Sports of the Madrid region, IMDEA Networks, Ericsson and AMETIC (the Association of companies within the Electronics, IT, Telecommunications and Digital Content industries) with a view to supporting the development of 5G technologies, products, and services in the framework of the Advanced 5G Network Infrastructure for Future Internet Public-Private Partnership (5G PPP) program. This step has been widely applauded and aims to address other important initiatives as well. Although there has been a recent surge of criticism about the inefficiencies of multilateral agreements, it is important to note that in circumstances where everyone can benefit from each other, multilateral agreements still remain the preferred mode of negotiation. Conclusively, it is important to have laws in place to protect personal data, and it is vital that this data be used in the most ethical way, but to what degree is the question which needs to be addressed. The data sovereignty policy of every country aims to protect its citizens and businesses from misuse, and in doing so establishes restrictions which may be hurtful.


Agenda 2- The Legal and Political Repercussions of Privatization of Autonomous and Augmented Systems in Space and Conflict

Space technology has always been an integral part of humankind's progress. From satellite communication, to weather reports, to exploring the universe, space has always played a very crucial role. But one can see a considerable difference from the beginning of the space age. Earlier there used to be only two powers: the United States of America and the Soviet Union. Space activity has consistently been an administrative affair, as it was something highly required by every individual and was never seen from a commercial point of view. Recently, however, we can see a move toward commercialization and, inevitably, toward the privatization of space activity. Lately, there have been numerous developments in the augmentation of space and conflict activity, on both the national and international fronts. Government-led agencies have achieved amazing things since the Moon landings, but they simply do not have the in-house capacity to address a country's growing requirements. Today, the space program is not just about civilian applications for remote sensing, meteorology, and communication, as in the early decades. The space sector and its requirements have grown enormously in the last decade to include television and broadband services, space



science and exploration, space-based navigation, and, of course, defence and security applications. Around 32 States and 2 international organizations (ESA and Eumetsat) have registered space objects with the United Nations. More than 30 States have their own space programs, and various States have procured satellites in orbit. This makes evident the significant increase in the use of outer space. One can thus observe a clear tendency towards commercialization and even privatization of space activities. The privatization of space is, to some extent, an extraordinary idea. Public funding is not the way to genuine advancement: real space exploration needs a framework equipped for delivering income and profit. This will require private undertakings to venture outside administrative agreements and build a space-faring revenue industry. The private sector, and especially start-ups, stands a great chance in the exploration of the solar system. They have an advantage in terms of low-cost operations, which proves a huge incentive for the government. A private company can also make decisions and fund projects much more efficiently than the government. SpaceX became the first private enterprise to carry out its own successful launches, a landmark moment for the private space sector. Not long after, Jeff Bezos's Blue Origin entered the game of the private space sector. But there are unintended consequences of the privatization of space. Firstly, boosted by cheaper launch prices and new microsatellite technology, which has seen devices shrink to the size of a loaf of bread, companies are now launching more and more satellites into space, and that has consequences: the small area of space around our planet is becoming quite crowded, and the potential for damaging and expensive collisions has increased. Secondly, there are accidents involving private spacecraft.
This is not a hypothetical situation but the case of SpaceX during its testing of the Falcon 9. Who will be held liable if part of the vehicle stops working in outer space? If someone dies in the process? A government is usually not held liable to the same degree as a private enterprise. Since there is no alternative to a space program, accidents are accepted as part of it, but the same is not so smooth for a private enterprise: overnight it is answerable to stakeholders, the government, and many more. According to the Outer Space Treaty, the launch state bears the cost of an accident, and since the private enterprise may not be insured, recourse against it may not be satisfactory; the burden will then fall entirely on the launch state. This ambiguity arises because, when the treaty was written, private space companies did not exist. Thirdly, there is the concern of monopoly. Now that private companies have entered the field, and even more will enter in the coming years, there are high chances of monopoly in the business. It will not be surprising to see the powerful throwing other organizations out of business, which will eventually lead to other problems, such as an increase in cost. As mentioned, space activity is much needed by everyone. Right from television


and telephones, and now even the internet, all rely on satellites. If a monopoly enters the field, there is a chance that these basic needs will be provided at an extremely high price. It is evident that in the coming years there will be a surge of activity in space. With a greater number of private enterprises, along with government organizations, entering space, it becomes necessary for the government of each state to enact proper national space legislation. The use of space by actors must be regulated; this is extremely important for the environmental protection of space. Traffic controllers for space must be set up. Proper regulations must be made by the government for private entities as well, and these regulations must be in accordance with international space law. Private organizations must be adequately insured, and a proper framework and jurisdiction are required to address accidents caused by a private entity. An international body must be formed to ensure the fair and proper functioning of space activities and to avoid foul play by the strong parties. A separate space justice board should be formed to deal with space law. The privatization of autonomous and augmented systems in space and conflict activities could rupture the development of a globalised society and lead to the accumulation of resources in the hands of a few private players, which might perpetuate an extremely discriminatory environment for any other individual or organisation who might want to venture into the development and application of artificial intelligence in space-related activities. Moreover, the AI so developed will let the private players controlling it have leverage over the government and society in general, thus placing them on a higher pedestal than the ruling body of a nation itself.
Once AI is fully developed, such players would possess vast amounts of information in the field of space, beyond compare with any other organisation or government. This would aid the private organisation in exploiting the resources space has to offer in whichever way it wants and reaping the benefits for itself. Opaque privatisation of AI should be stopped at all costs, and an inclusive, diverse and open model of AI development should be adopted when it comes to space conflicts. International laws such as the Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, Including the Moon and Other Celestial Bodies, should be adhered to before considering the privatisation of artificial intelligence in space, and the principles propounded in that law, "free exploration" and "permission-less innovation", should be kept in mind before regulating private entities on the development of autonomous and augmented systems in space. If the privatisation of such technology is encouraged, the private companies of the countries with the leading AI technologies might nudge local governments to invalidate such international conventions, leading to the formation of a socially inequitable society which might place a major chunk of the world's population in the face of harm. The privatization of the commons is the real tragedy. Beginning with land, water, and the genetic code, we are moving towards the privatization of many abstract things like



rights, privacy, and security. Private players make a profit by pushing the cost onto the rest of the community. The idea supporting privatization was that the person who owns the property will take better care of it. However, this does not seem to be happening at the commercial level. For a company, the ownership and control of resources can be traded, and at short intervals; profit remains the sole motive. It also results in a concentration of ownership in the hands of a few big corporates. In general, regions of Earth beyond any one nation's control (the atmosphere, the high seas, and Antarctica) have been viewed as globally shared resources, and the same principle has applied to space. These are the "global commons" shared among all nations and peoples of the Earth. As Elon Musk suggested, we are "at the dawn of a new era for space exploration". This privatization is in many ways both an inevitable and a positive evolution, but it is characterized by many juridical pitfalls. As the international community negotiates to navigate these pitfalls, fairness, international cooperation, and the peaceful development of outer space should always be given the most importance. The political repercussions of the privatization of autonomous and augmented systems in space and conflict activities could be as follows. Allowing private control of space resources could launch a new space race, in which wealthy companies, likely from developed countries, could take control of crucial resources. The US, Russia, and China are already developing weapons to defend their space assets in case any conflict in space arises. The legal repercussions could be as follows. Advocates are correct that defining property rights is a necessary precursor; still, it is not a binary choice between a "global commons" and private property. Instead, there is a universe of rights that deserves consideration, which could provide a proper foundation for sustainable development.
That is the reason why the US has not ratified UNCLOS, for example, which was agreed in 1982 and took effect in 1994. The jus ad bellum is the body of law that governs states' resort to force in their international relations; today, its most important source is the UN Charter. But coming to an international agreement regarding resource sharing would take time, energy, and a widespread willingness to view resources as common assets that should be collectively governed. All those ingredients are in short supply in a world where many countries are becoming more isolationist. With the emergence of the new space industry and private enterprises like SpaceX, many legal concerns have come into existence. There exists an international framework of space laws, such as the Outer Space Treaty, and there are ample policy regulations at the national level. In India, this sector has recently been opened to private players; however, implementation remains a concern. The defence and national security domain, long reserved for public sector units, is being opened to the private sector. This sector plays a vital role in foreign trade and is responsible for a significant part of India's trade deficit. The management of Antarctica has useful parallels with the regulation of space. The entire continent is governed by a treaty that has avoided conflict since its inception in 1959. It froze national territorial claims,


barring military and commercial activities. The continent is reserved for "peaceful purposes" and "scientific investigation." All this poses a more significant question of managing shared resources and the extent to which the global community should rely on privatization: to what extent should the future of human civilization be concentrated in the hands of a few global tech giants? Moving towards the trend of deploying autonomous weapons, the United States, the UK, and many other countries have also privatized a number of military and security functions by using private contractors to an arguably extraordinary degree. For example, at the high point of the conflicts in Iraq and Afghanistan, the US government employed more than 260,000 contractors, which at times exceeded the total number of US military personnel deployed in those two countries. Such contractors have performed numerous roles, from constructing military bases and refugee camps to cutting soldiers' hair, maintaining weapons on the battlefield, serving meals in mess halls, interrogating detainees, and guarding diplomats and military facilities. Policy-makers and scholars have challenged the legal implications of this shift, and many have called for increased oversight. This privatization, together with the use of autonomous or semi-autonomous weapons, has posed new challenges for international humanitarian law. It fragments and diffuses decision-making, thereby obscuring individual accountability. Accordingly, it will be a core challenge for law and governance in this century to evolve new legal regimes and new liability, oversight and accountability mechanisms to respond to the radically changing face of armed conflict. The main problem is not only the lack of a human agent but also the autonomy, or even partial autonomy, of the decision-making process, making it hard to hold any one person or group of individuals responsible.
Engaging contractors further diffuses the decision-making process. For example, with many remotely operated weapons systems, such as drones, contractors are involved in supplying the intelligence that leads to target selection, as well as occasionally calling in targets from the ground. Intelligence-gathering plays a key role in determining the target decision itself. A private or non-state contractor complicates the legal position further. The existing legal doctrine is not equipped to cope with criminal responsibility in the case of automated and outsourced weapons systems. Only human beings can be held criminally responsible for actions, but in the case of automated and outsourced weapons systems there is no clear human actor who can bear full responsibility. The governments of the United States and the United Kingdom have said that the last human to make a decision regarding the operation of the weapon should be held responsible. However, this is inadequate, because no one person can truly be said to make that decision. The doctrine of command responsibility allows seniors in the chain of command to be tried, but that theory is premised on the conception that the human being who actually pulled the trigger could also be tried for the same crime; yet here there is no human being who can accurately be said to have pulled a trigger. In particular, increasingly autonomous weaponry and growing privatization tend to spread responsibility for decision-making



across a larger number of actors who do not fit neatly within either a military command structure or an ordered bureaucracy. Thus, as multiple actors work together to gather intelligence, make targeting decisions, and deploy weapons, the authority and responsibility for decisions involving violence are diffused and fragmented. There should not be complete or excessive privatization of autonomous and augmented systems in space and conflict activities. Further, proper liability mechanisms must be worked out before we can rely on autonomous and augmented systems. The development of international space law began after the successful launch of Sputnik 1 in October 1957 and proceeded remarkably quickly in its initial phase. Once the United States and the USSR took the fundamental decision to address the international legal order for space activities not at a bilateral level but under the auspices of the United Nations, it was clear that this phase would be based on public international law. So it came as no surprise that five international agreements (the Outer Space Treaty of 1967, the Rescue Agreement of 1968, the Liability Convention of 1972, the Registration Convention of 1975, and the Moon Agreement of 1979) were all negotiated within the United Nations Committee on the Peaceful Uses of Outer Space and later adopted as resolutions by the United Nations General Assembly. The differences in the number of ratifications express the varying levels of acceptance of these international agreements. Strangely enough, even the consensus principle did not change the lack of support, in the case of the Registration Convention with only fifty-three ratifications to date, and particularly for the Moon Agreement with only fourteen ratifications.
However, this phase can, no doubt, be characterized as the successful phase of international space law-making, yielding five multilateral treaties of a rather profound character. Growing commercial and private space activities require, to a greater extent than ever, new national space legislation. It is well documented that the four ways of becoming a launching State considerably increase the likelihood of States being involved, and being held in control and liable, in case of an accident involving a private satellite whose launch they initiated. In such a case, the launching State bears the risk that possible recourse against the private enterprise may not be satisfactory because the enterprise is not insured; the sole loser of such activity would then be the launching State. It is also acknowledged that the Moon Agreement does not give further guidance on the precise features of such an international legal order for economic space activities. Rather, the provision of Article 11, by calling the Moon and the celestial bodies the common heritage of mankind, raises more questions than it answers. It is entirely open how the concept of the common heritage of mankind may be implemented in practice. Only paragraph 7 of Article 11 of the Moon Agreement provides some hint, by providing that all investing countries may benefit from the Moon's resources and that developing countries should do so as well. It does not, however, suggest or indicate any concrete criteria for the measurement of the benefits of either


aspect. It is therefore entirely open whether the international community will eventually commit to engaging at all in any economic activity on the Moon or other celestial bodies, or whether it will do so only under very strict restrictions and very severe liability provisions. There exists a preconceived notion that the privatization of any sector results in two inevitable changes: first, a rise in the prices of the sector's services or products, and second, a lack of accountability from the private company. In recent years, we have been given enough reason to alter this preconceived notion. By carefully analysing the legal and political repercussions of the privatization of autonomous and augmented systems in both space and conflict, we hope to change this preconceived notion and give practical reasoning a chance to dictate our views. At this juncture it is important to understand what the terms autonomous and augmented systems stand for. These are advanced forms of artificial intelligence which we do not yet see in our everyday activities. Augmented intelligence can be understood as assistive technology designed to aid humans in their tasks and enhance their output; it is considered second-tier intelligence and is used to make life easier for individuals. Autonomous intelligence, on the other hand, is the most advanced tier of artificial intelligence, used to do work that humans do. The simplest example is robotics: this field makes use of developmental principles to adapt to the ways humans work, following through by being able to understand human intelligence. Rare and still in its developing stages, this type of intelligence is the one we consider as intelligence with a mind of its own. It is these two systems that will be the torch-bearers in the years to come.
The simplest form of intelligence, which precedes both of these, is called assisted intelligence; it merely functions to assist the activities we do by adding a few steps to make things easier, and is driven only by pre-defined rules. Privatization can mean many things. One of the simplest examples is the conversion of government-owned enterprises into private-sector undertakings, reducing government participation either by a party acquiring a majority stake or by buying out the government's shareholding entirely. However, the term is not so limited: today, private companies working towards the common goals of the country and its growth, which the government would otherwise pursue, are also part of privatization. The question we must address here is the privatization of space. What would be the repercussions of such a step? Firstly, one of the major consequences of the privatization of space would be the ability to conduct space activities without having to run them past opposing powers or suffer the chains of bureaucracy. This benefits all, as space activities can be conducted fruitfully without the drawbacks of government intervention and slow official attitudes. One may argue at this point that



the government undertaking allows the activities to be governed, but does government mean governed? It is absolutely possible to hold private organisations accountable for their actions and ensure that they are governed in the right manner. Another important repercussion of private space activity would be the registration of intellectual property rights in the name of private companies. Patents in space, and the treaties, only speak of countries' patents registered at the ISS; when private companies create innovations which need to be patented, they will need a separate inclusion. Since private companies in space have in recent years proven to make honourable innovations, and have succeeded in ways often better than government-owned programmes, they deserve the goodwill of having patents registered for their work. Understandably, this could bring up many legal issues, and so a quote from a WIPO report published in 2004 offers an idea which can be considered: it states that the "best solution" to legal uncertainty related to intellectual property protection for the space industry is "to declare space and its accessories (for example, launch sites and vehicles) as a single territory with a single and uniform law and with a single and universal enforcement body." Thirdly, Indian participation in recent space projects by private organisations is one of the most beneficial results of privatization. After an invitation to join hands with ISRO and the private sector, India aims to build 30 satellites in this joint venture. This initiative has given birth to many private start-ups, thereby giving India a chance to explore space in a new light. When talking about the legal repercussions of privatization in space, it is easy to believe that many legal issues will need to be amended or discussed in order to privatize the space sector.
But it is also important to note that the treaties in place are flexible enough to be amended to include new aspects. It is important to keep in mind that the results of a private space programme are far more efficient and target-based than those of a government-owned one. Politically, giving a private entity the space to function gives the government time to address other issues and lets professionals and experts in the field take over the baton. To conclude, one must take into account what the past has to offer us, not merely what we believe the future would hold. In recent times, private institutions have proved to be efficient, beneficial, responsible and far more advanced in achieving the latest technology. The notion that the government is the only body that can work for the growth of the country and its citizens is an age-old one, and we must be sure to be practical on this front.



Agenda 3- Assessing the Scope, Liability and Interpretability of the Paralytic Nature of AI Ethics Board in Corporeal Entities

The use of AI in corporeal entities has become all the more prevalent with the advent of machine learning. This means that a major chunk of AI software is responsible for conducting the day-to-day financial activities of some big names in the industry, and that there arises a need for a larger role for ethics and liability in such circumstances. Such ethics must be based on the same principles that Richard Stallman stated when speaking of the four freedoms of free software: these corporates must provide the freedom to study how the program works. This becomes essential in determining the scope of the power AI software has in this regard. In Frank Pasquale's book The Black Box Society: The Secret Algorithms That Control Money and Information, the term "black box" refers to the various algorithms applied by corporate entities to handle the data exchanged between them and their consumers. Such complex algorithms, which process data, are kept away from public scrutiny and handled as trade secrets. The paralytic nature of such algorithms is well exemplified in algorithms which present racially biased results to African-American users, for example an algorithm performing targeted advertising. A Harvard researcher put forth this argument in her article "Discrimination in Online Ad Delivery", showing that searches for stereotypically African-American names display advertisements suggesting a criminal record. Beyond this, tech giants like Facebook have also been accused of altering public views with the help of social media, or of painting the government in a favourable light. This is why it is absolutely necessary for the government to bring such companies under scrutiny and audit, and to make legislation which penalises such companies' use of data when it leads to socially damaging algorithmic outputs, as cited above. The application of AI as a problem-solving tool offers great promise for advancement.
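The mechanism behind the biased ad delivery described above can be illustrated with a minimal, hypothetical sketch (the name groups, ad labels, and click counts below are invented for illustration): an ad selector that simply maximizes historical click-through rate will faithfully reproduce whatever bias is already present in its click logs, even though the optimizer itself is "neutral".

```python
# Hypothetical sketch: an ad selector that maximizes historical click-through
# rate (CTR) reproduces any bias already present in the click logs.

# Invented click logs: (clicks, impressions) per (name group, ad template).
click_logs = {
    ("group_x", "arrest-record ad"): (90, 1000),
    ("group_x", "neutral contact ad"): (50, 1000),
    ("group_y", "arrest-record ad"): (40, 1000),
    ("group_y", "neutral contact ad"): (80, 1000),
}

def ctr(clicks, impressions):
    """Historical click-through rate."""
    return clicks / impressions

def select_ad(name_group):
    """Pick the ad with the highest historical CTR for this name group."""
    candidates = {ad: ctr(*stats)
                  for (g, ad), stats in click_logs.items() if g == name_group}
    return max(candidates, key=candidates.get)

# The objective is purely statistical, yet the outcome is discriminatory:
print(select_ad("group_x"))  # arrest-record ad
print(select_ad("group_y"))  # neutral contact ad
```

The point of the sketch is that no line of the code mentions race or intends harm; the discriminatory pattern lives entirely in the training data, which is exactly why audit and scrutiny of such "black box" systems is necessary.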
76% of executives responding to a Deloitte survey said they expected AI to "substantially transform" their companies within three years. About a third of this group added that ethical risks were a chief concern about AI technology. Conceptually, AI ethics applies both to the goal of the AI solution and to each of its parts. AI can be used to achieve an unethical business outcome even though its parts (machine learning, deep learning, NLP, and/or computer vision) were all designed to operate in an ethical way. For example, an automated mortgage loan application system might include computer vision and other tools designed to read handwritten loan applications, analyze the information provided by the applicant, and make an underwriting decision based on parameters programmed into the solution. These technologies do not process such data through an ethical lens; they just process data. Yet if the mortgage company inadvertently programs the system with goals or parameters that discriminate unfairly based on race, gender, or certain



geographic information, the system could be used to make discriminatory loan approvals or denials. In contrast, an AI solution with an ethical purpose can include processes that lack integrity or accuracy toward that ethical end. For example, a company may deploy an AI system with machine learning capabilities to support the ethical goal of non-discriminatory personnel recruiting. The company begins by using the AI capability to identify performance criteria based on the best performers in the organization's past. Such a sample of past performers may include biases based on past hiring characteristics (including discriminatory criteria such as gender, race, or ethnicity) rather than simply performance. In other words, the machine learns from the data that it processes, and if the data sample is not representative or accurate, then the lessons it learns will not be accurate and may lead to unethical outcomes. To understand where ethical issues could arise, and how in the future of work those issues might be avoided, it helps to organize AI along four primary dimensions of concern:
• Technology, data, and security.
• Risk management and compliance.
• People, skills, organizational models, and training.
• Public policy, legal and regulatory frameworks, and impact on society.
The CEO, CRO, CCO, and CFO have leadership roles across the first three dimensions, while the fourth relies on leadership from politicians, regulatory agencies, and other policymaking bodies, though adequate compliance is required from corporeal entities. AI ethics is a sweeping endeavor with many moving parts. At the same time, technology aside, the initial approach should follow a similar path to other ethics and compliance programs, including the following. Start at the top: the board of directors and senior management set the tone for any ethics and compliance program.
Inform top leadership of the use of AI tools across the organization and of how AI presents opportunity and risk. Then review and, wherever necessary, expand the enterprise's existing policies, procedures, and standards (or consider adopting new ones) to address AI ethics as a key priority of the organization. Clearly communicate your intentions: ethical constructs and mechanisms protect the organization while addressing the risks associated with the use of data and technology. To begin with, consider developing an AI "code of conduct" for data scientists and other data professionals while setting up channels for escalating issues. Also consider establishing principles for the use of AI among management, employees, investors, and customers, as applicable to the organization. Assess the risks: AI ethics is as much about understanding the risks as it is about establishing a process for avoiding them. Be clear on what kind of AI solution you are building and whom you are building it for. Identify the processes in the AI life


cycle that could negatively impact stakeholders, order them by priority, and allocate resources to mitigate the risks. Give examples: let product teams know what to look for in monitoring solutions for AI ethics. One way is to embed the ethical use of data and technology into IT architecture design (similar to the Privacy by Design framework); another is to design control structures and embed them into AI-enabled solutions. Put up guardrails: organizations should consider proactively establishing guardrails to guide, monitor, and assess how AI is used by the organization, employees, vendors, and customers. Such guardrails may be either technical or organizational, akin to internal controls. An example of a technical guardrail is a control framework embedded within the design of an AI solution that prevents specific actions from being completed. Another is the use of explainable and interpretable AI, where the decision-making behind AI-enabled solutions is both transparent and explainable. An example of an organizational guardrail is a cross-functional panel that assesses all AI-enabled solutions before they are built (similar to an Institutional Review Board) and considers the impact of emerging AI ethics-related issues on existing and yet-to-be-implemented AI solutions. Such a panel should have accountability and a clear charter to drive change and influence decisions across the organization. It is easy to get caught up in the complexity of AI, but starting with the basics can create near-term impact while offering maximum room to learn as you go. Over time, the organization can integrate the finer nuances of AI ethics as its implications for the organization and stakeholders alike become known. The role of AI in our lives is ever increasing. Alexa and Google Nest are present in our bedrooms and are listening to all our conversations; Facebook knows all our social activities, likes, and dislikes.
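A technical guardrail of the kind described above (a control framework embedded within an AI solution that prevents specific actions from being completed) might be sketched as follows. The action names and the blocked-action policy here are purely hypothetical, invented for illustration.

```python
# Hypothetical sketch of a technical guardrail: a control layer that blocks
# disallowed actions before an AI-enabled solution can execute them.

# Invented policy: actions on this list must be escalated to a human.
BLOCKED_ACTIONS = {"auto_deny_loan", "release_personal_data"}

class GuardrailViolation(Exception):
    """Raised when an AI component attempts a blocked action."""

def guarded_execute(action, handler, *args, **kwargs):
    """Run `handler` only if `action` is not on the blocked list."""
    if action in BLOCKED_ACTIONS:
        raise GuardrailViolation(f"action '{action}' requires human review")
    return handler(*args, **kwargs)

# Usage: an allowed action runs normally; a blocked one is escalated.
result = guarded_execute("score_application", lambda x: x * 2, 21)
print(result)  # 42
try:
    guarded_execute("auto_deny_loan", lambda: None)
except GuardrailViolation as exc:
    print(exc)  # action 'auto_deny_loan' requires human review
```

The design choice is that the guardrail sits outside the model's own logic, so it holds regardless of what the underlying AI decides, mirroring how internal controls constrain human processes.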
To guide and oversee a company's activities, the idea of an AI Ethics Board emerged. But the biggest challenges relate to its transparency, accountability, and teeth to regulate the company's activities. Last year, Google announced the creation of an external ethics board to guide its "responsible development of AI." This seemed an admirable move on the face of it, but the company was hit with immediate criticism: the board lacked transparency. No one except the company knew the process of appointment or the duties and role of the committee. For government institutions, public sector companies, universities, and hospitals, one knows whose interest is primarily served; this is not so for private companies. Therefore, an AI ethics board should be accountable to some other body where the board's decisions can be challenged and appealed. A three-tier structure could be created, with a committee at the departmental level, an oversight committee at the organizational level, and a regulatory body at the level of the government. Independent intellectuals, philosophers, and people from varied backgrounds could be made members of these committees. DeepMind Health's ethics board is one such example, which further increases its accountability to the public by publishing annual reports.



The scope of the AI Ethics Board extends to four broad areas: firstly, guiding leadership in establishing policies and procedures regarding AI ethics and setting up training programs; secondly, examining proposals for AI research and maintaining a repository of them; thirdly, auditing the use and proposed use of AI and data; and finally, handling any complaints regarding the use of AI or its associated data. For example, Microsoft has an internal committee named AETHER (AI and Ethics in Engineering and Research). It includes "senior leaders from across Microsoft's engineering, research, consulting and legal organizations who focus on proactive formulation of internal policies and responses to specific issues as and when they arise."

The benefits of an AI Ethics Board

1. Developing public trust, as well as employee trust, in AI products and services.
2. Aligning the goals of the company with the public interest through public consultations on AI.
3. Self-regulating the company in anticipation of possible future AI regulation.

The challenges that lie ahead for the implementation of the AI Ethics Board

1. The cost and time incurred in setting up and managing the board. The compliance burden will further hamper a rapidly changing and developing field like AI.
2. A lack of transparency may lead to ineffective public accountability. On the other hand, to be publicly accountable, the board might release some data into the public domain, which could put sensitive information in the hands of competitors.
3. The risk of shifting the burden rather than fixing accountability.
4. Finding appropriate members for the board, with a good understanding of both AI technology and the business and its related legal and human resource issues, will be a challenge.

Since AI is a rapidly evolving field, it will be very advantageous to collaborate with others working in AI ethics, including competitors. Membership of cross-industry AI ethics groups should be considered to benefit people and society. The problem is that the Big Tech companies see themselves as good people trying to help, and they believe they are doing their best. But one cannot be the judge of one's own case; they also need to be under the regulatory oversight of the government. However, regulation cannot be done without the collaboration of the tech companies: they are the ones who have all the information and the resources.

In a world of constant development, the field of technology has grown rapidly. From computers to machines and now to robots, technology has evolved and is on the verge of taking over simple human activities. The most recent of these innovations is artificial intelligence (AI). AI has long been a topic of interest for people around the world, and many scientists and engineers have been working on its development to maximize human potential and minimize human labor. With this development, AI has become more skilled and is increasingly able to perform critical tasks with great accuracy. Many scientists predict that AI will be useful not only in medicine and communication but will also have a significant impact on the economy; it can even be used to solve complex global challenges like climate change. It is believed that as the AI sector grows, it has the potential to transform ordinary life by introducing new platforms and digital assistance.

But while marveling at the world of artificial intelligence, we must not forget the legal and ethical issues arising with this rapidly growing technology. The most frequently asked question is: who will be held liable for an accident caused by a self-driven car? A well-known example from the USA is Uber's automated car that accidentally killed a 49-year-old woman. Although AI is developed to perform minute tasks with great precision, it cannot be denied that it may sometimes fail. Take a case of fraud or malfunctioning by an AI system: is there any recourse available to the victim? Who shall be held liable? Obviously not the machine, which cannot itself judge its actions. A debate has been going on: some hold that the programmer or the manufacturer should be liable, while others point to the technical team monitoring the car. Undoubtedly, with the rapid growth of AI, such systems are no longer mere devices. They are approaching the status of humans, and some even call them non-human entities. In that case, what rights are given to them? Would they be treated the same as corporations or companies? Every organization has an ethics board that governs the ethical aspects of the company.
So every organization must create a strong AI ethics board that can govern the framework for designing and operating AI systems and sponsor research on algorithmic bias. It will also increase transparency as to how applications are used and will contribute to the responsible development of AI.

Scope for AI ethics board

• Governance for AI ethics: AI always carries some unintended consequences that can pose risks to the organization. An organization can work to identify potential future risks and implement proper governance that reduces hurdles and delays.
• Fairness and bias: the organization can examine what biases may exist in its data and make sure AI products are non-discriminatory and safe.
• A mechanism for recourse: organizations employing AI should provide accessible recourse when a system fails to meet user expectations, such as a dedicated technical team, internal governance (an AI ethics committee), and external governance.
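The fairness point above can be made concrete with a toy check an ethics board might require before a release. This is a minimal sketch with invented data; real fairness auditing would use established tooling and far larger samples. It computes the demographic parity gap: the difference in favorable-decision rates between groups.

```python
# Toy fairness check (illustrative data): demographic parity gap between
# groups in a set of model predictions. 1 = favorable decision.

def positive_rate(predictions, groups, group):
    """Share of positive predictions for one group."""
    selected = [p for p, g in zip(predictions, groups) if g == group]
    return sum(selected) / len(selected)


def demographic_parity_gap(predictions, groups):
    """Max difference in favorable-decision rate across groups."""
    rates = {g: positive_rate(predictions, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())


# Hypothetical predictions for two groups of applicants:
preds = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(preds, groups)  # 0.75 - 0.25 = 0.5
```

A large gap does not by itself prove discrimination, but it is exactly the kind of quantitative signal a review panel can demand before an AI product ships.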




Problem with AI ethics board

The whole purpose of the AI ethics board is to make sure that AI algorithms work fairly and that there is no discrimination in the data. But recent developments around AI ethics boards may come as a surprise. Google dissolved its ethics board soon after establishing it. Academic Ben Wagner subsequently said that such ethics boards are just "ethics washing": a plan to avoid government regulation. The ethics body lacked a proper regulatory framework and transparency, and according to specialists, the board was not capable of making changes but could only give suggestions. There has been a great rush toward such boards without understanding the mechanisms of ethical codes. A major example is IBM: the company used a curated dataset to remove bias from its facial recognition system, yet the technology was used by the Philippine police force during a brutal war on drugs in which thousands were killed. Interest in ethical algorithms does not stop companies from assisting deeply unethical causes. This experience with AI ethics boards does not mean that we need to scrap them completely, but they require a proper framework for their working, which can be built by analyzing all of these factors. Steps taken to regulate AI have to be global. The technologies being developed by Google, Facebook, Amazon, and others will be deployed worldwide, and decisions made in America and Europe will affect more than just those regions. The challenge now is finding crossovers between existing legislation and the negative impact of new technology; then, new legislation needs to be drafted to fill the gaps. Technology is ever-growing, and every day we face new challenges and opportunities that, as a society, we must shape to fit our humanity and secure against injustice and misuse. In the wake of the need for machines and electronic assistance in almost all aspects of life, we are advancing our use of artificial intelligence, aiming for AI to match human intelligence while operating faster than humans.
Over the last 50 years, AI has changed drastically from a computational discipline into a highly transdisciplinary one that incorporates many different areas (Embodied Artificial Intelligence, 2003). Embodied AI is one of the branches into which AI research has shifted. A simple definition is the physical existence, or corporeal identity, given to AI. It is a prerequisite to understand that, as we advance in the field, the classical approach to AI is limited; hence the need for embodied AI, a concept first presented by Rodney Brooks in 1991. Embodied AI works on the principle that living beings mature and derive their intelligence from their surroundings as they interact through their physical existence. It was only natural to use this dynamism to refurbish AI, helping it gain human-like intelligence and work on par with living beings. Robotics is the simple answer: it is embodied AI, the existence of corporeal entities endowed with artificial intelligence. This has brought engineers as well as neuroscientists, linguists, biologists, biomechanists, and material scientists into association with embodied AI. The goals of embodied AI can be summarised into three basic functions:


• to gain an understanding of intelligence;
• to gain knowledge about biological systems; and
• to use both of these to build devices or robots.

However, this is not the end of this branch of AI study. Research on robots moves into narrower specialisations such as developmental robotics, bio-robotics, and even artificial life. Embodied artificial intelligence is a much more advanced area and perhaps can no longer be referred to simply as AI, but rather as robotics or embodied AI. This raises some complicated questions regarding society and human interaction. It is to be kept in mind that, with the need for intelligence to be part of a body, it became essential to allow it to interact with the world; Heidegger's phenomenology is the basis for this embodiment of AI. It also gave a pathway for robots to function like humans, bringing up specific areas of work. Developing robots to behave in certain ways is one of the most integral aspects: while study and research are still being conducted in this field, it is necessary that robots walk, talk, and have locomotive movements similar to those of humans. Their understanding and ability to show facial expressions, for instance, is also a huge factor. Since the prime aim is for robots to gain a higher level of intelligence, it is necessary for robots to have intelligence related to emotions, reasoning, aptitude, common sense, and even consciousness. This brings the concept of evolution into play: human beings of today are by-products of years of evolution, so do we expect that kind of evolution in robots? Studies in this area suggest that we need a developmental perspective, which robots should be able to comprehend and adjust to accordingly. Lastly, once all of the above have been dealt with, there remain challenges about moving into the real world; here, studies in the field require expertise in body dynamics.
A quote from Gerald Edelman fits the situation: "It is not only enough to say that mind is embodied: one has to say how" (Edelman, 1992). An important point of discussion under the embodiment of AI is ethics. There is no question that such advancement in the field requires that morality and ethos be dealt with before we bring robots into the real world. The answer to this is the need for an AI ethics board, which can be defined as the leader and guide that safeguards against the exploitation of AI and ensures that robotics functions ethically in society. This concept has so far been used only by organisations, but it is important to note that robotics, no different from an organisation, requires an AI ethics board as well. One of the major characteristics of an AI ethics board in robotics is an understanding of the model. To understand this accurately, we shall look at the following elements:

• Moral AI: Moral AI ensures that robotics, or any form of embodied AI, will be answerable to a moral code of conduct regulating its behavior. This aspect can be either developmental or pre-programmed.



• Just AI: When we talk about the ability of robots to handle everyday functions, it is necessary for them to be just and not merely adhere to pre-programmed scenarios. This stresses the need for robots to understand justice and differentiate between right and wrong.
• Philosophical AI: This refers not to the philosophy of AI but to the ability of robots and such devices to understand philosophy and form judgements.

This paralytic nature of the AI ethics board in corporeal entities is evidence that, to develop an ethical AI, the corporeal existence of intelligence is a must. We therefore look into the scope of the AI ethics board in physical bodies. Accountability is a feature we must not neglect, as there must be a proper chain of command and accountability to establish the ethical elements in AI. This can be accountability to another body, to programmers, or even to the companies involved. Accountability helps maintain a sense of responsibility and makes the process democratic in nature. The composition of the AI ethics board is also an important detail: professionals in the field, technicians, administrators, and human resource professionals will make a diverse group of experts who can work together to ensure that the committee is wholesome and target-oriented. Apart from the committee, existing actors like governments and businesses must also have AI ethics boards in place to ensure that, when they make use of robotics or embodied AI, they do so ethically. The scope of such a field is not limited to one actor but extends to everyone involved in this sphere. Even when all aspects are taken care of, there remain many puzzling questions about the ethical nature of embodied AI. Will robots be successful in understanding and adapting to human reasoning? How long will this developmental phase take? In the present context, there exist studies on AI in mental health and in psychiatry, psychology, and psychotherapy.
These studies have successfully concluded that embodied AI can be highly beneficial in mental health; however, further research is needed to address the ethical and societal concerns so as to deliver the most innovative health care. In closing, it can be understood that embodied AI will carry liabilities as well. One of the major liabilities will be the process of having to train such systems to bring them on par with human beings; nor should we forget that robots are, after all, machines, and breakdowns will be inevitable. Another important point is that it may take another decade for robotics to become completely functional and free of disappointments. Lastly, it is an exciting journey and shall be one of the many wonders that man shall create.



Agenda 4: The Role and Framework of Plurilateralism in AI-enabled Crimes and Judicial Governance

It has been argued by many academics that the development of AI should be inclusive and diverse; otherwise it might result in inequitable outcomes for certain sections of society, and that holds especially true in the field of judicial governance. The USA is opting for AI-driven programs like predictive policing and criminal sentencing software, which are having a direct impact in shaping its criminal administration. If the development of such AI algorithms is done mainly by privileged white men sitting in Silicon Valley, it might yield results that favour the perception of only one class of society. The outcomes might turn out to be racist or sexist because of the lack of multi-stakeholder participation. Machine learning algorithms often use statistics to find patterns in data, and because correlation is not the same as causation, an algorithm might end up drawing false conclusions and sending the wrong people to jail. For example, if the algorithm finds that low income is correlated with recidivism, it might end up flagging low-income people more often, even when they have not committed a crime. This can further damage the social position of minority communities in the United States. Since such algorithms will reflect human intentions, it is necessary that they are tested against human biases and developed in a more plurilateral environment; otherwise they might lead to detrimental results for society. The relationship between AI and crime can be understood from two perspectives: AI to detect and prevent crime, and AI to commit crime. AI has traditionally been understood as making human life more efficient: it takes enormous data sets and predicts probable outputs, suggesting users' next moves. It has been used ubiquitously in voice assistants like Siri and Alexa and in predicting customers' shopping behavior on e-commerce websites.
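The correlation trap described above can be sketched with a toy example. All data here is invented for illustration; no real dataset or deployed system is implied. A naive model that learns only the observed reoffence rate per income bracket reproduces whatever bias its training records contain.

```python
# Toy illustration (hypothetical data): a naive risk model that learns only
# the correlation between income bracket and recorded recidivism will score
# every low-income person as high risk, regardless of individual conduct.

from collections import defaultdict


def train_rate_model(records):
    """records: list of (income_bracket, reoffended) pairs.
    Returns the observed reoffence rate per bracket."""
    counts = defaultdict(lambda: [0, 0])  # bracket -> [reoffences, total]
    for bracket, reoffended in records:
        counts[bracket][0] += int(reoffended)
        counts[bracket][1] += 1
    return {b: r / t for b, (r, t) in counts.items()}


# Biased historical records: low-income defendants were policed more
# heavily, so their recorded reoffence rate is inflated.
history = ([("low", True)] * 7 + [("low", False)] * 3
           + [("high", True)] * 2 + [("high", False)] * 8)

model = train_rate_model(history)
# model["low"] == 0.7 and model["high"] == 0.2: every new low-income
# defendant inherits a high risk score from the correlation alone.
```

The model is statistically "correct" about its training data yet causally wrong about individuals, which is precisely why such systems need plurilateral scrutiny before use in sentencing.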
In law, AI can be used in multiple ways, ranging from legal drafting to detecting, predicting, and preventing crime. However, AI can also be used to commit a crime, or it can itself commit a crime without any direct human involvement, and such crimes are beyond the control of national boundaries. Criminal law is based on the Latin maxim actus non facit reum nisi mens sit rea: an act does not make anyone guilty unless there is criminal intent, or a guilty mind. Thus, a crime must have both actus reus, i.e., a wrongful act, and mens rea, i.e., a guilty mind. The presence of mens rea generally means that a careless action is punished less severely than a deliberate one. For an AI-enabled crime, it is easy to identify the act or omission that causes harm: Tesla's AI-enabled self-driving car hitting a child or damaging property is a criminal act. However, to make this crime culpable, a guilty mind would be required, and here lies the challenge: how does one prove the intent of a non-human capable of thinking like a human being, and do so under the present framework of national and international laws? AI, because of the coding at its core, will always be a human-made creation. But the machine learns from the data it receives and changes its output accordingly. So who shall be responsible? If a bug or flaw in the code caused the accident of a self-driving car, can it be said that the coder intended to commit that crime? Can the machine be held liable because it acted according to the code or data fed to it, or because a sensor or a connection malfunctioned? For the judicial governance of such crimes, it needs to be decided whether the same laws and the same courts governing humans should govern AI-enabled crime, and to what extent the machine and the human should each be responsible for the crime. If the machine is treated as a legal person and tried, should the property it destroyed be treated the same way, having its own rights? Judicial governance seeks to protect the global community. AI, like the internet, has global interconnectivity; local legislation, a local court, or an ethics board or committee with only local oversight would be ineffective. It needs to be dealt with at a plurilateral and multilateral level, involving supranational and global organisations. It also needs to involve private companies like Google, Apple, Amazon, Facebook, and Microsoft, which have much more influence across the globe than many nations. Judicial governance needs the collaboration of these companies because they hold all the information and knowledge of the past, the present, and, with the aid of AI, the future. Artificial intelligence is putting its foot in every sector of society, and the judiciary cannot be left behind. Lately, there has been discussion about the shortage of judges and the consequent increase in pending cases. Justice S. A. Bobde, the current Chief Justice of India, was of the view that the introduction of artificial intelligence could prove to be a helping hand in critical decision-making and in achieving targets smoothly and quickly.
Artificial intelligence will not be a substitute for judges and advocates but will only act as external help. Researchers at Stanford Law School found in a recent study that an AI-based system was capable of performing the task of 20 senior advocates with 10+ years of experience at 94% accuracy. The story is not limited to the judiciary but extends to every branch of AI crime detection and governance. AI systems surely have potential in AI-enabled crime detection and governance; however, there are certain apprehensions associated with them that cannot be ignored at any cost.

The complications associated with AI systems in crime detection

One such apprehension is the possibility of automation bias: a situation where a machine learning analytics system discriminates against a particular section of society based on sex, color, age, or nationality. Recently, Google's AI-based hate speech detector made the news when it was shown to be biased against Black people.



Case study: Amazon Rekognition

Amazon came up with AI-based software aimed at matching faces against criminal mugshots. A research team from the American Civil Liberties Union, after analyzing the results, found that Rekognition falsely matched 28 members of the US Congress with criminal mugshots, and that approximately 40% of those false matches were of people of color.

Case study: LAPD's PredPol program

The PredPol program was introduced by the Los Angeles Police Department to analyze crime data, figure out the locations with the most crime, and develop a strategic plan to reduce it. Exciting, right? But the idea did not work well and faced a lot of criticism. Recently, during the protests in the USA for Black rights, it was found that the application was focusing only on particular areas inhabited by minority groups, groups that were already the target of the police during the ongoing protests. In April, the department shut the program down, citing funding constraints. After reading these case studies of algorithmic bias, one feels the importance of plurilateralism in AI-enabled crime and judicial governance. AI can surely prove fruitful in various departments for tackling crime and smoothing processes, but it is important to include every section of people, irrespective of race, color, caste, or nationality, to avoid bias in the program. Otherwise, authorities using AI for crime detection and governance will not be held accountable for unjust actions, as they will always be shielded under the umbrella of artificial intelligence, which simply returns results shaped by the data the authority itself feeds it. This calls for a strong framework of governance that reduces discrimination. It becomes extremely important to keep a check on the information fed into the system: the data must not be biased, and once records are updated, it is also important to maintain the data. This is the only way to get fair predictions and use the system well. Rather than complex machine learning systems and techniques, one should prefer simple algorithms that ensure the reliability of the system.
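The call above to "keep a check on the information fed into the system" can be sketched as a pre-training data audit. This is a hypothetical, minimal example (all names and numbers are invented): it flags groups whose share in the training records diverges sharply from their share in a reference population, a common symptom of feedback bias in policing data.

```python
# Hypothetical pre-training data audit: compare each group's share of the
# training records against its share of the reference population, before
# the data is fed to any crime-prediction system. All figures illustrative.

def representation_gaps(records, reference_shares):
    """records: list of group labels, one per record.
    reference_shares: expected population share per group.
    Returns observed share minus expected share for each group."""
    total = len(records)
    return {group: records.count(group) / total - expected
            for group, expected in reference_shares.items()}


# Arrest records skewed toward one district despite equal populations:
arrest_records = ["district_A"] * 80 + ["district_B"] * 20
reference = {"district_A": 0.5, "district_B": 0.5}

gaps = representation_gaps(arrest_records, reference)
# district_A is over-represented by 0.3: a red flag worth investigating
# before training, since heavier past policing inflates its record count.
```

A simple, auditable check like this is in the spirit of the text's advice to prefer simple algorithms whose reliability can actually be verified.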
Artificial intelligence (AI) technology has proliferated throughout many industries and is now making its way into judiciaries. For instance, the Hainan High People's Court in China deployed an AI system incorporating language processing and deep-learning tools that can produce sentencing decisions based on the case law data it processes. The system helps ensure the consistency and accuracy of judicial decisions while reducing the time of the judgement process by over 50%. It is one of many technologies employed by the Chinese judiciary that can assist judges and other legal professionals in retrieving decisions, providing litigation guidance, and predicting the outcome of cases. In the United States, AI technology helps to predict the recidivism risks of offenders and thereby inform bail and sentencing decisions. The role of artificial intelligence in contemporary jurisprudence is attracting worldwide attention, and the Global Judicial Integrity Network at the United Nations Office on Drugs and Crime has researched this issue. AI systems' ability to process a myriad of data quickly and accurately makes them a valuable tool for judiciaries burdened with high workloads. They are aimed not only at increasing judicial efficiency, but also at ensuring the consistent fulfilment of judicial functions and increasing public confidence in the judiciary. Algorithms utilized in information and communications technology already provide benefits in terms of increasing the quality of justice: the United Nations Development Programme reports that the reduced human interaction and greater traceability allowed by the 'e-Courts' case management system have great potential to reduce corruption risks in the Philippines. Similarly, the use of AI in the performance of more complex tasks could help to prevent manipulations in judicial decision-making, as well as monitor the consistency of case law. As the examples of China and the Philippines demonstrate, the potential benefits of emerging technologies have increasingly encouraged judiciaries around the world to explore the possibilities of using smart technology in the performance of judicial functions. An increase in investment in the development of such technology, in both the private and public sectors, also suggests that we will see more smart justice tools in the future. However, as recent studies and policy documents show, artificial intelligence poses significant challenges for judiciaries in terms of reliability, transparency, and accountability.
In cases where machine learning and predictive analysis are involved in the judicial decision-making process, there is a risk that technical tools will replace the discretionary power of judges and judicial officers, creating an accountability problem. It is therefore crucial that judges are aware of the limitations of such technology, to ensure compliance with judicial integrity and the values endorsed by the Bangalore Principles of Judicial Conduct. One challenge in the use of AI in judiciaries is how judges will maintain control over the judicial decision-making process as AI technology becomes increasingly involved. The Villani Report, the result of a parliamentary mission to shape AI strategy in France, points out that judges may feel pressured to follow decisions made by AI systems for the sake of standardization, instead of applying their own discretionary powers. This poses a serious risk of undermining judges' independence while reducing judgements to "pure statistical calculations." The concern is addressed by the European Ethical Charter on the Use of Artificial Intelligence through its guiding principle "under user control," which suggests that judicial officers should "be able to review judicial decisions and the data used to produce a result and continue not to be necessarily bound by it in the light of the specific features of that particular case." Another challenge concerns whether the internal operations of AI and the data fed into it are reliable and accurate. AI produces outcomes by processing existing input data; as a 2016 policy document from the United States (U.S.) government phrases it, "if the data is incomplete or biased, AI can exacerbate problems of bias." This poses a significant challenge to judicial impartiality, as judiciaries cannot render impartial decisions based upon biased AI recommendations, and such biased decision-making would also threaten judicial integrity and due process rights. It is therefore recommended in the U.S. that federal agencies conduct evidence-based verification and validation to ensure the efficacy and fairness of the technical tools that inform decisions bearing consequences for individual citizens. With respect to plurilateralism in AI crimes and judicial governance, plurilateral arrangements have to maintain the integrity of multilateral agreements and follow the principles of universality, inclusiveness, and transparency, because plurilateralism is often witnessed to fragment and disrupt the larger multilateral process, including multilateral cooperation on different issues. In my opinion, plurilaterals should not modify existing multilateral rules and disciplines, or introduce new obligations in any sector or agreement, or else the basic pillar of international cooperation may be affected. AI has been used in sectors from healthcare to dating applications; these sectors benefit from the new technologies, but the technologies can also be misused. The agenda shall focus on three areas: governance in ethics; the interpretability of algorithms, in order to promote fairness and transparency; and ethical auditing. Secondly, the agenda shall also address the question of what AI governance seeks to achieve. Further, as AI is going to shape the future of technological evolution, and as human dependence on these algorithms and machines in decision-making processes deepens, one needs to be aware of the consequences in areas like unethical conduct and AI-based crimes.
Further, one needs to be aware of potential vulnerabilities that depend on human judgements and are subject to discrimination and inherent bias. Many AI solutions are hampered by the so-called "black box" phenomenon. There are also privacy concerns with data collected without proper consent, which leads to two further concerns: first, that the data collected is used unethically to gain consumer insights; and second, that these companies are amassing data sets that help them build a competitive edge. Time and again, we face issues in dealing with privacy. Hence, the delegate feels attention must be drawn to solutions such as establishing data protection through legal systems, making laws regarding privacy and data protection stringent and fair, promoting self-regulation, and adhering to and investing in privacy-preserving AI research and techniques. With regard to judicial governance, the agenda shall address how AI is being used for crime detection and how AI can be used for crime prevention. On crime detection, AI is said to help in detecting gunfire, clues at crime scenes, and any bombs planted for mass killing. On crime prevention, AI is said to assist in predicting crime hotspots and likely offenders, and in deciding whether a person can be released pretrial.
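One answer to the "black box" problem above is interpretability: exposing how each input contributed to a decision. The sketch below is hypothetical (the weights, feature names, and the additive model are invented for illustration, not drawn from any real risk tool): for a linear scoring model, each feature's contribution to the score can be read off directly.

```python
# Minimal interpretability sketch: per-decision feature contributions for a
# linear scoring model. Weights and inputs are hypothetical; a real system
# would require rigorous validation before any judicial use.

WEIGHTS = {"prior_offences": 0.6, "age": -0.02, "employment_years": -0.1}
BIAS = 0.5


def score(features):
    """Additive risk score: bias plus weighted features."""
    return BIAS + sum(WEIGHTS[k] * v for k, v in features.items())


def explain(features):
    """Return each feature's additive contribution to the score."""
    return {k: WEIGHTS[k] * v for k, v in features.items()}


person = {"prior_offences": 2, "age": 30, "employment_years": 4}
s = score(person)      # 0.5 + 1.2 - 0.6 - 0.4 = 0.7
why = explain(person)  # prior_offences (+1.2) dominates the score
```

Because the explanation is exact for a linear model, a judge or reviewer can see precisely which factor drove the score and remain "not necessarily bound" by it, in the spirit of the user-control principle quoted later in this statement.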


AI & Glocalization in Law, Volume 1 (2020)

As AI crimes rise, it will be interesting to see how courts tackle them: whether they assign liability to the human or to the AI, and whether liability remains with the human even if he could not foresee the harm the AI would cause. Hence, these legal principles shall also be addressed in this agenda. Further, AI crimes may give rise to civil liability or criminal liability. Civil liability mainly deals with tort law, and criminal liability with criminal statutes; where tort law asks whether the harm was foreseeable, criminal law asks whether the offence or harm was intended. As judicial systems give importance to the concept of “mens rea”, and AI-based applications that engage in human-like behaviour are created by humans, the judicial system will have to work through the puzzle of whom to hold liable, and on what basis such a person or thing shall be held liable.


2 Separate Position Statement by Swatilina Barik
Swatilina Barik, Delegate, AI General Assembly

Statement received by: Abhivardhan, President of the Conference
Mr Sanjay Notani, President, AI General Assembly, 1st Session, 2020
Dr (H C) Sandeep Bhagat, Vice President, AI General Assembly, 1st Session, 2020

On Agenda 3: Assessing the Scope, Liability and Interpretability of the Paralytic Nature of AI Ethics Board in Corporate Ethics
Artificial intelligence (hereinafter AI) increasingly pervades every part of our society, from the critical, such as healthcare and humanitarian aid, to the mundane, such as dating. AI, robotics and related techniques can enhance economic and social welfare and the exercise of fundamental rights. The various sectors mentioned can profit from these new technologies. At the same time, AI may be misused or may behave in unpredicted and potentially harmful ways. Questions about the role of law, ethics and technology in governing AI systems are therefore more significant than ever. The digital revolution transforms our views about values and priorities, good behaviour, and what sort of innovation is not only sustainable but socially preferable, and governing all this has now become the fundamental issue. • Ethical governance: Issues such as fairness, transparency and privacy, the allocation of services and goods, and economic displacement are some of the issues raised by AI. If addressed correctly, they will definitely bring major changes. • Explainability & Interpretability: These two concepts are seen as possible mechanisms to increase algorithmic fairness, transparency and accountability. For example, the idea of a ‘right to explanation’ of algorithmic decisions is debated in Europe. This right would entitle individuals to obtain an explanation if an algorithm decides about them (e.g. refusal of a loan application). However, this right is not yet guaranteed. Further, it remains open how we would construe the



‘ideal algorithmic explanation’ and how such explanations can be embedded in AI systems. • Ethical Auditing: For inscrutable and highly complex algorithmic systems, accountability mechanisms cannot rely solely on interpretability. Auditing mechanisms are proposed as possible solutions; they examine the inputs and outputs of algorithms for bias and harms, rather than unpacking how the system functions.
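The auditing idea described above, examining only an algorithm's inputs and outputs rather than its internals, can be sketched in a few lines. The decision records, group labels and the 80% threshold below are hypothetical illustrations of one common fairness check, not a standard mandated by any regulator cited here.

```python
from collections import defaultdict

def disparate_impact(records):
    """Black-box bias audit: compare favourable-outcome rates across groups.

    `records` is a list of (group, outcome) pairs, where outcome is 1 for a
    favourable decision (e.g. a loan approval). The audit never inspects the
    model's internals, only the decisions it produced.
    """
    totals = defaultdict(int)
    favourable = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        favourable[group] += outcome
    rates = {g: favourable[g] / totals[g] for g in totals}
    # Disparate-impact ratio: lowest selection rate over highest.
    # A commonly discussed (here hypothetical) audit threshold is 0.8.
    return rates, min(rates.values()) / max(rates.values())

# Hypothetical audit log of a model's decisions for two groups.
decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
             ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
rates, ratio = disparate_impact(decisions)
print(rates)   # {'A': 0.75, 'B': 0.25}
print(ratio)   # 0.333... -> well below 0.8, flagging possible bias
```

A ratio far below the chosen threshold does not prove discrimination, but it tells the auditor where to look, which is precisely the role the position statement assigns to ethical auditing.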

On Agenda 1: The Scope of Splinternet & 5G Governance in Multilateral Governance & Data Sovereignty Policy
In October 2019, Germany and China began commercial rollouts of 5G, the wireless technology infrastructure that is transforming the way the world computes. We have seen in the UK, the USA and China that machines and people talk to each other over the borderless network we call the Internet, but with 5G a new networking infrastructure is emerging, dependent on the Internet but distinct from it and subject to much more government and private control. This can be described as a loss of the Internet's sovereignty, as it varies geographically. With the GDPR, the EU was the first mover with comprehensive regulations, and the Commission is now taking aggressive aim at being the preeminent global standards-setter in AI. The European Union's heavy-handed preliminary proposal for AI regulation diverges sharply from the U.S. approach. In its white paper on AI, the Commission has proposed ex ante conformity assessments to control access to the EU market for AI applications originating outside of the EU. That would likely require a new framework with criteria, benchmarks, and standards that European authorities would use to determine whether an AI product is trustworthy, secure and respectful of European values and rules before it is allowed entry into the European market. This approach could include a pre-market review by EU authorities of algorithms, training data, documentation on programming, and how the system was built, as well as accuracy tests and other requirements. • Also under consideration are data quality and traceability requirements that would require non-EU firms to train AI applications on GDPR-compliant data, an extraterritorial regulation that would seemingly burden U.S. firms with requirements to completely retrain many proprietary algorithms developed in the United States on new data sets as a condition of market access in the European Union.
• European Commission officials are clear about their goals for achieving tech sovereignty through a new regulatory framework for the European digital economy, but these aspirations, couched in protectionist rhetoric, should be balanced against the need to avoid a balkanization of the internet and a further dampening of the environment for innovation in Europe. Fragmentation of the internet is not good for European companies, not good for U.S. companies, not good for governments on either side of the Atlantic, not good for economic growth generally, and not good for the internet. The recent decision by the Court of Justice of the European Union (hereinafter CJEU) to invalidate the 2016 U.S.-EU Privacy Shield has thrown transatlantic data flows once again into an alarming state of regulatory uncertainty, yet the decision is a reminder that the United States and the European Union can reach negotiated agreements on sensitive digital-economy issues and have done so before, with the Privacy Shield and the Safe Harbor before it, even as negotiators appeared oceans apart and faced numerous setbacks, including several prior rulings by the CJEU. Bilateral negotiations will be necessary to once again resolve the transatlantic stalemate on privacy, and they offer the chance to work out a better transatlantic partnership in the commercial digital space that can be designed to support a climate for innovation in Europe. When strategic considerations are taken into account, a partnership based on the principles of democracy, transparency, security, and individual liberty, which Europe and the United States share, would stand as a healthy contrast to China's approach to privacy, AI regulation, competition policy, and free speech online.



3 Recommendations Report approved by the AI General Assembly, 2020, 1st Session on October 1, 2020
Report No. 0110-AIGA-S1-2020-01-REP
Passed by the Assembly with Absolute Majority of 11-0 on October 1, 2020.
Authored by:

Dev S. Tejnani
Sanad Arora
Emmanuel Goffi
Mridutpal Bhattacharyya
Samar Singh Rajput

Anurati Bukanam
Prantik Mukherjee
Prakhar Prakash
Kashvi Shetty

Executive Summary
Submitted by: Mr Sanjay Notani, President, AI General Assembly, 1st Session; Abhivardhan, President of the Indian Conference on AI and Law, 2020; and Kshitij Naik, Nodal Advisor, ISAIL. The Executive Board, in the Report for Day 1, recommends the following considering the quality, purpose and relevance of the scheme of discussion in the Assembly: • The rights of consumers must be respected and addressed accordingly in the wake of the splinternet and its abrasive manoeuvres; • Governmental participation and collaborative governance can be utilized to assess the development of the splinternet in India; • The principles of Data Erasure, Data Quality and others emanating from the GDPR can, from an ideation perspective, be adopted and transformed one by one accordingly; • The issue of cyber security around 5G, and ways and means to secure privacy and deal with disruptive technologies such as the Internet of Things, are of significant importance here; • Corporate responsibility has an intra-perspective in the geopolitics of the splinternet;


• Individual responsibilities in using these technologies, based on educational training beyond awareness and on indigenous and global needs, must be calibrated and decided accordingly. The disruptive and amorphous transformation of the splinternet will be strategic, and thought leadership grounded in the nature of the sciences will have great significance.

Introduction
• It is imperative to understand that previous mobile network generations focused solely on providing high-speed voice and data services. 5G, on the other hand, can be regarded as something phenomenal. It is not just a network but an entire system consisting of high-speed, low-latency, low-power 5G applications, such as the Internet of Things and innovative advancements in the field of machine learning. 5G will enable countries to develop their infrastructure rapidly, and the technology could become part of each country's critical national infrastructure, as well as a core capability on which every other critical national infrastructure sector will depend. It is said that 5G is capable of offering internet speeds up to 600 times faster than current 4G networks. 5G is said to process data up to 2.7 times faster than 4G networks, and the technology is designed to enable users to send data to and from as many as a million devices per square kilometre, compared to the 100,000 devices per square kilometre through which data can be transmitted via 4G networks. 5G is not just a high-speed network; important parts of the 5G specifications are Ultra-Reliable Low-Latency Communications (URLLC) and Massive Machine-Type Communications (mMTC). URLLC allows 5G to enable near-instantaneous data transfer between two devices, and mMTC allows a large number of users to be connected to a 5G network simultaneously; together they allow multiple devices to communicate with each other with little to no latency, which can usher in an era of autonomous robots all functioning in harmony. For example, if 5G is implemented on a mass scale, cars with autonomous driving systems can communicate with each other, greatly reducing the possibility of road accidents.
• 5G can exponentially benefit the use of Internet of Things (IoT) devices by helping in the transition from cloud computing to edge computing. In cloud computing, the data from an IoT device is collected and sent directly to the cloud. Because IoT devices collect huge amounts of data every second, it has become difficult for companies to process and manage so much data in the cloud. Instead, companies have been adopting edge computing, where the data collected on the IoT device, rather than being sent to the cloud, is processed inside the device itself. But this collected data can be accessed by



the companies that make the IoT device, which poses a major risk to the privacy of users. Since such IoT devices are connected to 5G networks, they can also be prone to hacking. This is why privacy can be regarded as a major concern, and a counter-espionage concern; more importantly, risks pertaining to 5G networks can be regarded as cyber-kinetic in nature. • The consequences of 5G being sabotaged, or being used to overpower the critical build-up related to it, are comparatively more serious than the privacy concerns apparent on the face of it, thereby impacting the physical well-being and lives of citizens and the environment as a whole. The ability to transmit more data and to achieve better network responsiveness through lower network latency, while reducing energy consumption, which becomes possible with the advent of 5G, can foster the development of machine-to-machine communication over 5G. This method of communication can be regarded as essential for various military endeavours which the governments of various countries may pursue. It is imperative to note here that sensors can be used from a number of locations, and sensors on a 5G network can allow the military to generate a unified picture of the battlefield and analyse the surroundings before deciding on tactics. This can be deployed with an encrypted communication method, and it can provide visibility, command and control.
5G growth can enable the development of AI support whenever the government takes up a mission; it can give the government a clear visual with low motion-to-photon latency through 5G-enabled access to abundant computation, which could enable virtual, mixed and augmented reality, and this can be regarded as a step towards making soldiers much more efficient. • Now, it is imperative to understand that 5G can have a significant impact on how a civilisation works or functions, and decisions pertaining to 5G cannot be made solely on the basis of how profitable it could be for the businesses developing it; 5G also has political consequences. If one considers India, the Indian framework for Internet governance can be regarded as based on the old clichés of Indian diplomacy. It is imperative for Indian lawmakers to step out of their lethargy and establish a comprehensive, robust piece of legislation on the sustainable use of 5G and on how the Internet of Things is going to pan out. • India previously declined the request made at the UN with regard to direct-broadcast satellite technology in the 1970s on the grounds that it would violate its territorial sovereignty; however, the IT sector in India is at present booming and the need to govern the internet is crucial. Internet governance has never been a part of India's plans, yet there exists a dire need for India to work on a robust Internet governance regime if it aims at


establishing a 5G network in India. Data localization can be regarded as an aspect that needs to be considered first. India needs to consider its position on internet governance and how a data localization regime should work. It is imperative to understand what is meant by data localisation: it means that any company in India that stores information pertaining to its users should be capable of storing and processing this data within Indian borders, with the data of its users protected. Data localization would enable data to be stored safely within the territorial limits of a country rather than entrusting the personal data of its citizens to a foreign entity, which has its own setbacks and repercussions in the global field. New models of cyber-governance law need to be developed, in cooperation with the existing global stakeholder community. • Moving on, cyber-statecraft can be regarded as customary; however, the balkanization of the internet will eventually lead to war rather than peace. The discourse around cyber-balkanisation can be divided into two major schools of thought, namely security-driven and business-driven. The security-driven school essentially emphasises national security, and its basis lies in the post-Snowden era. The business-driven school of thought, on the other hand, is an older school that supports the splinternet. It is imperative to understand that balkanisation of the internet has innumerable security implications; in particular, innumerable concerns have emerged with regard to surveillance, privacy and how attacks on critical information infrastructure operate.
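The 5G figures quoted in the introduction above can be put into perspective with some simple arithmetic. The latency values below are assumed nominal figures (around 50 ms for a 4G round trip and the roughly 1 ms low-latency design target for 5G), used only to illustrate why low latency matters for the vehicle-to-vehicle scenario mentioned earlier.

```python
# Illustrative arithmetic on the 5G figures quoted above (nominal values).

# Connection density: devices per square kilometre.
density_4g = 100_000
density_5g = 1_000_000
print(density_5g / density_4g)   # 10.0 -> a tenfold increase in density

# Why low latency matters for communicating autonomous vehicles:
# distance a car travels while one message is in flight, at 100 km/h.
speed_mps = 100 * 1000 / 3600    # 100 km/h expressed in metres per second
latency_4g = 0.050               # ~50 ms: an assumed typical 4G round trip
latency_5g = 0.001               # ~1 ms: the assumed 5G low-latency target
print(speed_mps * latency_4g)    # ~1.39 m travelled before the message arrives
print(speed_mps * latency_5g)    # ~0.03 m
```

A metre and a half of blind travel per message versus a few centimetres is the practical difference between the two generations for safety-critical coordination.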

Recommendations and their Basis

1. Agenda 1: The Scope of Splinternet & 5G Governance in Multilateral Governance & Data Sovereignty Policy

1.1: 5G Governance and the Emergence of Splinternet • 5G is difficult to govern at the global level due to the balkanization of the Internet; • Nonetheless there are global concerns regarding human rights that need to be tackled at the global level; • We have to take into account cultural differences to avoid imposing one perspective over others, namely the Western one; • Splinternet is inevitable and norms and regulations have to be in place to promote a democratic form of governance; • The need of the hour is a system of transparent monitoring to tackle issues on online freedom and restriction;



• Understanding the restrictions that developing countries face with regard to lack of resources must give way to careful diplomacy where developing nations have a say in the final establishment; • It is important to keep in mind that the main essence of the internet was the absence of boundaries, which is a driving factor for economic growth and development; • The applicability and adaptability of any regulatory structure will be important for a concrete action plan in the years to come, and for avoiding its failure; • Separate bodies for legislative, judicial and investigative actions must exist to govern the actions of countries; • It is humbly submitted that we have to look at different jurisdictions to study the debates behind the formation of the EU GDPR and the intentions behind those policies, across multi-nation jurisdictions and not only EU nations; • Understand the perspective behind the formation of policies and the main agenda behind them; • The Indian perspective vis-à-vis the perspective of developed nations on the formation of those policies helps us understand the true nature of a policy; • Furtherance of awareness-spreading to ensure proper understanding of the concepts of the splinternet; • Suggestion of amendments and regulations to be inserted into the statutes to make sure that justice prevails, irrespective of circumstances; • The splinternet is a system that is inevitable in the current generation. It cannot be denied that global governance of the Internet would be an ideal solution; however, in this day and age, such a governance structure seems implausible; • The roles of various stakeholders are critical to achieving a free Internet; hence, adopt a multi-stakeholder approach.
Non-governmental organisations, civil-society organisations, and even individuals must advance and advocate for free internet rights; • In India, attempts towards a splinternet were made through Facebook's “Free Basics” and by Reliance, which were vehemently opposed by policymakers and the people at large; on the other hand, the splinternet has slowly made its way in under the government's efforts to regulate the contents of the internet; • The nations that have taken steps to isolate their national internets from the international internet are far better placed to survive an all-out cyberwar than the countries that are pushing for a single global internet; • Countries differ in morality and in their sense of right and wrong; an example is the difference in censorship laws between a conservative and a liberal government. The splinternet thus has scope for promoting national interest and restricting anti-social content, e.g. fake news on the internet. However, the counter-view is that it would restrict one's freedom of speech;


• On one hand, in 2016 the United Nations declared “online freedom” to be a fundamental human right that must be protected; on the other hand, the European Union under the GDPR grants the right to be forgotten to address privacy concerns, which makes Google inaccurate and non-representative of what the world wide web actually is; • The international conventions and agreements ratified by countries should be strictly enforced, and any lapse by a country in upholding its international commitments should be met with heavy penalties;

1.2: Data Protectionism & Splinternet • Data protection is already a critical issue to be addressed. Data is the “new oil” over which many countries are fighting; • There is a need for a global governance structure that would support and monitor national/local initiatives on the matter; • A global governance structure would also offer every individual around the world the possibility of benefiting from protection; • There is a need for more training and information for grassroots communities. This can be monitored by the same global structure. The point is to inform people about the ins and outs of data privacy and their digital identity; • We need to understand the role of the government in forming these policies, as well as the roles of private players and corporates in the market over the years in the formation of data protection policies; • The role of non-state actors in providing assistance and expert knowledge would allow for more elaborate efforts; • Further, we have to look at how those policies have affected those nations, for better or worse; all this will give us a proper picture of these policies and what we can do to correct them; • Legislative recommendations should be aimed at improving the regulatory frameworks concerning VPNs, proxies, etc., which might prove to be national threats in terms of diplomatic relations, as well as instrumental to terrorism or other threats; • There should be discussion of the provisions relevant to the splinternet, in order to ensure that human rights are not affected or, rather, infringed; • Data protection through data localisation is important for exercising sovereignty over the data a nation has. The RBI recently issued a regulation requiring all financial transaction data to be stored on servers placed in India, and Aadhaar data contains a humongous amount of sensitive personal data of Indian citizens; • A concept of different levels of data protection and localisation could be applied depending on the type and sensitivity of the data.
A global standard could be agreed upon, and through an internet freedom and transparency index nations could be influenced to provide the required protection for their data;



• GPS-enabled data access could be considered, to grant access to a particular type of data only to residents of a particular location, thus shielding sensitive data from global access and attacks; • An international data encryption standard could be set, to be adhered to by companies to ensure the data security of their stakeholders; • Growing awareness and the importance of data protection laws are essential in the context of the splinternet and 5G. Issues will occur among states and even within countries; • Cross-border data transfers are an inevitable practice, and with nations asserting jurisdiction over the Internet, such data governance will be problematic. The Indian Personal Data Protection Bill (PDP), 2019, requires a copy of Indians' data to be stored in India. This provision will help prevent manipulation to a certain extent; • Adjudication authorities grant the Right to be Forgotten under the PDP Bill, 2019 at their discretion. This provision does not allow individuals to have power over their data and lacks transparency in the use of data mechanisms. Therefore, regulations must uphold the rights of individuals along with economic development;

1.3: Data Protection Infrastructure and their Liability Frameworks • Data protection must be tackled at the national level; sovereignty is a strong hurdle for global monitoring; • A debate on the balance between financial interests and values must be set up. Clear statements from all stakeholders must be available on the subject. Diplomatic (too often hypocritical) statements are counterproductive and prevent movement toward any efficient normative instrument; • The GDPR is a perfect example of such an irrelevant and inefficient tool; • Any protection and liability framework should be assessed in terms of operationalization and accompanied by a regime of sanctions; • Ethics codes and declarations of intent are not sufficient; • Education on the philosophy behind the infrastructures is needed, to ensure the absence of bias; • 5G will tap into a large market, and governments in the future will rely heavily on such networks. So, if entities controlled by one state manipulate another state's network, this could lead to the collapse of that state's entire critical infrastructure. There is a vast amount of data used by these 5G network systems; • Accountability mechanisms and regulations that provide specific parameters for the entities and developers involved in 5G network systems will be an efficient mechanism;


• Huawei is at the helm of the recent debate on data protection infrastructure. The ban on it by the USA was lifted. The IPR related to 5G lies mostly with Ericsson, Nokia and Qualcomm, and Huawei comes in at a distant fourth position; however, the latter provides its services at a much cheaper rate; • As the Indian telecom industry is under financial stress, it cannot afford a costly solution. Nevertheless, the Indian Government has formed a three-year plan for developing indigenous 5G testbeds, and Jio recently announced that it has developed the technology to a global standard; • Technology changes fast, while statutes are amended slowly by the legislature; this process needs to be expedited, with tech-savvy policymakers involved;



4 Recommendations Report approved by the AI General Assembly, 2020, 1st Session on October 2, 2020

Report No. 0210-AIGA-S1-2020-01-REP
Passed by the Assembly with Absolute Majority of 7-0 on October 2, 2020.
Authored by:
Dev S. Tejnani
Prakhar Prakash
Shruti Somya

Kashvi Shetty Emmanuel Goffi

Recommendation Report on the Legal and Political Repercussions of Privatization of Autonomous and Augmented Systems in Space and Conflict Activities

Governance and Auditing Considerations over Autonomous Systems in Space Tech
The space sector is an ever-evolving industry, and it is therefore imperative to develop autonomous systems. • Space law can be regarded as the law which governs the activities of states in outer space. It can be regarded as the body of rules which governs and determines the rights and duties of individuals resulting from all activities carried out by individuals or organizations of a particular country in outer space, and which does so in the interest of mankind as a whole, to offer protection to life, terrestrial and non-terrestrial, wherever it may exist. Space is a sector that is emerging at a very fast pace; however, the rules and regulations with regard to autonomous systems and space tech are not developed. It is imperative to understand that autonomous systems will play a major role in shaping mankind and its ability to explore and operate in space, by providing greater access beyond human space-flight limitations in the harsh environment of space and by providing greater operational handling that extends the capabilities of


the astronauts who are performing their experiments and duties in outer space. Autonomous systems have been developed in the space sector before; for instance, geospatial and unmanned satellites are already being sent into outer space by a number of countries. Autonomous systems need to be developed not to replace humans but to aid professionals. • These autonomous systems can reduce human workloads by managing routine activities requiring constant monitoring over long periods of time. Space is at present undergoing transformation due to the miniaturisation of electronics and the increased ease and reduced cost of access to space for government, commercial and research/education players alike. This can also pave the way for innumerable opportunities in the development of space-based tasks in disruptive ways, or enable entirely new applications. Innumerable opportunities could be developed with the help of autonomous formations, and the development of miniature spacecraft with sensor capabilities could prove to be a boon for carrying out spatial activities. However, a number of implications may arise with regard to the emerging need to manage a congested space environment, for example through autonomous spacecraft collision-avoidance capabilities. It is highly imperative to understand that most spacecraft operations are autonomous in nature insofar as the control functions and routines are uploaded, either via telecommand for immediate implementation or, usually, undertaken at predefined times and in a strictly pre-decided sequence. • Autonomous systems will allow humans to venture beyond their limitations, by providing greater operational handling that extends astronauts' capabilities. An important notion for this is ‘trust’.
The human-machine interaction, and the safety of the human in that interaction, is critical for advancing this notion. There are two stages of governance: first, precautionary approaches, and second, damage-control approaches. The precautionary approach comprises verification and validation (V&V) techniques that employ runtime analysis and model checking, and software design architectures that enable tractable, modular verification tasks. Secondly, for damage control, countries will have to come together and frame guidelines for specific issues relating to conflict in space involving these systems. Model-based software can help address both of these key issues: the ability to reconfigure in response to global goals, and to self-model from limited observables, requires extensive reasoning about system-wide interactions. To build an infrastructure and develop a “flight heritage,” such developmental activities should lead to several flights of small spacecraft that incrementally advance capabilities as they add to the flight heritage and experience of the technology and the team.
Importance of Autonomous Systems and its Development



• Advancements in space technology have brought tremendous benefits to mankind. Taking the Indian perspective, India, as one of the developing countries, has certainly seen development in this field; however, the laws to govern these activities are not in good shape in India. With advancements in satellite-based communication and space technology, satellites positioned in Earth's orbits provide all countries with critical information when dealing with climatic changes and other disasters which could hamper existence on Earth. • The revolution that space technology has created is certainly unprecedented and needs to be protected with laws that are well constructed and properly drafted, and such laws are clearly not present in India. Space activities have channelled the desires of various countries for further exploration and, at the same time, for exploring the intricacies of space jurisprudence. Space manufacturing, using autonomous machines to mine on the surface of other planets, space tourism, space solar power projects and fast suborbital transport services are some of the potential future applications which could carve a niche for India in the global space sector. This would highly benefit India as a country, helping it become a powerful nation. However, laws and regulations need to be in place, both domestically and globally, to govern these outer space activities. • There exists a dire need for a proper piece of legislation when a country plans on stepping into the space sector. However, only a few countries, such as the United States of America, Luxembourg and Japan, have so far been able to formulate their own national space laws. Otherwise, the only legal instruments for space activities exist in international law.
The first international instrument dealing with space law is the 1967 Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, Including the Moon and Other Celestial Bodies (Outer Space Treaty). It was followed by the 1968 Agreement on the Rescue of Astronauts, the Return of Astronauts and the Return of Objects Launched into Outer Space (Rescue Agreement), the 1972 Convention on International Liability for Damage Caused by Space Objects (Liability Convention), the 1976 Convention on Registration of Objects Launched into Outer Space (Registration Convention) and the 1984 Agreement Governing the Activities of States on the Moon and Other Celestial Bodies (Moon Agreement). Two significant Declarations and three important sets of Principles also exist under international space law: the 1963 Declaration of Legal Principles Governing the Activities of States in the Exploration and Uses of Outer Space (Declaration of Legal Principles), the 1982 Principles Governing the Use by States of Artificial Earth Satellites for International Direct Television Broadcasting (Broadcasting Principles), the 1986 Principles Relating to Remote Sensing of the Earth from Outer Space (Remote Sensing Principles), the 1992 Principles Relevant to the
Use of Nuclear Power Sources in Outer Space (Nuclear Power Sources Principles) and the 1996 Declaration on International Cooperation in the Exploration and Use of Outer Space for the Benefit and in the Interest of All States, Taking Into Particular Account the Needs of Developing Countries (Benefits Declaration). It is imperative to understand, however, that the private sector is emerging and will soon play a major role in developing autonomous machines with the help of Artificial Intelligence and Machine Learning, so proper regulation is extremely necessary.
• As these concerns can halt and impede the New Space evolution, the international community should discuss the optimal way forward. Among the many proposed solutions, three trends are distinguishable. A first line of thought is to keep the existing international framework of space law, because it leaves freedom for national space laws to develop. A second is to amend the existing international framework so that it supports contemporary space business. A third is to draft an entirely new international framework that could harmonize all national space laws. In any case, a clearer vision of the common heritage principle should be striven for.
Limitations of the autonomous system
• The international community should set binding legal instruments to protect space as a global common. Limits on access to space should be decided in the interests of humankind rather than the vested interests of states and private actors.
• These legal instruments should be based on broad consultations of citizens around the world and monitored by an autonomous and neutral body.
• For all the advantages of an autonomous system, it also creates new opportunities for problems to arise.
These problems are most prominent for new space industries and start-ups that are not used to the aggressive, comprehensive testing needed to root out software problems and to establish a balance between automation and manual control.
• An instructive example is Boeing's Starliner capsule, which failed to reach the ISS because of a glitch in its mission timer. A minor error in a million lines of code can be the deciding factor between mission success and complete failure. It was later revealed that, beyond the timer glitch, there were several other software errors that the system had not detected before launch. A former pilot observed that the error could easily have been caught by human crew members.
• Space is also getting progressively congested. The danger of in-orbit collision is growing and could eventually restrict our use of space.
• Looking back, there have been success stories, and looking ahead, we unquestionably have opportunities. But even a long time
after the first autonomous system was launched, appropriate administration has yet to be standardized.
Ethics and Strategy Concerns in Space Politics
• It is essential that non-Western countries such as India bring their own perspective on ethics, grounded in their own wisdom, philosophies, religions and cultural traditions. In a globalized world, it is critical to have a broad view of ethical stances.
• So far, the ethical issues raised by new technologies have been shaped mainly, if not exclusively, by Western reflections, which appear in institutional documents and ethics codes all around the world.
• Ethical thinking has thus suffered a strong and problematic Western-oriented bias, even though there is a valuable richness of lenses through which the ethicality of technologies such as AI or autonomous systems could be assessed. While we focus on gender or skin-colour biases, we remain unaware of the biases introduced by the Western perspective on ethics itself. The mere fact that gender and skin-colour biases are debated in countries where these practices are widespread demonstrates that the ethical debate is under the influence of the West. There is room for countries like India to bring a breath of fresh air to the debate and open minds to new ways of thinking about ethics; otherwise we accept submission to a Western ethical tyranny.
• Space is no exception. Non-Western countries do have a say in the ethical debate raised by the exploitation of the global commons, and they must think outside the Western box, particularly since the Western perspective has so far not been especially effective in protecting people from technological drift.
• Satellites engaged in electronic surveillance pose a great threat of intrusion and misuse of the data gathered. There is no legislation or regulation on the monitoring of the data collected by these satellites, or on how one can verify that intrusive measures are not being taken.
Ethical concerns are heightened when entities could use such data to gain a competitive advantage and, in turn, breach the privacy and rights of individuals.
• Artificial pricing and destabilization practices must not be indulged in at all. Private entities must be regulated in this respect when operating in space, and states have to step forward to ensure such standards are met. Equally, states must not intervene to favour particular entities, as such unfair trade in space could cause grave environmental damage.
Framing of Competition Rules and Guidelines for Private Players Operating in Space
• Private entities will also face conflicts over unfair competition practices and the need for a level playing field, and the resulting rules will have to comply with the respective national laws.
• The Asia-Pacific Space Cooperation Organization supports satellite development by training students and academics, supporting the development of the radiometric
calibration capabilities of member countries and developing small satellites through its Joint Small Multi-Mission Satellite Constellation programme. Bilateral agreements can support science and technology partnerships involving both public and private sector actors. This will help nations understand each other's best practices, cultures and ideologies, and could lead to the development of global norms for ethical practices in space.
Private Ownership and Control Issues
• Space activity has consistently been a governmental affair: it served needs common to every individual and was never viewed from a commercial point of view. Recently, however, we can see a move toward commercialization and, inevitably, toward the privatization of space activity.
• Lately there have been numerous developments in space and conflict activity, on both the national and international fronts. Government-led agencies have achieved amazing things since the Moon landings, but no single agency has the in-house capacity to address a country's growing requirements.
• The space sector and its requirements have grown enormously in the last decade to include television and broadband services, space science and exploration, space-based navigation and, of course, defence and security applications. Around 32 states and two international organizations (ESA and EUMETSAT) have registered space objects with the United Nations.
• In 2008, SpaceX's Falcon 1 became the first privately developed liquid-fuelled rocket to reach orbit, a landmark moment for the private space sector. Jeff Bezos's Blue Origin has likewise entered the private space game.
• The privatization of space is, to some extent, an extraordinary idea. Public funding alone is not the way to genuine advancement; real space exploration needs an industry capable of delivering income and profit.
This will require private undertakings to venture beyond government contracts and build a revenue-generating space-travel industry.
• One consequence of privatisation, boosted by cheaper launch prices and new microsatellite technology that has seen devices shrink to the size of a loaf of bread, is that companies are launching more and more satellites, and that has consequences. The small region of space around our planet is becoming quite crowded, and the potential for damaging and expensive collisions has increased.
• The risk of monopoly in the sector cannot be ignored. Now that private companies have entered, and more will follow, there are high chances of monopolistic behaviour. It would not be surprising to see the powerful throwing other
organizations out of business, which will eventually lead to other problems, such as an increase in costs. It is evident that in the coming years there will be a great deal of activity in space. With more private enterprises entering space alongside government organizations, it becomes necessary for each state to enact proper national space legislation. Before privatisation, there is a dire need for a robust space legislation policy to be enacted, paving the way for the growth of the emerging space sector in India. For India to grow by leaps and bounds in matters of space law, it must understand the importance of having proper legislation in place. The laws on cybercrime and intellectual property rights have reached great heights and are still developing; a similarly thorough contribution is required toward the national development of space law in India. Space law in India is still in its nascent stages, and advancement in this area is crucial and needs the attention of law-makers. There is a desperate need for proper legislation if India is to dive into the outer space sector. On 15th February, 2017, the Indian Space Research Organisation (ISRO) successfully launched a record 104 satellites in a single flight on board its Polar Satellite Launch Vehicle from the Satish Dhawan Space Centre, and ever since, ISRO has undertaken numerous projects, never disappointing its country and its people, and has consistently received accolades for its work. However, ISRO is not provided enough capital, skilled personnel or opportunities to venture further into outer space and create more such miracles.
ISRO, despite never disappointing the country or its people, has not reached the position it should have; with proper domestic laws and regulations governing space activities in place, it could have done wonders. Indeed, private players entering the market with the blessings and mentorship of ISRO could do wonders too. The global space market is presently valued at around USD 350 billion, yet India's share constitutes a meagre 2% of it, an extremely small portion given the size of the market. It is therefore time for the Government and the legislative bodies to shake off their lethargy and implement a proper set of rules and regulations, in the form of a policy, setting out the roles of, and the guidelines to be adhered to by, businesses that wish to set foot in this industry.
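The scale of the gap can be checked with simple arithmetic. The sketch below takes the USD 350 billion valuation and the 2% share exactly as quoted above; neither figure is independently verified here.

```python
# Rough check of India's quoted share of the global space market.
# Both input figures are taken from the text above, not verified independently.
global_market_usd_bn = 350  # global space market, in USD billions
india_share = 0.02          # India's share, quoted as roughly 2%

india_market_usd_bn = global_market_usd_bn * india_share
print(f"India's share: ~${india_market_usd_bn:.0f} billion")
```

Two per cent of USD 350 billion is about USD 7 billion, which puts the "meagre share" claim in concrete terms.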


• It is imperative for the government to back up its statements with actions, and these actions cannot take the form of mere guidelines or regulations; they need to take the form of a policy. India needs a robust policy. It is highly recommended that an independent regulatory agency be set up, enabling ISRO to focus on research and innovation without getting embroiled in anything else. The roles of the different agencies need to be clearly demarcated to avoid confusion.
• The Swedish National Space Agency allows companies in Sweden complete autonomy in running their business. Similarly, the Naro Space Center in Korea aids businesses in its space sector by providing them complete autonomy in their business activities, which helps promote foreign direct investment. India needs a separate regulatory authority or agency that aids businesses in the private sector and provides them complete transparency on the issues surrounding activities a particular organization aims to carry out abroad. This independent authority should offer single-window clearance, which would be feasible because the regulator itself would have the necessary powers conferred on it to allow private firms to undertake activities.
• The Government of India needs to take a holistic approach to this industry on all fronts; for instance, it needs to reconsider GST and the various taxation regimes. Many private companies aspiring to enter the space sector hesitate because of the high tax rates the government levies upon them, which is why a number of companies of Indian origin are looking to establish themselves abroad and carry out space activities from countries whose taxation regimes are comparatively liberal. Private enterprises in India also suffer from the non-availability of resources: a company must outsource even to conduct a launch, which is expensive, so many companies set themselves up in countries where the government-run space organizations give them complete autonomy and enable them to develop and carry out a range of space activities.
• This trend will significantly change the role of public agencies in the development of space technologies and of private low-cost space operators, and open possible new configurations of public-private partnership and collaboration. ISRO has recently opened its doors to private entities, allowing the sharing of data and facilities. The state must formulate guidelines on the use of such facilities and their limitations, so that private entities can be promoted as well as regulated.
• Adopting a Joint-Liability Framework. When entering into agreements with the private sector for augmented systems to be used in space, the state will require a well-framed liability clause. This clause would prevent private operators from
escaping liability. Augmented and autonomous systems are developed in stages by different tools and human resources, so private operators might escape liability by blaming outsourcing parties or other parties involved. A joint-liability framework would prevent that and give third parties that operate or develop such systems an incentive to maintain pinpoint accuracy.
• Uniform Export Guidelines for Space Objects. Systems built for space require high-end technology and infrastructure, making the sector highly dependent on international trade and export. Private companies engaged in trade with extra-territorial application must follow certain quality and security guidelines. The USA, a significant exporter of 'space objects', already has guidelines relating to their export.
• Transparency allows government and private entities to demonstrate clear intentions to other states about their space projects and uses. It also keeps the public informed, which in turn legitimizes space use and prevents wrongdoing. Public dialogue will be important to maintaining this balance between states and private entities.
• The idea supporting privatization was that whoever owns the property will better care for it. This does not seem to be happening at the commercial level. For a company, ownership and control of resources can be traded, often at short intervals, and profit remains the sole motive. It also results in a concentration of ownership in the hands of a few big corporates. Once rights get concentrated, it is all too easy for the new owners to hijack the regulatory and legislative process. As a result, rights do not get scaled back when they need to be, and a system of limits designed to protect a resource ends up ensuring its slow destruction. A corporate owner seeking only a financial return will take a much shorter time horizon when making decisions.
It may simply not be worth it to preserve the future productivity of a natural resource if that means forgoing a much greater profit today. The incentive to chase a quick buck may outweigh the financial and social rewards of long-term stewardship.
Sustainable Development and Economic Rejuvenation via AI
• All 17 Sustainable Development Goals of the United Nations can be achieved more effectively with the help of AI, from reducing poverty and hunger to providing gender justice and equitable educational opportunity.
• AI can be used to target policies more effectively and efficiently. Sending a robotic mission will certainly be much cheaper than sending a human mission to other planets or satellites.
• The huge amount of data available through the internet or from satellites is beyond human comprehension; AI can convert it into meaningful information. An Indian example is the data on COVID patients
through the Aarogya Setu application, which was useful in detecting the clusters that needed to be isolated, thus restricting the pandemic to a great extent.
• There is a need to balance the general interest of the public at large against the specific interests of the private players involved.
• Information should be shared in a manner that contributes to environmental and fundamental research in space activity. Commercial information, by contrast, should be shared only where circumstances require; keeping the rest confidential is in the best interest of private entities and states.
• Different players should be treated differently, i.e. policies should favour developing countries so as to integrate them with developed countries.
• In 2018, data from National Aeronautics and Space Administration (NASA) satellites were used for cholera forecasting in Yemen, with a 92 per cent accuracy rate. To achieve such goals for the public at large, states need to cooperate on a large scale.
• Article IX of the Outer Space Treaty, for its part, focuses mainly on damage to the Earth environment. The problem nowadays, however, is the exploitation of space itself, not only the protection of Earth. These treaties must be read and implemented according to the modern age and its growing needs.
Space science and space technology play a major role in fulfilling many objectives of sustainable development. Space science can be regarded as the discipline covering the activities of space exploration, the analysis of natural phenomena and the functioning of man-made bodies in outer space. It spans innumerable sub-disciplines, including astronomy, aeronautics, avionics, space medicine and astrobiology.
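Cluster detection of the kind credited above to the Aarogya Setu data can be illustrated with a deliberately simplified sketch. Everything below — the grid size, the threshold, the coordinates — is a hypothetical assumption; the application's real pipeline is not public and certainly differs.

```python
from collections import Counter

def flag_hotspots(case_locations, cell_size=0.1, threshold=3):
    """Flag grid cells containing at least `threshold` reported cases.

    A toy illustration of geographic cluster detection: bucket each
    (lat, lon) case into a coarse grid cell and flag the dense cells.
    """
    cells = Counter(
        (int(lat / cell_size), int(lon / cell_size))
        for lat, lon in case_locations
    )
    return [cell for cell, count in cells.items() if count >= threshold]

# Hypothetical data: three cases close together and one isolated case.
cases = [(28.64, 77.25), (28.65, 77.26), (28.66, 77.27), (19.07, 72.87)]
print(flag_hotspots(cases))  # the dense cell is flagged; the lone case is not
```

The same bucket-and-count idea scales to the satellite-derived environmental data mentioned above, where dense cells would instead mark regions needing intervention.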
Space technology can be understood by reference to Earth-observation satellite systems and satellite communication. The technological advancements that help forecasters understand the weather — remote-sensing data, global positioning systems and satellite television, together with the underlying communication systems — all rely on space science. Space technology is an important enabler of innovation in many fields and plays a major role in modern agricultural innovation. The use of space technology is an effective method of farming and farm management, but one that is at present largely limited to developed countries, because of the high cost of setting up the space infrastructure that could aid farmers; it is well known that farmers in developing nations live in great adversity, so implementing this technology there for sustainable development remains an aspiration for the future. However, in recent
years, open access to geospatial data, data products and services, together with the lower cost of geospatial information technology, has spread worldwide.
• Satellite Earth observation and satellite communication can also aid in managing natural resources and the environment. Earth-observation satellites can be developed to support fisheries, freshwater and forestry management, and at the same time can help in analysing illnesses and guiding their treatment. Earth-observation data can likewise be used to address challenges of air pollution, water management and forest preservation. For example, observation of precipitation is useful in responding to water-related disasters such as floods, typhoons and landslides. Earth observation is also a tool for monitoring illegal mining, and remote sensing can simultaneously be used to monitor environmental variations.
IP Recognition and Safety Policies
• The first range of problems relates to applying the patentability criteria of novelty, non-obviousness and usefulness or functionality to 'space' inventions. For example, when microgravity is the most important element on which an invention rests, it is extremely difficult to prove novelty, creating a grey area in the regulation of such patents.
• National laws should be strengthened to include an extra-territorial structure, in accordance with Article VIII of the Outer Space Treaty, which prescribes that states retain jurisdiction and control over the objects they launch into outer space. The entry of private entities will only aggravate these challenges; a strong national patent system will therefore help the development and growth of states.
• Nations should also focus on catering to the interests of private entities while maintaining the public interest.
An approach of incentivising private entities that work on matters furthering the public interest would create a solid foundation for public-private partnerships.
• Most of the time, accidents are not intended but are beyond human comprehension; they can be checked through global collaboration.
• Risk management becomes even more important in an unfamiliar environment and for high-risk activities. There have to be health standards, fitness-for-duty requirements and limits on hazard and exposure. For this, liability needs to be fixed (unlike in the Bhopal gas tragedy), and strict liability needs to apply to errors that could have been avoided.


5 Recommendations Report
Approved by the AI General Assembly, 2020, 1st Session on October 3, 2020
Report No. 0310-AIGA-S1-2020-01-REP
Passed by the Assembly with an Absolute Majority of 12-0 on October 3, 2020.
Authored by: Prakhar Prakash, Kashvi Shetty, Samar Singh Rajput, Dev S. Tejnani, Shruti Somya, Sanad Arora, Anurati Bukanam, Mridutpal Bhattacharyya.

Executive Summary
Submitted by: Mr Sanjay Notani, President, AI General Assembly, 1st Session; Abhivardhan, President of the Indian Conference on AI and Law, 2020; and Kshitij Naik, Nodal Advisor, ISAIL.
The Executive Board, in the Report for Day 3, recommends the following, considering the quality, purpose and relevance of the scheme of discussion in the Assembly:
• The problems with AI Ethics Boards predominantly start with how they make decisions and what they should give prima facie importance to: precedents or principles?
• AI Ethics Boards can be supported by additional constituted bodies that foster public-private partnerships, analysis and support, so that the ethics boards remain accountable and there is a second audit of whatever the ethics board decides, which would also give consumers an assurance;
• Mandatory testing for a certain period for all AI-based programmes, based on better experiential risk assessments, is recommended;
• Companies should be made more responsible for AI-based products and services. However, the responsibility should be based on operative transparency grounded in experience, not on the strictness of the dictum;
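The testing and second-audit recommendations above could, in practice, be operationalised as a simple release gate. The sketch below is purely illustrative: the 90-day period, the risk ceiling and the field names are all assumptions of this example, not figures prescribed by the Assembly.

```python
# Hypothetical release gate combining three of the Assembly's recommendations:
# a mandatory testing period, an experiential risk assessment, and a second,
# independent audit. All thresholds and field names are assumed.
MIN_TESTING_DAYS = 90    # assumed mandatory testing period
MAX_RESIDUAL_RISK = 0.2  # assumed ceiling from the risk assessment

def may_deploy(assessment: dict) -> bool:
    """Allow deployment only when all three gate conditions hold."""
    return (
        assessment.get("testing_days", 0) >= MIN_TESTING_DAYS
        and assessment.get("residual_risk", 1.0) <= MAX_RESIDUAL_RISK
        and assessment.get("second_audit_passed", False)
    )

print(may_deploy({"testing_days": 120, "residual_risk": 0.1,
                  "second_audit_passed": True}))   # passes all three gates
print(may_deploy({"testing_days": 120, "residual_risk": 0.5,
                  "second_audit_passed": True}))   # fails the risk ceiling
```

Note the conservative defaults: a missing field counts against deployment, mirroring the Board's preference for a second audit before consumers are given any assurance.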



Recommendation Report on Assessing the Scope, Liability and Interpretability of the Paralytic Nature of AI Ethics Boards in Corporate Entities
Nature of Data Usage and Transboundary Considerations
• The importance of global data flows has been well recognised by various global bodies, such as the European Commission. Next-generation internet services, e.g. Google Glass and driverless cars, require seamless data flow across the globe. From the Indian perspective, free flow of data is projected to facilitate 200% growth in virtual goods and services through e-commerce and growth of the digital economy (from $58bn to $197bn) by 2030, and will help SMEs engage digitally with the global supply chain. However, the Cambridge Analytica-Facebook incident and the Justice Puttaswamy judgment, which read privacy into Article 21 of the Fundamental Rights, have brought the debate of "data localisation vs transborder free flow of data" to the forefront. The RBI issued a regulation requiring all Indian payments data to be kept on local servers, which was vehemently opposed by global financial service providers.
• As a method of protecting Indian data, the legislature came up with the Personal Data Protection Bill, 2018, based on the recommendations of the Justice Srikrishna Committee Report. Section 40 of the Bill deals with restrictions on cross-border transfer of personal data. It states that every data fiduciary shall ensure the storage, on a server or data centre located in India, of at least one serving copy of personal data to which the Act applies. The Central Government shall also notify categories of personal data as critical personal data that may be processed only in a server or data centre located in India.
• The focus on data localisation, however, has its inherent challenges.
Setting up servers and the other necessary infrastructure involves huge costs that would be viable only for the large players in the industry, which would eventually reduce and eliminate competition; the customers are the ones who will ultimately bear the financial burden. Moreover, foreign startups might hesitate to invest in India if the business is not feasible for them. Restricting data to a geographical territory does not in itself ensure data protection; what is required is the use of global security standards.
• China resembles a developed country in terms of technological advancement, and it would not be wise for India to compare itself directly with it. India needs to frame its own policy, with strong security and privacy measures that are not based on protectionism.
• The legal protection of privacy on a global scale began with human rights instruments such as the Universal Declaration of Human Rights of 1948 and the International Covenant on Civil and Political Rights of 1966.
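The two-part storage rule in Section 40, as described above, can be paraphrased in code. This is a hedged sketch of the rule's logic only; the record layout and field names are invented for illustration and carry no legal weight.

```python
# Hypothetical check of the two-part rule in Section 40 of the PDP Bill, 2018:
# ordinary personal data needs at least one serving copy on a server in India,
# while "critical" personal data may be processed only in India.
# The record structure and field names are illustrative assumptions.
def complies_with_section_40(record: dict) -> bool:
    regions = record["storage_regions"]  # country codes of server locations
    if record.get("critical", False):
        # Critical personal data: every copy must sit on Indian servers.
        return all(region == "IN" for region in regions)
    # Other personal data: at least one serving copy must be in India.
    return "IN" in regions

print(complies_with_section_40({"storage_regions": ["IN", "EU"]}))
print(complies_with_section_40({"storage_regions": ["EU", "US"]}))
print(complies_with_section_40({"storage_regions": ["IN", "US"], "critical": True}))
```

The asymmetry between the two branches is the point of contention discussed above: the mirror-copy rule permits cross-border flow, while the critical-data rule forbids it entirely.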


• Many institutions and regulations have recognised the economic and social benefits of transborder data flows and their increasing importance in promoting economic and social development.
• Ethics with regard to the transfer of data should rest on the same principles Richard Stallman stated when speaking of the four freedoms of free software. Corporates must provide the freedom to study how the program works; this is essential in determining the scope of the power an AI software holds in this regard.
• The ethics governing transborder flows of data should aim to achieve the 5 Cs, which have come to represent good ethical and design processes in AI: consent, clarity, consistency (and trust), control (and transparency), and (a focus on) consequences.
• Companies ought to develop these checklists on an "open source" basis, share them with others developing AI applications, and maintain an active, accessible channel through which any AI professional can dissent from how an AI application is being structured or used. This brings the necessary context to light so that data is used in a legal and rightful manner and the privacy of personal data is protected.
• There is evidence that transborder data flows are key infrastructure for efficient industries and critical to productivity, but at the cost of critical information being circulated without transparency or knowledge. On the other hand, localization of data will not only decrease data security but will also shift cost burdens onto consumers and have disproportionate effects on smaller businesses.
• Privacy is important, but privacy need not be the enemy of prosperity. Strong, innovative privacy regimes that promote trade and growth are the need of the hour. A one-size-fits-all approach is rigid and will not work in the long term; initiatives should focus on developing flexible, protective regulations that can coexist with and adapt to technological advances.
At every level of the use of data, there must be privacy regulations providing adequate protection and transparency to the data providers.
• Policies and standards must be technically efficient, economically and financially sound, legally justifiable, ethically consistent and socially acceptable.
• Data portability has made transborder data convenient and, at the same time, challenging to regulate for the entities and operators that use such data. Entities tend to have different definitions of personally identifiable information, so a company may appear to be engaging in ethical practices while actually exploiting a loophole in good practice. For example, what Google considers personally identifiable information (PII) may be substantially different from Microsoft's definition, which makes a uniform regulatory model difficult.
• In its amendment bill, India introduced two important concepts: privacy by design and the sandbox. The sandbox provisions that were introduced have been


AI & Glocalization in Law, Volume 1 (2020)

given an exemption from following data protection laws. However, to test AI tools and designs entities ideally need to follow higher data protection standards, and such assessments and checks must be done at this stage in such a safe space. Data Minimisation is an essential point for these entities to follow. This point under the Indian PDP Bill, 2019 is mentioned in the preamble, but has not been elaborated upon. This will form a foundation for privacy by design, because there is a need to have transferability of data at a scale only absolutely necessary Data is basically a piece of information which is available in different formats around the internet universe. Data information has revolutionized the e-commerce industry, economy market, trade, international affairs on a different level. Therefore, the importance of passing data from one person to another or one entity to another. The thing is transformation or transfer of data is regulated in an ethical and transparent manner as misuse of such data can lead to security issues, economical issues, privacy issues, and other trans border issues between the countries. Therefore, the EU came up with GDPR for proper structuration of free flow of the data. Cybersecurity has been a concern within the state and everyone wants to get hold of maximum quantities of data. Therefore, we have to look at the existing rules and lacuna which cannot be side-lined. We have to look at existing laws and improve upon existing structures. Data is the lifeblood of the modern global economy. Digital trade and cross-border data flows are expected to continue to grow faster than the overall rate of global trade. Businesses use data to create value, and many can only maximize that value when data can flow freely across borders, yet a growing number of countries are enacting barriers that make it more expensive and time consuming, if not illegal, to transfer data overseas. 
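The data-minimisation principle discussed above can be pictured as a simple filter: an entity keeps only the fields strictly necessary for a declared purpose. The purposes and field names below are illustrative assumptions, not drawn from the PDP Bill or any regulation:

```python
# Hedged sketch of data minimisation: each declared purpose maps to the
# minimal set of fields it genuinely requires; everything else is dropped
# before the record is transferred or stored.
REQUIRED_FIELDS = {
    "loan_scoring": {"income", "credit_history"},  # hypothetical purposes
    "shipping": {"name", "address"},
}

def minimise(record: dict, purpose: str) -> dict:
    """Return a copy of the record containing only the fields
    strictly necessary for the declared processing purpose."""
    allowed = REQUIRED_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

user = {"name": "A", "address": "B", "income": 50000,
        "credit_history": "good", "religion": "X"}
print(minimise(user, "shipping"))  # {'name': 'A', 'address': 'B'}
```

Note that the sensitive `religion` field never leaves the function for either purpose, which is the privacy-by-design outcome the principle aims at.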
Some nations base their decisions to erect such barriers on the mistaken rationale that doing so will mitigate privacy and cybersecurity concerns; others do so for purely mercantilist reasons. Whatever the motivation, as this report demonstrates, the costs of these policies are significant, not just for the global economy but for the nations that "shoot themselves in the foot" by adopting them. It is imperative to understand that innumerable risk assessment models have been developed in recent times for data protection, an approach recently endorsed by the EU General Data Protection Regulation (the "GDPR"). Various types of risk assessment models can be adopted or taken into consideration: they can be mandatory or voluntary, self-assessed or third-party/licensing schemes. Some models are capable of assessing only specific kinds of data or only a specific kind of risk; it is important, however, that they are based on risk/benefit assessments or rights-based assessments. Finally, these models may focus solely on jurisprudential issues or solely on societal issues. Against this background, the first and foremost question is whether the model is sector-specific or general. This is an important question, since data uses cannot be confined to a specific domain or technology. Assessments can be organized around a technology, for instance an Internet of Things (IoT) impact assessment, a Big Data impact assessment, a smart city impact assessment or an AI impact assessment. All these technological assessments consider data processing as a means of decision making; however, they differ in how far their scope extends.
• For applications driven by data, an assessment focused on a specific technology is inadequate and only partially effective. For applications or domains dealing with healthcare or crime prevention, different sets of rights, freedoms and values need to be considered. A sector-specific approach will therefore help focus on the rights and values in question instead of the technology. Sectoral models may not focus on technological advancements, but rather on the context and the values that assume relevance in that context. This does not mean the nature of the technology is unimportant to the risk assessment process as a whole: the technology in question informs the most apt measures to be undertaken to safeguard the benchmark values. In adopting a value-oriented approach, it is imperative that the assessment focus on the impact that the data has on society.
This impact encompasses potential negative outcomes for innumerable fundamental rights and principles, as well as the ethical and social consequences of data processing. The GDPR is an advanced example of regulation in this field focused on risk assessment; however, it is still far from a compulsory model or framework that considers societal issues in connection with AI.
• The EU legislator recognizes data processing risks such as discrimination and "any other significant economic or social disadvantage," and the Article 29 Data Protection Working Party and the European Data Protection Supervisor have elaborated a broader assessment of the societal and ethical consequences of AI and data use. Despite the innumerable steps taken toward an assessment that no longer focuses only on the quality of data and of data security, Article 35 of the GDPR and the early assessment models from Data Protection Authorities (DPAs) do not properly delve into ethical or societal issues.
• In this context, there is clearly an increasingly high demand for ethically and socially aware data practices among citizens, companies, developers and computer scientists. This gap is partially filled by innumerable initiatives, corporate guidelines and ongoing public investigations. The main hindrance for these initiatives is the variety of values, approaches and models adopted. Predictive policing software, credit scoring models and many other algorithmic decision-support systems illustrate how data analysis affects society at large. Another important point is that the potential negative outcomes of data use are not restricted to the more widely recognized privacy-related risks, such as the illegal or illegitimate use of personal data or breaches of information security; they also encompass other potential prejudices, for instance discrimination, which can be better understood by placing data processing software in the broader context of human rights.

Explainability of AI
• AI systems are often said to make black-box decisions and tend to get away with many of the decisions taken by a machine. Entities need to realise that they have a role in making these AI systems and their algorithms more accountable and user-friendly.
• It is pertinent to note that governments and international regimes should press for an open development platform for artificial intelligence; considering that it will have a direct impact on a large part of the population, all those affected are essential stakeholders and their views should be taken into account.
• The source code of any AI being developed should not be a trade secret and should be open to public scrutiny.
Unless there is a substantial amount of algorithmic literacy among the public, people will not be able to ascertain whether the AI being developed is to their benefit or detriment, so international forums and local governments should promote AI and algorithmic literacy, so that the public can adequately contribute to the process of AI development and argue for their rights.
• Research in this field became more proactive in 2016: first, to explain complicated neural networks; second, with the advent of the GDPR era; and third, because industrial users understand the need for safety-critical systems. An approach to making AI explainable combines global and local explanation methods. A global explanation resembles a user's manual and will depend on the level of potential risk of the system, while local explanations address the factors under the user's control and how users can effectively use these systems. AI will exist in a world where a plurality of ethical frameworks governs the morality of its denizens, so it is pertinent that AI developers take into account all these philosophies and moral codes, so that the AI developed can take a more nuanced approach to its decision-making process, thus accommodating people of diverse cultural backgrounds.
• A technical solution for explainable systems would be hybrid AI. This model combines several AI techniques in order to get the best from each method. It is more reliable, and it is not goal-based but rather an approach for finding the safest and most efficient method.
• A court in The Hague, Netherlands, reached a similar conclusion with regard to an algorithm used by the Netherlands government to predict the likelihood of social security fraud. The court used the proportionality test to highlight the need for transparency. Entities must therefore take such tests and reasoned descriptions into consideration to promote public opinion and accountability.
• Explainability and accuracy have lately been a topic of discussion. Explainable models are very easy to understand but do not work well, whereas accurate models are very complicated but do full justice to the work they are deployed for. Understanding black-box problems is necessary and imperative for a better understanding of explainability; that is, a clear grasp of what a black-box problem is, and what it is not, is important. Machine learning models are often thought of as black boxes that are impossible to understand, and this makes it difficult for AI to be widely accepted and trusted. What is needed is awareness of, and insight into, the complex working of the system, along with simpler algorithms that a layperson can understand, building the trust needed to make use of the technology. Transparency, accuracy and explainability are of prime importance, and we must therefore aim to address them.
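The global/local distinction discussed above can be illustrated with a toy local explanation: perturb each input of an opaque scoring function and report how much the output shifts. The model, its weights and the feature names below are purely illustrative assumptions, standing in for a real black-box system:

```python
# A minimal, hedged sketch of a "local explanation": for one specific
# decision, measure the sensitivity of the output to each input feature.
def score(features: dict) -> float:
    # Stand-in for an opaque model: a weighted sum the user cannot inspect.
    weights = {"income": 0.5, "debt": -0.8, "age": 0.1}
    return sum(weights[k] * v for k, v in features.items())

def local_explanation(features: dict, delta: float = 1.0) -> dict:
    """Change each feature by `delta`, one at a time, and record how much
    the score moves: a per-feature, per-decision explanation."""
    base = score(features)
    effects = {}
    for name in features:
        perturbed = dict(features, **{name: features[name] + delta})
        effects[name] = score(perturbed) - base
    return effects

applicant = {"income": 40.0, "debt": 10.0, "age": 30.0}
print(local_explanation(applicant))
# "debt" shows the largest magnitude, so it dominates this decision locally.
```

A global explanation, by contrast, would document the whole model's behaviour in advance (the "user's manual" the report describes); the local method here explains only the single decision in front of the user.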

Legal Entity / Juristic Entity of AI
• Laws are meant to be human-centred. If, in future, we are able to interpret the law without perceiving it solely through the lens of human association, we can break out of the bubble of human-oriented law and recognise the distinct status of different entities. That would be a breaking point, a new interpretation of law under which entities other than humans, such as body corporates and other disregarded entities, are accorded a certain status in law, changing the application of law altogether.
• The United States government does not strive to consider the legal status of AI as an individual person, focusing instead on a legal definition of AI. Section 3 of the bill on AI provides generalizing definitions of AI: artificial systems capable of performing tasks without human presence (autonomous systems); systems that think by analogy with the human brain and are able to pass the Turing test or another comparable test by processing natural language, representing knowledge, and reasoning and learning automatically; and systems that act rationally to achieve goals through perception, planning, reasoning, learning, communication, decision making and action (Cantwell, 2017).
• EU countries pay specific attention to the legal regulation of unmanned vehicles. Thus, the German Traffic Act (Czarnecki, 2017) imposes responsibility for managing an automated or semi-automated vehicle on the owner and envisages partial involvement of the Federal Ministry of Transport and Digital Infrastructure. A more comprehensive and understandable approach to defining current and prospective legislation on robotics is presented in the EU resolution on robotics (European Parliament Resolution, 2017). It defines types of AI use, covers issues of liability and ethics, and provides basic rules of conduct for developers, operators and manufacturers in the field of robotics; the rules are based on the three laws of robotics by Asimov (1942).
Factors to be taken into consideration while debating whether legal personhood should be assigned to AI:
• Who would assume liability if any detriment is caused to a person or any other legal entity by the action of AI?
• Whether separate standards or rules specifically governing AI and robotic systems are required, or whether they coincide with the human-oriented regulations that already exist?
• Whether, under an IP regime, AI can be granted creative rights as first owner of copyright?
• Whether AI is capable of interpreting its own thoughts on information it collects from its surroundings?
• Freedom of volition would be an important factor in understanding whether AI should be granted the status of a separate entity.
• The legal identity given to humans is very different, and AI cannot always adhere to it. Even though the need for a legal or juristic identity is important, it must only be granted after proper consideration of all aspects, keeping in mind the shortcomings of AI.
Another point of discussion is that when humans are punished criminally, we punish them for their decision-making process and the organisation of the entity. One must therefore keep in mind that only the decision-making process of the AI should be held accountable, which raises the question: who develops this decision-making process?
• A human understands, interprets and applies legal rules in the nuanced situations of everyday life, which AI is not capable of doing; this makes it difficult to hold AI responsible.
• Freedom of speech, moral losses and responsibility make no sense to AI.
• It is also hard to say that AI would commit prohibited acts for its own purposes, which again makes it difficult to hold AI accountable. AI code may ensure that AI complies with certain rules, but the application of such rules is not the result of an act of will.


• One study, however, looks into applying to AI the slave laws of the Romans, under which slaves had only duties and existed solely to serve; they had no rights, and their acts were not the responsibility of their owners. Starting with a legal system limited to the context of responsibility for injuries that AI causes is therefore the necessary first step.
• Another effective suggestion would be to make AI a separate subject under law, governed not by a one-size-fits-all approach but by a case-by-case method allowing adaptability and flexibility. One way to examine this issue is to compare AI, first, with other juristic entities and, second, with entities that are similar to humans but have been denied such rights. An example of the first is companies, which have legal personality and operate behind the corporate veil; an example of the second is the case before a New York court dealing with the issue of granting legal rights to a chimpanzee.
• Companies as legal persons do not take decisions; their human members do. When the need arises, the judiciary can pierce the corporate veil and prosecute the humans taking the decisions. Unlike a company, AI is more like a human: it can move things and cars, press the triggers of autonomous weapons, listen to the private conversations of humans, and create, buy and sell IPR. The biggest problem is, if an accident happens, whether intentionally or unintentionally, who shall be held liable? If an AI is a defendant, what property can the claimant seek? AI has no assets of its own. A solution could be to create a pool of assets, spreading the liability across a group of people including the AI developer and investors, rather than placing it on a single individual behind the AI programming.
This could be a positive development, restricting the liability of the AI developer in the way the limited liability concept does in corporate law, and it could motivate more investment in the field of AI. However, the biggest challenge is that granting legal rights would allow the creators to escape contractual and tortious liabilities; the responsibility for the wrong would fall on the AI, which did not actually have any will in deciding the actions it took.
• Even for humans, in a case of killing, it is difficult to decide whether the act falls under Section 299 (culpable homicide) or Section 300 (murder), or whether it falls under Chapter IV (General Exceptions) of the Indian Penal Code, 1860. If the killing is by a robot having a juristic entity, how are the laws going to govern it? Shall we need a completely new judicial setup? If so, how fast would the legal system evolve, and would it be able to keep pace with rapidly evolving AI?
• It needs to be understood whether the liability of an AI as a juristic entity is, by extension, a new instance of vicarious liability. Under Article 12 of the United Nations Convention on the Use of Electronic Communications in International Contracts, a person (natural or an entity) on behalf of whom a programme was created must, ultimately, be liable for any action generated by the machine. This reasoning is based on the notion that a tool has no will of its own. However, this could vary across situations and contexts. Robo-advisers are in wide use in investment markets today. US regulations treat them as stockbrokers, so they are subject to the same regulations as humans; this means these corporations are liable for human and machine errors equally. In such cases, however, robots make critical decisions that bear on human life. Will these entities be prepared to take liability, and how will this be done? An initiative taken by the European Parliament seems important here: the provision gave a specific legal status to smart robots, along with the creation of an insurance system and a compensatory fund. Creators would, to an extent, still be liable, but in such cases a consumer-centric approach is required. In 2018, a self-driving Uber car killed a pedestrian, and an out-of-court settlement was made. This again shows that entities do take liability for their creations in such situations. At the same time, later that year Uber issued a reflective 70-page safety report to assert the potential for its self-driving cars to be safer than those driven by humans. This is an ethics-washing practice, in which companies avoid taking positions that would entail defending their liabilities. Consider next AI systems built on greater autonomy of the AI and its own understanding. For example, Microsoft launched an artificial intelligence programme named Tay. Endowed with a deep learning ability, the robot shaped its worldview based on online interactions with other people and produced authentic expressions based on them. The machine picked up racist and sexist comments, leading to its takedown within 24 hours. On whom the liability for such an instance must lie is a challenging issue. The precautionary principle becomes critical to using such AI systems in an ethical manner.
There is a need for comprehensive and thorough assessments of computer systems and their impacts, including the analysis of possible risks. This will help prevent risks and, at the same time, surface the challenges in the initial stages of development so that changes can be made as required. The technological paradigm of the digital economy forms new markets that give rise to new regulatory measures and subjects for control, including artificial intelligence (AI). This trend primarily concerns the formation of technologies that will radically change the sustainable market economy, forcing professionals out of different areas. In the legal field as well, there have been plenty of developments, since a number of countries and developers have built technologies or applications that could perhaps displace lawyers from the market. It is worth highlighting "DoNotPay," a chatbot created by a Harvard graduate and launched in the UK, which currently covers over 1,000 fields of law. The popularity of the service is due to the fact that it has successfully handled more than 160,000 claims, from contesting illegal parking tickets to filing claims against corporations like Microsoft and Sony for outright refusals to cancel Microsoft Xbox Live or Sony PS4 PlayStation Network subscriptions.


• Apart from this, in the Russian market, Sberbank launched a robot lawyer empowered to file claims for individuals, and a Russian company, Glavstroy Control, launched a bot coded to settle insurance disputes. It is quite difficult to control the current technological wave, and there is a dire need for a legislative base to regulate AI. At present, AI can be implemented in the form of mere software packages, for instance virtual platforms, chatbots and programs, which may not necessarily have a material shell. AI is also being developed in robots and drones, instruments that can perform specific goals within the framework of legal relations developed by various legal entities.
• At the same time, there are innumerable cases where actions regarding the status of robots contradict the national laws of a country. In Saudi Arabia, for instance, the robot Sophia was positioned as a woman and granted citizenship rights under the provisions of the Saudi Arabian citizenship law, which brought with it innumerable issues, since it is highly imperative for AI to comply with the legislation of the host country. It is necessary to understand that women in Saudi Arabia can now act in the executive branches, participate in labour relations and even get married; they are allowed to move about without a guardian and even to drive vehicles. However, there is no adequate state regulation for securing and terminating the corresponding legal relations. As a consequence, when a robot is equated to a person, there will be innumerable problems both in Sharia courts and in courts of general jurisdiction, since the model of conduct is not specified by the law of the land.
Requirement of AI Ethics Boards in Corporate Entities
• Ethics boards are required not in name only but to actually create an environment of public accountability. Companies are promoting these values because ethical discourse is the need of the hour; but if every company adopts a similar system, what will differentiate them tomorrow? The potential game changer would be AI systems that are much more transparent and secure.
• The regulatory framework must include ex-ante and ex-post guidelines. There is a need to change the regulatory framework for such boards: states must have both ex-ante and ex-post guidelines when assessing their functioning. The ethics board is the main link for checking whether an entity is making progress in applying ethics to AI systems. The ex-ante regulations must include impact assessment, a damage-minimization framework, and even inherent designs that promote ethics-driven systems. The ex-post regulations must include a meaningful human rights complaint system and guidelines that prevent the ethics board from becoming bureaucratic.



• Certification as a method to ensure actual implementation of ethics: the governance framework for the ethics board is one side of the coin; the other is that there has to be an alternative approach to incentivize private players. Certification of AI systems is a new approach that would require entities to meet specific standards. This process would improve the credibility of the entity and, at the same time, benefit the public at large. Hence, it will help move from a 'have to' approach to a 'need to' one, placing entities in a better position to promote ethics in AI.
• Inclusion is a way to prevent bias. The penetration of bias is partly due to a lack of diversity. Entities must ensure that the boards they set up are diverse and comprehend the difficulties minorities could face with such loopholes. Entities must also take a coordinated approach to spreading awareness of ethics in AI systems and of the need for people to raise their voices against such bias.
• "In a sense, we are playing into their hands by polarizing [the issue of] ethics," Poulson tells The Verge. "The major battle is accountability. And accountability is less likely if we polarize what is currently bipartisan concern over Big Tech." An AI ethics board should be accountable to some other body, where the board's decisions could be challenged and an appeal could be made.
• The scope of the AI ethics board extends to four broad areas: first, guiding leadership in establishing policies and procedures on AI ethics and setting up training programs; second, examining proposals for AI research and maintaining the repository; third, auditing the use and proposed use of AI and data; and finally, handling any complaints regarding the use of AI or its associated data.
• The benefits of an AI ethics board include: developing public trust, as well as employee trust, in AI products and services; public consultations on AI that align the goals of the company with the public interest; and self-regulation that prepares the company for possible future AI regulation.
• Several challenges lie ahead for the implementation of an AI ethics board: the cost and time incurred in setting up and managing the board; compliance time that will further slow a rapidly changing and developing field; a lack of transparency that may lead to ineffective public accountability, while, conversely, a publicly accountable board might release data that puts sensitive information in the hands of competitors; the risk of shifting the burden rather than fixing accountability; and the difficulty of finding board members who have a good understanding of both AI technology and the business with its related legal and human resource issues.
• A three-tier structure could be created, with a committee at the departmental level, an oversight committee at the organisational level and a regulatory body at the level of the government. Independent intellectuals, philosophers and people from varied backgrounds could be made members of these committees.


• Since AI is a rapidly evolving field, it will be very advantageous to collaborate with others working in AI ethics, including competitors. Membership of cross-industry AI ethics groups should be considered, to the benefit of people and society. Regulation cannot be done without the collaboration of the tech companies; they are the ones who have the information and the resources. The challenge now is finding crossovers between existing legislation and the negative impact of new technology; then, new legislation needs to be drafted to fill the gaps.
• The creation of a machine with the capacity to think and work on its own raises innumerable ethical issues. These issues concern such machines causing harm to humans and other moral beings, and also the moral status of the machines themselves. They arise from the significant recent developments in artificial intelligence and machine learning. Decision making in our daily activities now relies on machine-learning algorithms and artificial intelligence (AI), motivated by the speed and efficiency they bring to the overall decision-making process.
• However, AI and machine learning differ from humans in many respects, one pertinent aspect being the ethical assessment of the world around them. It is necessary to consider how the legal status of AI has been shaped in recent times and how companies are constantly working to reduce the workload of their employees and adapt to capital-intensive ways of working by adopting machine learning. Corporate entities, however, face innumerable challenges in ethically informed data collection and in how such data may be shared and used across the workspace by their employees.
Companies usually struggle with incorporating an AI ethics board, as AI and machine learning are relatively new and companies are still at a nascent stage in dealing with machines powered by AI.
• Companies are extremely worried about the protection of their data and clearly sense an increased sensitivity to ethical issues; however, beyond the legal compliances they adhere to, they possess very little or no knowledge. Companies are unaware of how to be ethically responsible, let alone how to incorporate an AI ethics board or committee and how such a body could assess whether their products and services have been designed with the necessary ethical considerations in mind. A few organizations, such as Google and IBM, have developed AI ethics committees or boards; however, these organizations too have had limited success with the decision-making process and with the products they manufacture. Such a situation is problematic for the stakeholders of the company, and also for the individuals dealing with it, since their privacy, control over data, security and fair treatment are clearly at stake while the company is still at a nascent stage in developing an AI ethics board.
• The world is witnessing tremendous advances in artificial intelligence, with innumerable applications being developed in finance, defence, health care, criminal justice and education, among other fields. Machine learning and algorithmic systems provide spell-checks, voice recognition, advertisement targeting, and help in fraud detection. At the same time, there are innumerable concerns about the ethical values embedded in the AI ecosystem and the extent to which these algorithms respect the rights and privileges guaranteed to humans. Ethicists in a number of countries are concerned about the lack of transparency, poor accountability, unfairness and bias of these automated tools. With the millions of lines of code written during the development of AI tools, it is difficult to understand which values need to be inculcated in the software and how various algorithms actually reach a particular decision. An example is instructive: a bank may design a machine-learning algorithm to recommend mortgage applications for approval. An applicant whose application is rejected may bring a lawsuit against the bank, alleging that the algorithm discriminates against applicants on the basis of nationality, colour, origin or gender and is purposely blinded so as to discriminate on such grounds; however, finding an answer in such a situation is no cakewalk.
• If the machine learning algorithm is based on a complex neural network, or on a genetic algorithm produced by directed evolution, it may be nearly impossible to understand why, or even how, the algorithm judges applicants on the basis of their race, nationality, colour or gender. To avoid such issues, it is imperative for firms to establish an AI ethics board, which would ensure the development of data that, if properly managed and protected, would help companies achieve their paradigm shift from labour-intensive to capital-intensive organizations. Companies should adopt a committee-based model consisting of ethicists who work with corporate decision makers and the software developers who actually build the complex neural networks behind machine learning algorithms. Such a model enables firms to protect the interests of their users while ensuring that individuals with a range of expertise can come together to efficiently and effectively analyse, assess and deal with the complex problems arising from machine learning and AI data analysis. • Further, companies could develop a code of AI ethics laying out how the various issues that arise with these machines are to be dealt with. It is also essential for the AI review board, consisting of corporate decision makers, ethicists and software coders, to regularly examine aspects of the corporate ethical questions involved. AI audit trails could be maintained, specifically showing how various coding decisions have been undertaken or implemented. It is also important to implement AI training programs so that professionals in a company get hands-on experience working with these machines and learn how to operationalize ethical considerations in their daily work. • The CEO, CRO, CCO and CFO have leadership roles across the first three dimensions, while the fourth dimension relies on leadership from politicians, regulatory agencies and other policymaking bodies, though adequate compliance is required from corporate entities. AI ethics is a sweeping endeavor with many moving parts. At the same time, technology aside, the initial approach should follow a similar path as other ethics and compliance programs, including:

a. Start at the top
b. Clearly communicate your intentions
c. Assess the risks
d. Give examples
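The AI audit trail suggested above could, in a minimal form, be an append-only log in which each recorded decision is hash-chained to the previous one, so that later tampering is detectable. The class and field names here are hypothetical, a sketch rather than any standard tool:

```python
import hashlib
import json

class DecisionAuditTrail:
    """Append-only log of design/coding decisions. Each entry carries the
    hash of the previous entry, so rewriting history breaks the chain."""

    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # sentinel for the first entry

    def record(self, actor, decision, rationale):
        entry = {
            "actor": actor,
            "decision": decision,
            "rationale": rationale,
            "prev": self._prev_hash,
        }
        # Deterministic serialization so the hash is reproducible.
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self._prev_hash = digest
        self.entries.append(entry)
        return digest

    def verify(self):
        """Recompute every hash; returns False if any entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or expected != e["hash"]:
                return False
            prev = e["hash"]
        return True

trail = DecisionAuditTrail()
trail.record("ethics-board", "exclude ZIP code as a feature",
             "possible proxy for a protected class")
trail.record("dev-team", "retrain on a balanced sample",
             "board recommendation")
assert trail.verify()
```

In a real program the entries would be persisted durably and signed, but even this sketch shows how "who decided what, and why" becomes reviewable after the fact.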

Philosophical and cultural differences with regards to AI, management sciences and ethics • A global conversation about the social impacts and ethics of AI has begun in both industry and academia. • In ethics, we must keep culture in mind and understand that different societies have unique ethical vocabularies, understandings and expectations. • Words such as "fairness" and "privacy" mean different things in different places, and this does not mean that all these values are equal. Ethics is not something reserved for philosophers; it is simply what we do, and it therefore requires action and engagement. • Some key issues arising out of these debates: regional differences exist, AI will affect social equality, and further action is therefore needed. The need is to start from the bottom and work our way upwards, with a target-based approach. • Questions about what hopes and fears drive the technologies we choose to develop, and how we accept, reject and use the technologies around us, are important bearers of how AI will address cultural differences. • It should be kept in mind that when a society develops a technology, it does so because it has attained technological mastery and know-how. • Who is AI made for? Where most areas of the world still do not have access to basic internet facilities, how will they be able to accept such technology? The use of AI is the next important question, raising issues in a cultural context.

• Cross-cultural cooperation is essential for understanding AI ethics and management. By 'cross-cultural cooperation', we mean groups from different cultures and nations working together to ensure that AI is developed, deployed and governed in societally beneficial ways. • Valuing the Western conception of ethics and privacy over the Eastern conception would be wrong; AI built around such a notion can never work for humanity at large. Today the largest tech companies, Google, Amazon and Facebook, are based in the USA. If these systems are based on the values of just one country, this will only deepen differences of values and ideologies. • Suppose an AI has to make a life-or-death decision, and it has been fed information about previous genocides in the world. It might deduce that a person in Rwanda is not as important as a person in the USA. This is concerning from an ethical perspective as well as from that of basic human rights and values. • Countries must take into account the positions of different states on AI ethics. The UAE has taken an initiative towards open sharing of big data, a bold move with its own challenges, in line with its push towards smart cities and development, whereas the need for AI in countries like Ghana is to promote collective rights within the ambit of individual growth and protection. These are the diverse values for which international cooperation will be required. • To encourage global participation, a good starting point would be leading conferences, research groups and academic surveys to understand ideologies of AI ethics from a global perspective. Alternating leading conferences and fora on these topics between continents would help developing countries voice their opinions. A separate AI federation, consisting of public and private members forming boards, could be created to regulate these laws.
Such a body must be kept far from political agendas, which would otherwise create bias. • The feeding of data to AI machines should be transparent and universal. Data should be properly propagated across all cultures, so that linguistic, religious, ethical and moral properties are represented in the AI. • Whether AI should be independent remains unclear; a better proposition is that AI intelligence will subsist alongside human interaction. Because AI originated from human intellect, in future it will carry a human touch as well as algorithmic irregularities. • When we compare natural intelligence with artificial intelligence, we are comparing a living being, the result of billions of years of evolution in an ever-changing, dangerous environment, continuously fighting for its survival, with a computer system that performs relatively simple calculations: taking a set of data as input, processing it on the basis of preset algorithms and giving an output. Can AI even be said to be intelligent, or is it merely acting intelligent, as the "Chinese room" argument suggests?


Cultural considerations have multiple dimensions, ranging from unemployment and inequality concerns to the impact on cultural diversity. Using AI to replace humans might be welcomed in an ageing Western nation, but not in a developing country like India, or in LDCs on the path to a demographic dividend, where it might leave a huge population jobless. Inequalities will also creep in: the rich will use AI in place of a human workforce, saving their resources, while the less privileged struggle for their basic needs. The world is full of cultural diversity, and if a single AI system is expected to serve everyone equally, it would impose uniformity on human behaviour. That behaviour would be controlled by a few individuals, say, those designing the Android system and the way it interacts with humans. This would pose a threat to global diversity. We could have a uniform AI system embodying the goods of all cultures, but who will decide, and how, what counts as good? Wouldn't it be wiser to have customized AI based on the cultural values of different regions?

6 Recommendations Report
Report approved by the AI General Assembly, 2020, 1st Session on October 4, 2020
Report No. 0410-AIGA-S1-2020-01-REP
Passed by the Assembly with an Absolute Majority of 12-0 on October 4, 2020.
Authored by: Dev S. Tejnani, Shruti Somya, Sanad Arora, Samar Singh Rajput, Kashvi Shetty, Prakhar Prakash, Mridutpal Bhattacharyya, Anurati Bukanam, Prantik Mukherjee

Recommendations Report on The Role and Framework of Plurilateralism in AI-enabled Crimes and Judicial Governance
Transformation of Natural Justice Principles and their usage vis-à-vis Artificial Intelligence
• Artificial Intelligence (AI) is certainly reaching its zenith, whether we talk about remote-sensing technologies or the creation of AI machines and their use in society. However, with any development, innumerable issues are bound to follow. AI mechanisms could severely violate the principles of natural justice when it comes to adjudicating matters in which machines or robots are involved. Artificially intelligent robots, programmed through innumerable connections and lines of code, may learn new things by themselves, and this could have a significant impact on criminal and civil law. Many countries are drafting legislation to legalise and adequately support robots; developing economies, however, are still working towards these ends and constantly endeavouring to develop robust legislation, though they are
still in their nascent stages. Arguably, the most important near-term legal question associated with AI is who or what should be liable for tortious, criminal and contractual misconduct involving AI, and under what conditions. • Artificial intelligence can be used within the financial markets to enable traders to analyse each stock quantitatively and understand whether it will provide adequate returns. AI machine learning and quantitative analysis can act as an aid to traders, enabling them to trade efficiently on the stock exchange. Quants can be coded to monitor every single move of the share or commodity in question, analyse the data, and leave the rest to the system. However, this could lead to a rise in AI-enabled crimes. Before delving into whether criminal liability can be assigned to an AI, it is pertinent to understand what a crime is. • A crime consists of two elements: a voluntary criminal act or omission (actus reus) and an intention to commit a crime (mens rea). If robots were shown to have sufficient awareness, they could be liable as direct perpetrators of criminal offences, or responsible for crimes of negligence. If we admit that robots have minds of their own, endowed with human-like free will, autonomy or moral sense, then our whole legal system would have to be drastically amended. Although this is possible, it is not likely. Nevertheless, robots may affect criminal law in more subtle ways. Considering the development of AI-related crimes in finance, the liability that may be imposed upon traders cannot be determined when stock-broking firms use quants to deal with shares on the stock exchange.
The increasing delegation of decision-making to AI will also affect many areas of law for which mens rea, or intention, is required for a crime to have been committed. What would happen, for example, if an AI program chosen to predict successful investments and pick up on market trends makes a wrong evaluation that leads to a failed capital increase and hence to the fraudulent bankruptcy of the corporation? In such cases, since the intention requirement of fraud is missing, traders or financial brokers could only be held responsible for the lesser crime of bankruptcy triggered by the robot's evaluation. Since these quant systems would be automated, they may be highly prone to external attacks. Trading agents could analyse and execute a "profitable" market-manipulation campaign comprising a set of deceitful false orders, yet all of this could result from the machine making decisions on behalf of the traders: quant analysis does not necessarily involve the traders themselves, but takes into consideration the performance of a stock and provides those details to the traders, enabling them to understand fluctuations, trend lines and patterns better. However, major concerns may arise with regard to AI's involvement in market manipulation, price fixing
and collusion. Traders in a market understand its emotion; quants may not, and they may make calls that are criminal in nature, or short a stock, attracting the undivided attention of regulatory authorities. Under these circumstances it would be highly difficult to determine who is at fault: the traders, or the autonomous AI-powered quants used for trading. Existing liability models may be inadequate to address the future role of AI in criminal activities. For example, in terms of actus reus, while autonomous traders or quants can carry out the criminal act or omission, the voluntary aspect of actus reus would not be met, since the idea that an autonomous agent can act voluntarily is contentious. This means that agents, artificial or otherwise, could potentially perform criminal acts or omissions without satisfying the conditions of liability for that particular criminal offence. • When criminal liability is fault-based, it also requires mens rea (a guilty mind). The mens rea may comprise an intention to commit the actus reus using an AI-based application, or knowledge that deploying an autonomous agent will or could cause it to perform a criminal act or omission. However, in some cases the complexity of the autonomous agent's programming could mean that the designer, developer or deployer would neither know nor be able to predict the AI's criminal act or omission. Alternatively, legislators could define criminal liability without a fault requirement. This would result in liability being assigned to the person who designed the AI algorithm, regardless of whether they knew about, or could predict, the illegal behaviour. Beyond this, market manipulation could again lead to innumerable issues for traders, putting them under the watch of the regulatory authorities.
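The kind of autonomous quant signal discussed in this section, a system that monitors a stock's movements and recommends trades without human judgement, can be caricatured with a toy moving-average crossover. The function names, window sizes and signal logic are purely illustrative, not any real trading system:

```python
def moving_average(prices, window):
    """Trailing mean of the last `window` prices; None until enough data."""
    if len(prices) < window:
        return None
    return sum(prices[-window:]) / window

def trend_signal(prices, short=3, long=5):
    """Toy trend indicator: 'buy' when the short-term average is above
    the long-term one, 'sell' when below, 'hold' otherwise.
    Windows and labels are illustrative only."""
    s = moving_average(prices, short)
    l = moving_average(prices, long)
    if s is None or l is None:
        return "hold"
    if s > l:
        return "buy"
    if s < l:
        return "sell"
    return "hold"

# Rising prices push the short average above the long one.
assert trend_signal([100, 101, 102, 104, 107]) == "buy"
```

Even in this trivial form, the signal acts with no awareness of market "emotion" or of legality, which is exactly the attribution gap the report describes: the act originates in code, while intention, if any, sits elsewhere.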
• Artificial agents, traders or quants could use trades on the stock exchange and adapt techniques of order-book spoofing. Order-book spoofing means placing an order for a particular share with no intention whatsoever of executing the trade, simply in order to short the prices of various shares. Despite being profitable, such a move cannot be regarded as ethical and could put a financial broking firm in jeopardy, since social bots and quants have proven to be highly effective instruments for such schemes. This scheme clearly amounts to an AI-enabled crime over which there could be no judicial control. Therefore, a joint-liability system needs to be regulated and developed, so that no single entity is held responsible alone.
• Criminal actions have two important elements: actus reus and mens rea. The actus reus may still be proved, but proving mens rea, i.e. a guilty mind, can be especially challenging for judicial actors to rule upon.
• This burden of proving criminal activity has led to a move towards product liability, which can support faultless liability. The problem is that if mens rea is not entirely abandoned and the threshold is only lowered, then, for balancing reasons, the punishment may be too light.
• Natural justice principles of reasoned decisions become critical for rules of liability. Proportionality tests, cause of action, and the nexus of an activity with people are tests that must be administered to ascribe liability in cases of AI-enabled crimes.
• Another point of view is to abandon the natural principles of justice when adjudicating matters in which an AI has committed the crime.
• It is pertinent to note that an AI based solely on machine learning algorithms has no conscience of its own and negligible cognizance. In such a case, liability for the criminal act can be directly associated with the entity that developed the AI or owns the rights to it. Even when an AI program possesses cognizance and processes data in real time while interacting with a complex environment, the entities that develop AI could use this as a "get out of jail free" card to evade liability.
• A joint-liability and vicarious-liability framework needs to be regulated and developed for AI-enabled crimes. Research groups and independent committees must be set up to develop such a framework while considering the complex structure of AI systems.
• Black-box situations must be understood to their very roots in order to ascertain the intent of the AI and discern mens rea.
• The concept of the "free will of AI" must be understood, and requisite boards of ethics should govern the philosophy behind the very algorithms that will facilitate deep learning or machine learning towards free will.
• Furthermore, technical education in the field of AI is imperative for AIs to be effectively represented in courts of law, and advocates need to be trained.
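For illustration, the order-book spoofing pattern described above, large volumes of orders placed with no intention of execution, can be caricatured as a surveillance heuristic. The field names and thresholds below are hypothetical; real market surveillance uses far richer features such as order size, distance from the best bid/ask, and timing:

```python
def cancel_ratio(orders):
    """Fraction of placed orders that were cancelled before execution.
    orders: list of dicts with a boolean 'cancelled' field (toy schema)."""
    if not orders:
        return 0.0
    return sum(1 for o in orders if o["cancelled"]) / len(orders)

def flag_spoofing(orders_by_trader, ratio_threshold=0.95, min_orders=50):
    """Flag traders whose order flow matches the spoofing pattern:
    a large number of orders, almost all cancelled unexecuted.
    Thresholds are illustrative, not a regulatory standard."""
    flagged = []
    for trader, orders in orders_by_trader.items():
        if len(orders) >= min_orders and cancel_ratio(orders) >= ratio_threshold:
            flagged.append(trader)
    return flagged

# Hypothetical order logs: trader "Q1" cancels 98 of 100 orders.
logs = {
    "Q1": [{"cancelled": True}] * 98 + [{"cancelled": False}] * 2,
    "T2": [{"cancelled": True}] * 10 + [{"cancelled": False}] * 90,
}
assert flag_spoofing(logs) == ["Q1"]
```

Note that such a detector identifies a suspicious *pattern*, not an intention; translating the flag into mens rea is precisely the legal difficulty the report identifies.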
Principles of natural justice are the basic fundamentals of our justice framework and need to be upheld in all aspects of the legal framework. But bias may be inevitable: the algorithms developed may be neutral and yet result in indirect discrimination against certain categories of individuals. India being a diverse country where discrimination and victimization are widespread, how will AI be able to avoid this? Such discrimination can arise from inaccurate training data that embeds existing prejudices and does not take into account how a statistic came into being. Another concern lies in algorithmic processing, where a quantitative value is assigned to individuals, which can eventually lead to discrimination. Transparency, another principle of natural justice, is also at stake: transparency must first exist in the social surroundings, and secondly, the legal framework
must also be able to provide the parties with the AI assessment on which the judgement is based. At present the framework is not able to do so, thereby violating the rule of law.
• Studies show that even though AI could predict a suspect's behaviour better than a judge, it does not take into account as many parameters as a judge does. There may also be additional facts of a case which are unique and go beyond those parameters.
• AI often takes into account parameters that are not in accordance with the legal issue at hand, considering an individual's activities holistically. For instance, a tracking app collects information on an individual and their movements and takes this into account in delivering justice on a particular issue; this is a clear violation of the individual's rights.
• These concerns make us question what our goals for AI in the legal framework should be: decreasing crime with AI, or fairness of procedure?
• One solution would be a charter for AI in justice, to make sure it remains within ethical boundaries while being as beneficial as possible. A human rights approach is of prime importance to uphold the principles of natural justice.
• These changes and innovations in our legal framework will affect the fundamentals of the system and may even answer some of the smaller but important questions.

Liability framework and jurisdictional competence with regards to AI in private international law
• AI has been proving itself in shaping the world. Much has been said about how IBM's Watson will prove disruptive across industries, but AI in private international law is still a matter of discovery. Private international law has always been a composite field, and its application has always been challenging, in terms of both jurisdictional gaps and overreach.
• Discerning which laws shall govern which acts or omissions of AI systems.
• Alleviating ambiguities in municipal laws to ready the world of the future for an AI-enabled criminal empire/ecosystem.
• Discussing extradition in reference to AI systems, and designing treaties to ensure that the original software/algorithms of the AI system(s) are handed over to the court in discovery, with further checks to ensure they are not decoys.
• Making separate laws on robotics and AI concepts, with distinct sets of legislation to understand, adjudicate and regulate them.
• Nations have cyber-laws that determine jurisdiction under their respective legal systems; these would help cover broad ranges of criminal activities. At the same time, however, conflicts with the laws of other nations need to be resolved.
• Because cyber crimes frequently raise issues of jurisdiction, and considering that there are certain AI programs
would solely operate over the internet, international forums can be set up specifically to deal with AI-related crimes and clear up any confusion of jurisdiction. For example, identity-cloning bots have succeeded, on average, in having 56% of their friendship requests accepted on LinkedIn. In such examples several countries could claim jurisdiction; we therefore need a contextual legal system in which the laws of other countries can be applied. Conflicting jurisdiction can be resolved through the cooperation of judicial actors as well as of nations globally. To determine jurisdiction, judicial actors and states must consider the basic principles relating to jurisdiction, and legitimate as well as balancing interests must be taken into consideration when determining jurisdiction over AI criminal activities. Artificial intelligence can now be regarded as the new marketplace reality. Advances in technology have brought a significant rise in computing power, improved algorithms and the availability of massive amounts of data, which is being used to transform society. Many countries are framing acts and policies to regulate the rights and liabilities that follow from the extensive use of AI. For instance, in Canada, the Treasury Board Secretariat of Canada (the "Board") is looking at issues around the responsible use of AI in government programs and schemes. The European Union is arguably the most active in proposing and contributing to rules and regulations, with existing or proposed rules in seven of the nine categories of areas where regulation might be applicable to AI.
On the other hand, the USA maintains a relatively light regulatory posture when it comes to legislation on AI. Autonomous vehicles are beginning to make an impact on the roads, and many governments and legislative bodies are working to ensure that their traffic laws and other legislation on autonomous or unmanned vehicles remain relevant. A number of governments are adopting a "wait and see" approach to laws and regulations on AI; as with any new technological wave, it is hard to predict how AI will be embedded in the laws of various countries. Countries such as Australia, China, India, Indonesia, Japan, Malaysia, New Zealand, Pakistan, Singapore, South Korea and Taiwan have taken steps towards the regulation of AI. In its 2018-19 budget, the Australian Federal Government allocated AU$29.9 million for the development of the artificial intelligence and machine learning capabilities of Australian businesses and workers. The package consisted of four elements developed specifically for the growth of AI, including the funding of postgraduate scholarships and the "development of online resources to engage students and support teachers to deliver AI content in the Australian curriculum, development of a technological roadmap to inform government investment in artificial intelligence by identifying global opportunities in both artificial intelligence and machine learning, and any barriers to adoption in Australia." Further, the National Transport Commission of Australia published guidelines that could be adopted for trials of automated vehicles in Australia. Similarly, China's State Council passed a Next Generation Artificial Intelligence Development Plan which sets out in detail the AI development China aims to undertake by 2030. It enumerates various guarantee measures, including the development of a regulatory system, thereby strengthening intellectual property protection while promoting the development of AI.

Vertical Hierarchies in Judicial Governance and the role of AI in those hierarchies
• Tribunals will need to be set up by states to deal with such specialised laws, and they will face numerous challenges in giving reasoned decisions. Training and capability building are critical for making stakeholders aware of the different aspects of these specialised laws.
• Expert opinions and independent committees dealing with technology will be essential in helping courts give judgments. The apex courts will depend on lower courts and their reasoning to achieve justice; hence, along with expert opinions, decisions by courts at lower levels will be essential for the development of this law.
• The United Nations Development Programme reports that the reduced human interaction and the traceability allowed by the 'e-Courts' case management system have great potential to reduce corruption risks in the Philippines. Similarly, the use of AI in the performance of more complex tasks could help prevent manipulation of judicial decision-making, as well as monitor the consistency of case law. As the examples of China and the Philippines demonstrate, the potential benefits of emerging technologies have increasingly encouraged judiciaries around the world to explore and use smart technology in the performance of judicial functions. An increase in investment in such technology, in both the private and public sectors, also suggests that we will see more smart-justice tools in the future. • However, as recent studies and policy documents show, artificial intelligence poses significant challenges for judiciaries in terms of reliability, transparency and accountability. Where machine learning and predictive analysis are involved in the judicial decision-making process, there is a risk that technical
tools are replacing the discretionary power of judges and judicial officers, creating an accountability problem. Therefore, it is crucial that the judges are aware of the limitations of such technology to ensure compliance with judicial integrity and values endorsed by the Bangalore Principles of Judicial Conduct. One challenge involved in the use of AI in judiciaries is how judges will maintain control over the judicial decision-making process if AI technology is increasingly involved. The Villani Report, which is the result of a parliamentary mission that aims to shape AI strategy in France, points out that judges may feel pressured to follow decisions made by AI systems for the sake of standardization, instead of applying their own discretionary powers. This poses a serious risk of undermining judges’ independence, while reducing judgements to “pure statistical calculations.” This concern is addressed by the European Ethical Charter on the Use of Artificial Intelligence’s guiding principle “under user control” that suggests that judicial officers should “be able to review judicial decisions and the data used to produce a result and continue not to be necessarily bound by it in the light of the specific features of that particular case.” • Another challenge concerns whether the internal operations of AI and the data fed into it are reliable and accurate. AI provides certain outcomes based on the processing of the input of existing data. As a 2016 policy document from the United States (U.S.) government phrases it, “if the data is incomplete or biased, AI can exacerbate problems of bias.” This poses a significant challenge for judicial impartiality, as judiciaries cannot render impartial decisions based upon biased AI recommendations. The biased decision-making would also threaten judicial integrity and due process rights. It is, therefore, recommended by the U.S. that federal agencies in the U.S. 
conduct evidence-based verification and validation to ensure the efficacy and fairness of the technical tools that inform decisions bearing consequences for individual citizens. • Contemplating the inclusion of AI at every rung of the ladder of federal structures. • With rising global interest in transformative technologies such as artificial intelligence, the Indian judiciary has marked its niche. The Chief Justice of India, Justice Bobde, has explained how the neural machine translation (NMT) tool SUVAS (Supreme Court Vidhik Anuvaad Software) is capable of translating judgements and orders passed by the courts from English into up to nine vernacular languages spoken and read in India. Justice Bobde further noted that the use of AI has significantly improved the overall efficiency of the Indian judiciary and can enable it to lower the backlog of cases. The backlog of cases is a major issue in Indian courts; with AI tools, adjudication could be undertaken swiftly, smoothly and in a time-bound manner. However, implementing such a mechanism will take time, considering the
fact that India is still in its nascent stages, and channelling the large interest in the growth of transformative technology in India, especially in the judiciary, remains a challenge. For the most part, adapting to such technology is highly difficult given the immense scarcity of open access to judicial information and datasets. With the COVID-19 outbreak, innumerable conversations have developed around technological interventions such as video conferencing in Indian courts, and these need to be institutionalised within the judiciary even after the crisis abates. Unless the current judicial data regime is opened up for technological development, attempts to inculcate a strong AI judicial ecosystem seem far-fetched. This is, however, possible in the higher courts, such as the High Courts and the Supreme Court. In fact, higher courts have already adopted, in some way or other, the benefits arising out of an AI legal ecosystem. With regard to the district courts and small courts, however, there has been no significant development yet, and it is imperative that the computerisation and interconnectivity of all courts take place on an urgent basis. This cannot be made possible unless an overarching open data policy is adopted. Under such a policy, the information on cases and orders from each court in the vertical hierarchy, which at present remains scattered and haphazard, needs to be collated by the judiciary. • Data is the primary driving force of AI innovation; curated data, which is currently absent, needs to be compiled and coded before AI can be deployed in the courts.
Therefore, it is imperative that all existing judicial information, along with the information to be collected in the future, be archived into a readily available dataset in consonance with the recognised principles of open access to data. A number of countries are already conducting test runs with a huge variety of cases and analysing this data with the help of smart searches. Courts are also implementing software that can prepare legal briefs based on precedents. • There have been innumerable developments in the judicial wings of a number of countries. Tools developed in consonance with the judiciary can be regarded as a remarkable achievement in improving access to justice. AI has already been adopted and used in a number of decisions passed by American courts. For instance, in Washington v. Emanuel Fair, the defence in a criminal proceeding sought to undermine the results of an AI-based genotyping software program capable of analysing complex DNA mixtures, while at the same time asking that its source code be disclosed. The court accepted the contention that the use of the software was valid and noted that many states have accepted the use of such programs. The court, however, declined the request that the source code be disclosed.


• Next, in State v. Loomis, the Wisconsin Supreme Court held that a trial judge’s use of an algorithmic risk assessment tool at sentencing did not violate the accused’s rights, even though the methodology used to produce the assessment was disclosed neither to the accused nor to the court. It is important to understand that AI in litigation is evolving and is in its early stages in many countries. For instance, in Argentina, AI is relied upon extensively to assist district attorneys in drafting decisions in less complex cases, such as taxi licence disputes, which presiding judges can then approve, reject or rewrite. The algorithm relies upon the district attorney’s digital library of 2,000 rulings, and the AI program is coded to match cases with the most relevant decisions in the database, thereby enabling the court to provide its ruling. • For such systems to be implemented in India, however, it is necessary that the Indian judiciary truly exploits the transformative potential of emerging technologies such as AI, recognises the impediments in the current data system and remedies them swiftly. An open data policy could be adopted setting out the ground rules of data accessibility, while carving out exceptions to preserve privacy, which is a sine qua non for the Indian judiciary in the present times. • In the vertical framework of judicial governance, the advent of AI has many beneficial aspects, and recent news on the introduction of e-filing shows there are many areas in which we can progress and include AI. • What is necessary is to facilitate functioning communication between human and machine.
• Lower courts are of prime importance. Since this is the first level of the judiciary, it is necessary that processes and functioning be consistent starting from here. • As these courts are in contact with the lowest strata of the economy, the introduction of AI in the lower courts must take all classes into consideration, especially in a country as diverse as India. The need of the hour is an AI that is not only comprehensible but also culturally acceptable to society. One must keep in mind that certain areas of the country still lag in their acceptance or use of technology, making it absolutely crucial to ensure that we are ready for the introduction of AI in the judiciary. • The e-filing system which arose due to the COVID-19 pandemic has proven very beneficial, offering 24x7 filing, work from home, and paper-free, economical processes. • This is the moment to urge innovators that daily wage workers, who constitute a major part of the Indian economy, are the ones who can benefit most, but may also turn against the introduction of AI if it is not done properly, with prior trials.



• A simple example is the job of the clerk, an important person in a courtroom: with the introduction of AI systems in courts, a clerk’s job is reduced to only a few tasks, which will create social problems for the community and may lead to further issues. • One must also keep in mind that dependency on AI must be limited and kept in check, so as to avoid becoming completely dependent on it; this would create more issues and take away the human touch from the judiciary. • As for the involvement of AI in the vertical structure of judicial governance, we first need to understand whether our current structural system is standardised enough to adopt such advanced AI. • The first goal is to educate these forums, and the people working in them, about advances in AI technologies and how AI can aid their working processes. Secondly, we need trial runs on these forums before finalising a working mechanism. • Thirdly, we need to determine to what extent AI has been used in day-to-day processes, so that it can be utilised to the fullest. That would be the first step towards moving into the age of AI; otherwise all the efforts will be pointless. • Once people are educated enough to utilise these resources, that will give us additional data to innovate on the basic structure of applicable AI. The additional data will greatly help in understanding the composition and interpretation of AI, which can then be iterated upon at a faster pace. • We also need to understand the limitations on the application of AI, and to what extent we are letting this technology in. We have to understand that one should control a technology rather than let it control us. • Of course, anything new and innovative always interests the human mind, but excessive usage of a particular application might result in its own demise.
Plurality of Tribunal mechanisms and dealings • Plurilateralism has always been a part of the international order. Whether disputes are settled by negotiation, arbitration, or before national and international courts, opinions may vary. However, this variety can be used to form and understand best practices of entities across the world. • Judicial actors will have a critical role in integrating the larger international order and its principles into a specialised field of law such as AI. They must also engage in cross-referencing and coordinated decision-making to improve and develop AI norms and values in the future. • Plurilateralism has to maintain the integrity of multilateral agreements and follow the principles of universality, inclusiveness and transparency, because it is often witnessed that plurilateralism fragments and disrupts the larger multilateral process, including multilateral cooperation on different issues.
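The Argentine case-matching approach described earlier, in which an algorithm pairs a new matter with the most relevant prior rulings in a district attorney’s digital library, can be sketched in miniature. The following is a hypothetical illustration using simple bag-of-words cosine similarity, not the actual system deployed in Argentina; all case texts, identifiers and function names are invented for the example.

```python
# Illustrative sketch: ranking prior rulings by textual similarity to a new case.
import math
import re
from collections import Counter

def vectorize(text):
    """Lowercase, tokenize and count word frequencies (bag of words)."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def most_relevant(query, rulings, top_n=3):
    """Return the identifiers of the top_n prior rulings most similar to the query."""
    qv = vectorize(query)
    scored = [(cosine(qv, vectorize(text)), rid) for rid, text in rulings.items()]
    scored.sort(reverse=True)
    return [rid for _, rid in scored[:top_n]]

rulings = {
    "R-101": "dispute over renewal of a taxi license and municipal permit fees",
    "R-102": "contract breach between construction firms over delayed delivery",
    "R-103": "appeal against refusal to issue a taxi operating permit",
}
print(most_relevant("taxi license renewal refused by the city", rulings, top_n=2))
# → ['R-101', 'R-103']
```

A production system would use far richer retrieval (legal-domain embeddings, citation networks, human review of every suggestion), but the core idea of ranking a precedent library against a new matter is the same.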


7 Resolution adopted by the AI General Assembly, 2020, 1st Session on October 4, 2020 Abstract. The resolution adopted by the Assembly was approved unanimously by the delegates of the Assembly.

Statement received by Abhivardhan, President of the Conference Mr Sanjay Notani, President, AI General Assembly, 1st Session, 2020 Dr (H C) Sandeep Bhagat, Vice President, AI General Assembly, 1st Session, 2020



Resolution 0410-AIGA-S1-2020-01-RES (2020) Adopted Unanimously by the Assembly on October 4, 2020. Authored by: Samar Singh Rajput, Dev Tejnani, Kashvi Shetty, Prakhar Prakash,

Anurati Bukanam, Prantik Mukherjee, Sanad Arora, Shruti Somya

The AI General Assembly, Recalling the principles of the Charter of the AI General Assembly, Respecting the principles of international law and human rights, and affirming that securitization and liberalization are equally important; Affirming that the Report 0110-AIGA-S1-2020-01-REP (2020) on the Scope of Splinternet & 5G Governance in Multilateral Governance & Data Sovereignty Policy was passed; Affirming that the Report 0210-AIGA-S1-2020-01-REP (2020) on the Legal and Political Repercussions of Privatization of Autonomous and Augmented Systems in Space and Conflict Activities was passed; Affirming that the Report 0310-AIGA-S1-2020-01-REP (2020) on Assessing the Scope, Liability and Interpretability of the Paralytic Nature of AI Ethics Board in Corporeal Entities was passed;


Affirming that the Report 0410-AIGA-S1-2020-01-REP (2020) on The Role and Framework of Plurilateralism in AI-enabled Crimes and Judicial Governance was passed;

1. Calls for incremental and tailor-made strategic reforms in multilateral governance, and for a role for statutory bodies in establishing risk assessment and analysis measures with regard to the implications of artificial intelligence for human dignity and liberties;
2. Notes the need for multistakeholderism to frame parameters and requirements to regulate AI systems;
3. Calls for a disambiguated categorical assessment of the legal and policy regulation and regularization of artificial intelligence and its affiliated products and services;
4. Recalls the role of independent organisations and individuals in advancing public dialogue, maintaining transparency of purpose, and conducting incremental auditing of AI ethics subject-matters within the ambit of AI Ethics Boards;
5. Notes the role of international organisations in favouring developing countries and allowing sustainable development at levels the respective nations find financially viable;
6. Requests a focus on the role of corporeal entities to ensure that the measures taken while regulating AI systems are diverse and inclusionary in nature;
7. Calls for research groups and independent committees to analyse the functioning of a joint-liability framework for regulating AI in developed and developing economies on an incremental basis;
8. Requests the fostering of public-private regulations which address the scope, limitations, compatibility, accountability, adaptability, profitability, adjudication and proficiency of AI systems;
9. Urges governments and minilateral bodies to further examine the existing laws in the field of AI in different jurisdictions and their relative, transboundary and transnational implications;
10. Urges a focus on rejuvenation and diffusive measures in ethics and thought leadership to ensure cultural diversity in approaches towards establishing AI ethics models;
11. Calls for educating societies about the technological aspects and importance of AI systems;
12. Calls for regulation to emphasize the need for a comprehensive and separate piece of legislation dealing with AI governance in different sectors;
13. Calls for regulation to focus upon the various aspects pertaining to different sectors of industry, and the imperative behind the need for regulation in all sectors when dealing with autonomous systems;
14. Calls for regulation to address the importance of programs to educate society about the innovations of AI, with the idea of preparing for and spreading awareness of the same;



15. Emphasizes that the splinternet is neither discouraged nor promoted but appropriately regularized to an extent, as it can protect and promote national interests and restrict information warfare on the internet, because countries differ in the social, constitutional, administrative and strategic application of morality; 16. Suggests that instead of retaining uniform approaches towards AI for all nations and cultures, personalised AI ethics approaches valuing the cultural identities of different regions could be promoted; 17. Emphasizes that the focus of the international community should shift away from protectionism, e.g. data localization, as the only method of ensuring data protection, towards meeting global standards of data security and data privacy, which would ensure the free flow of data across the globe and aid global economic growth; 18. Notes that the potential for empowerment through the use of robotics is nuanced by a set of tensions or risks relating to human safety, privacy, integrity, dignity, autonomy and data ownership; 19. Urges that policies drafted by countries must ensure that granting any sort of legal personality to AI is not misused to escape contractual or tortious liability where artificial intelligence or its categories or classes are the subject-matter; 20. Considers that civil liability in robotics is a crucial issue which needs to be addressed at the international level so as to ensure a degree of transparency, consistency and legal certainty throughout the world, in order to benefit the multiple stakeholders participating in the development of AI; The Assembly adjourns sine die. 1st Plenary Session, October 2020


Policy Reports on the Research Panels of the Conference



8 Decrypting AI Regulations & Its Policy Dynamics – Panel 1, October 1, 2020 Vrushali Marchande Rapporteur, Indian Conference on Artificial Intelligence and Law, 2020

Opening Remarks The Panel Discussion was hosted by Adv. Sushanth Samudrala, Cyber Law Expert & Technological Consultant, CEO of Sushant IT Law Associates, and Member, Expert Panel on Law, Technology and International Affairs for the Indian Society of Artificial Intelligence and Law. He first introduced the speakers: • Mr Sebastian Lafrance, Crown Counsel for the Public Prosecution Service of Canada, member of the Law Society of Ontario and of the Law Society of Quebec, and counsel for the Law Branch of the Supreme Court of Canada. • Ms Luna de Lange, Partner and Data Protection Officer; Ambassador, Adviser and Partner of the Fintech Working Group of the Arab Monetary Fund; and Member of the Legal Practice Council of South Africa. • Mr Bogdan Grigorescu, AI Platform Manager at Combined Intelligence, engaged in the large-scale implementation of communication platforms worldwide. He has led the implementation of a platform at MNS underpinning the experience for customer-calling retail services. • Mr Christoph Lutge, Director of the TUM Institute of Ethics in Artificial Intelligence at the Technical University of Munich. He holds the Peter Löscher Chair in Business Ethics at the university, and is a member of the Ethics Commission on Automated & Connected Driving of the German Federal Ministry of Transport & Digital Infrastructure, as well as of the European AI ethics initiative AI for People.


Ethical Risk to AI, and Legal Technology from an Indian Perspective Mr Christoph Lutge first thanked the organizers, and stated that the adoption of artificial intelligence by companies, and the additional risks involved, for example in terms of increased intrusion, discrimination, and security and privacy issues, are important enough for consideration. He then cited a significant quote by Theresa May, the former British Prime Minister: “In a global digital age there is a need for norms and rules to be established and shared by all, which ultimately includes establishing rules and standards that can make the most of AI.” Ms Luna de Lange spoke on legal technology from an Indian perspective, referring to the tools implemented by the courts and introduced at the announcement made on Indian National Constitution Day, 26 November 2019. Adding to Ms Luna’s viewpoint, Adv. Sushanth stated that cyber security is an important paradigm to be considered from a broader perspective with regard to legal technology. Mr Bogdan Grigorescu stated that AI systems are not morally accountable agents. Target principles such as fairness, accountability, sustainability and transparency are meant to fill the gap, via critical methods, between the new smart agency of machines and their fundamental lack of moral responsibility. Bogdan believes that all AI systems have to be designed to facilitate end-to-end answerability and auditability. There is a need for an active monitoring protocol to enable end-to-end oversight and review, going hand in hand with observability of the entire system. Mr Bogdan also spoke specifically of public sector accountability issues such as: • the accountability gap and the complexity of AI systems; • the production process.

Global Bodies in AI Mr Sebastian firmly believes that there are no borders in perspectives towards AI, and that non-regulation persists in countries like China, the US and others for various reasons. Given challenges such as the required establishment of binding international regulations and legal instruments, he is of the view that policy constraints in AI regulation should be tackled at a domestic and realistic regional level. Meanwhile, Prof Christoph stated that a normative culture is important in AI ethics, with the need to go beyond developing detailed ethical guidelines for AI, keeping transparency, fairness and accountability as very important considerations, under due supervision and governance of such technology in its implementation and use. Ms Luna stated that there are several global bodies to be looked at in AI, such as the World Economic Forum, the G20 and the OECD. Luna further stated, from an AI ethics perspective, that there is an agreement between all the bodies governed under the UN that “no country or



entity would undertake the building of AI technology so that it gains consciousness”. It is imperative to have enforceable regulation in order to prevent, reduce and mitigate the issues that fall within the domain of AI.


9 Algorithmic Diplomacy, Geopolitics & International Law: A New Era – Panel 2, October 2, 2020 Vrushali Marchande Rapporteur, Indian Conference on Artificial Intelligence and Law, 2020

Opening Remarks The panel discussion was opened by the moderators, Mr. Abhivardhan (Founder and CEO, Internationalism; Founder & Chairperson, Indian Society of Artificial Intelligence and Law), and Mr. Akash Manwani (Research Analyst, Internationalism). Mr. Abhivardhan then introduced the speakers: • Mr Roger Spitz, Founder, Techistential; Chairman, Disruptive Futures Institute. • Mr Emmanuel Goffi, Director of the Observatory on Ethics & Artificial Intelligence, Institut Sapiens. • Ms Nicol Turner Lee, Senior Fellow, Center for Technology Innovation, The Brookings Institution. • Mr Eugenio Vargas Garcia, Senior Advisor to the President of the UN General Assembly. • Mr Bogdan Grigorescu, AI Platform Manager, Combined Intelligence.

Algorithmic Developments and Human Relevancy Mr. Spitz started by describing how machines were used in the past for optimisation and repetitive tasks. But looking at decision making as a value chain, it is not infeasible to imagine AI tackling human-dominated domains of creativity in the future. This journey will continue, irrespective of its outcome. What humans need to do to stay relevant is to understand where humans bring value. However, data and AI are less valuable in a complex domain than in a domain with known unknowns and a range of right answers, with transparency and understanding of causality. Humans need to be more agile, anti-fragile, and adaptable, and need to understand whole systems so as to realise where their value lies.



An Independent, Multilateral Body on AI and Algorithmic Diplomacy Mr. Garcia stated that AI is a General Purpose Technology (GPT), and may pose risks that need to be mitigated through certain standards and norms. There is, however, no UN multilateral body on the subject as of yet, for many reasons: technology has been evolving too fast, and national legislations are still being discussed in many countries, making it difficult to engage in international deliberations on AI. People are also sceptical of any substantive agreement among the big tech companies, and reaching a workable international consensus looks to be a remote possibility. An organisation overseeing the subject in the future would discuss cooperation, and oversee, guide, and set standards on AI policies globally to promote safe, beneficial AI. It would, however, be difficult to strike a balance between soft law and binding regulations. One idea that could be explored would be to create a body on AI similar to the Intergovernmental Panel on Climate Change (IPCC) to provide technical advice on AI policymaking.

Vices of Algorithmic Autonomy Ms Lee suggested that technology has the ability to enhance efficiency. We have seen technology become useful in a variety of sectors; it has allowed us to do things with unforeseen pace and decisiveness, and will continue to advance public-good applications in areas such as poverty, food, and health. But at the same time, it can have vices, and create systemic biases constricting people into certain realities. For example, the use of technology in elections can both expand freedom of speech and help spread misinformation at the same time. Another example is the use of algorithms in criminal justice. Risk assessment algorithms can err on the side of over-criminalising certain populations because the feeder data started off flawed. How we look at the design of AI comes with humans’ lived experiences. The very thing that we are designing to accelerate the speed of thought may have the ability to box in certain people. If one does not couch these issues around civil and human rights within regulatory frameworks, it is possible that we may have different sets of norms and values around machine learning algorithms.

Concept of an Ethical AI Mr Goffi noted the inherent philosophical difficulty in answering the question, reflected in the varying and complex definitions of ‘responsibility’ and ‘trustworthiness’ in different parts of the world. Moreover, one cannot apply those words to a technology. AI, until it is autonomous and sentient, is a tool, and trustworthiness comes from the people developing the tool. It is important not to use the terms as marketing tools to try to make people more comfortable with AI. He also noted that vigilance is, in his opinion, more important than trust. The tool itself does not matter


for moral judgments. He also pointed out that most of the ethical debates are dominated, at the international level, by Western countries. The rest of the world has simply decided to adopt the same, despite the cultural differences and how they resonate differently in the minds of people of different countries.

An Autonomous, Sentient AI Mr. Grigorescu agreed with Mr. Abhivardhan that AI only needs to be good enough to be better than humans, noting how humans evolve at a much slower pace. Some researchers believe that a general AI will not be built in our lifetimes, but he noted that AI does not need to be truly autonomous to be disastrous. Technology is neutral, and the onus is on us as to what kind of society we want to live in. He agreed with Mr. Goffi that we need to look at these concepts from diverse viewpoints, and with Ms. Lee about human rights issues related to technology. Commenting on the theory of AI taking over the world and enslaving people, Mr. Grigorescu remarked that people enslave people, not technology. He prefers to be wary of technology and vigilant of new technology.

Algorithms’ Impact on Diplomacy Mr. Garcia noted that social media algorithms can make billions of people believe in something, and can manipulate public opinion, in a way similar to Nazi propaganda in the 1930s. The issue is the power of AI as a tool of manipulation that can be strategically used by a political regime for nefarious purposes. We face the problem of the democratisation and normalisation of disinformation. There is also a debate on how applications and social media are contributing to increased polarisation by trapping people within their bubbles of information. This can have a major impact on diplomacy as well, creating tension between traditional diplomacy and the change brought about by the digital revolution. The pandemic has accelerated the adoption of many digital technologies, and diplomats have had to adapt very quickly. But virtual meetings do not allow for face-to-face interaction, making negotiation on sensitive topics difficult.

National Level Impact of AI and Algorithms Ms. Lee talked about the constant struggle in the race to AI, with nations trying their hardest to become forerunners in AI research. But we also see disparate groups around the world trying to come up with standards that may not actually materialise into operations and action statements. We see a digital divide, and the extent to which we are not counting a majority of humans in the development and regulation of disruptive technology because they are not connected. She talked about the scary prospect of what Dr. Ian Bremmer has termed an AI Cold War, which has a lot to do with both disparate investments and the positioning of AI in different



countries as to its worth. She talked about how the American election may shape AI policies in the USA, and how different the approaches could be under a Biden administration than under the Trump administration, though it may also depend on who they put into office. However, despite AI’s potential to solve human problems, Ms. Lee stated that AI systems can be weaponised and compromise sensitive data systems such as finance and policy. We also need to think about who will sit at the table so as to ensure the development of an ethical, legal AI; these issues are pan-national.

Importance of AI Ethics Mr Goffi highlighted the importance of AI ethics, especially in the absence of a legal framework. While he prefers stringent regulations, he notes that the technology is evolving too quickly, making it difficult to find the right kind of legal framework. On the other hand, we should not hide behind cosmetics and make things seem better than they actually are; these terms need to be used cautiously. While ethics are important, ‘ethics’ is also used as a buzzword owing to the complex, philosophical nature of the field. When one hears someone talk about ethics, especially those without a background in philosophy, such as companies and politicians, most of the time it is cosmetic. The problem is that there is no way to apply ethics with certainty. And in the absence of a regulatory framework, there is no constraint or sanction behind the use of the term ‘ethics’. One needs to make sure not to fall into the trap of cosmetics. There needs to be strong conviction behind the importance of ethics, and the term should not be used just to sell the product.

Role of Splinternet and 5G Mr. Grigorescu stated the importance of 5G as an enabler for AI systems. It is a pipeline for the transfer of huge amounts of data in a short period of time, with lower latency. For systems such as the Internet of Things (IoT) that require low latency, 5G acts as an efficient fuel. It also mitigates the congestion and bottlenecking of radio frequencies for a while, and addresses broader issues such as general network planning and resolving outages. It also exponentially increases the number of devices that will connect to the internet. This may increase security risks, but it is also a space for machine learning to help fix the issues that crop up. Machine learning with 5G can also be used for near-instantaneous technical decision making without the need for Wi-Fi. We also need to ensure that data is not bundled into insulated silos that would hamper AI development.


New Era: Impact of AI on Geopolitics in the Future Mr. Spitz talked about how most information is driven by commercial entities rather than public institutions. The unlimited amount of information is transforming society, and technology has blurred the lines around the provenance of information. China also has a strong strategy on AI and has established itself as one of the forerunners, in part due to its competitiveness and fewer safeguards. An example of this is machine-driven surveillance, which may have huge ramifications for the positions of different countries on topics of cybersecurity. It is also interesting to see what will come out of the recent ‘US–UK Artificial Intelligence Research & Development Agreement’ and how this geopolitical alliance is driven by information and data. We can also see an increase in polarisation due to disinformation and misinformation, and it is interesting to note in whose interests political destabilisation may be.

Concluding Remarks Various issues were discussed, ranging across AI ethics, geopolitics and diplomacy. One solution to the problems highlighted might be, in the opinion of Mr. Abhivardhan, for companies and governments to collaborate on better principles and solutions, in order to come up with transitional principles that do not outpace the development of AI itself.



10 Algorithmic Trading & Monetization: Policy Constraints for Disruptive Technologies – Panel 3, October 3, 2020 Mayank Narang Rapporteur, Indian Conference on Artificial Intelligence and Law, 2020

Opening Remarks The panel discussion was opened by the moderator, Arletta Gorecka, PhD Candidate, University of Strathclyde. She first introduced the speakers: • Mr. Ratul Roshan, Associate, IKIGAI Law. His engagement with AI is mainly at the policy level. IKIGAI has also assisted governments in a limited capacity by sending in comments, positions, and opinions on how the AI ecosystem will best develop in the country. • Ms. Pooja Terwad, Start-up Lawyer, Pooja Terwad and Associates, a start-up-focused law firm that has worked with 500+ start-ups and 100+ investors. She has also advised the Gujarat and Maharashtra governments on their start-up policies, especially for tech-based start-ups. • Dr. Raul Villamarin Rodriguez, Dean, Woxsen University, Hyderabad, India. He specialises in the area of AI and machine learning, with a focus on quantum computing. • Ms. Akshata Namjoshi, Lead Fintech, Blockchain and Emerging Tech Counsel, KARM Legal Consultants. She has advised multiple clients on Security Token Offerings and Initial Coin Offerings, the setting up of cryptocurrency exchanges, and the deployment of public and enterprise blockchains. Additionally, she has advised clients on policies and compliance for the use of artificial intelligence in the financial sector. The floor was then passed to Mr. Ratul Roshan. AI is a fairly nascent industry in India. There is always a tussle between the Ministry of Electronics and Information Technology, the NITI Aayog, and the Department of Telecommunications regarding its regulation. The DoT in its discussion paper talks about a central AI regulator responsible for overseeing developments in AI across all sectors, but ignores that AI is a sector-driven initiative; such a regulator could become a detriment to AI development. Another recommendation is to create a centrally controlled database of AI data, which ignores possible allegations of data and IPR misuse, and also means that one loses some control over how the data is used by other parties. Prescriptive hard regulations on how AI should develop are an anathema to AI development, and will always be opposed by companies.

Balance between allowing the creator to benefit while ensuring accessibility by issuing a non-assertion pledge when open sourcing algorithms One may want to voluntarily make one’s algorithms open source, or commercialise them, depending on the products that one comes up with. The problem is not with what private parties want to do with IPR. The problem is the government’s intervention in the development of the algorithm, and the ‘voluntary mandatory’ participation in the DoT’s AI Stack. A lot of parties have come out and said that the Indian government does not have the technical wherewithal to execute such a program. However, this is only a discussion paper right now, so there is time to modify it. And as to whether the AI Stack will stunt innovation, the answer is yes. The government should not mandate the sharing of data; that should be left to the entities to decide for themselves. Such a mandate might also go against India’s obligations under the TRIPS Agreement and many IPR protections that Indian law affords. All of these are open questions, part of a larger discussion on the government’s ‘Go Local’ initiative.

An Indian sector with substantial movement of AI: Agriculture (AgriTech) There is no industry that will not leverage AI looking forward. The one major sector that the government is trying to promote is agriculture. AgriTech is an emerging sector, partly due to the government’s report on how to double farmers’ income by 2022. That started a snowball effect, with the NITI Aayog, the Ministry of Commerce and Industry (MOCI), and the Ministry of Electronics and Information Technology (MeitY) chiming in on how to leverage AI in the field of agriculture: remote sensing, weed detection, climatic predictions, animal recognition, etc. Even the agricultural infrastructure fund worth Rs 1 lakh crore had a distinct section dedicated to agri-entrepreneurs to ensure AI is leveraged in agriculture. The NITI Aayog also came up with AIRAWAT (AI Research, Analytics, and Knowledge Assimilation Platform), a common cloud platform to leverage AI in India with both public and private participation, and an entity can leverage the data



stored there in the development of AI. The Agri Stack will comprise three stacks: the farmer stack (identification of farmers), the farm stack (identification of the dimensions and requirements of farming needs across the country), and the crop stack (identification of the needs and impediments of specific crops in the country). So, the government is definitely focussing on agriculture more than any other field at this time.

The effect of the new data protection law on emerging tech companies, especially FinTech companies The RBI, in 2018, came up with regulations on the storage of financial data, stating that an Indian entity cannot send financial data outside India except under certain stringent exceptions. The clash with the proposed Personal Data Protection Bill (PDP Bill) related to the treatment of financial data as Sensitive Personal Data (SPD), which is subject to a totally different set of stipulations, at loggerheads with the RBI regulations; both try to regulate FinTech to some degree or another. The effect is more confusion, and there is a need for greater alignment between sector-specific regulations, such as those of the RBI, and over-arching regulation through the PDP Bill. The floor was then passed to Ms. Akshata Namjoshi.

Impact of algorithms on the FinTech industry, and the governance issues for the same There is very little policy discussion around how AI and ML are to be treated within FinTech; they are equated with any other emerging technology. Anything that does not fit into the existing regulations can find itself regulated in a closed environment such as a regulatory sandbox. Different sectors in FinTech are affected by AI, especially compliance, financial forensics, lending, and digital investment management. The problems vary by sector, but depend largely on the level of reliance on AI and the level of interaction of a consumer with the AI solution. For the AI compliance sector, the problems are the lack of policy conversations and the fact that compliance-based answers are drawn from existing regulations, which may not sit well with technological innovation. For money laundering, regulators have been positive about relying on technological solutions to crack down on financial crime, especially after the FATF Regulations. For loans and lending, the change in thought process is more related to the FinTech sector (and AI has taken a backseat). Regulators, at least in the Middle East, have looked more at the outcomes of technological innovations than at the innovations themselves, and are


weaving the regulations backwards. The major issue is in the field of digital investment management, especially robo-advice. There are major concerns in analysing and assessing what the consumer is actually relying on, which makes liability difficult to assess.

Role of algorithmic trading in open-banking and finance There are two ways to look at it: the use of AI in open banking and finance solutions, or the use of AI parallel to open banking and finance. The aim of open banking is to bring control back into the hands of the users. When it comes to AI, the question is whether one is relying on an AI-based outcome or solution, and whether one’s trading and investment pattern data is being collected, processed, and sold. As far as account information services and open banking services are concerned, AI plays a supportive role. But for payment initiation services, regulating AI could be difficult: one gives charge to a FinTech solution, which gathers the information, the consent, and the first layer of authentication. By introducing AI into that, a consumer’s interaction with both the regulator and a human being is significantly reduced, which would be hard to regulate. Open banking and finance’s interaction with AI is, thankfully, still a few years away.

How the Middle East is using technologies such as robo-advisory in the Banking, Financial Services, and Insurance (BFSI) market The regional dynamic is different in the Middle East, with a massive population of High Net-Worth Individuals (HNIs). There is huge potential for mid-level financial trading, wealth, and investment management activities to take place, which might not be the case in India. The entry of robo-advisories into the industry was unexpected. The regulators have stated their intention to treat them like any other FinTech solution and to require compliance with any specific financial regulations in existence. The regulators are looking more towards the output that flows to a consumer than the technologies behind the output, but they have not restricted the technology’s entry and have allowed it to seep into the market. The floor was then passed to Dr. Raul Villamarin Rodriguez.



Some of the revolutionary trends in the education sector with the impact of AI AI and ML are moving beyond the classroom setting, for example, taking automated attendance through facial recognition and monitoring unfair practices during examinations. At Woxsen University, a chatbot has been developed that is logged with all the FAQs that students might have, and is integrated with digital services such as Amazon Echo and Google Home. AI can revolutionise the education sector, but that will depend upon the country that implements it and its laws, since the sector is extremely difficult to regulate. He then posed two questions for the audience to ponder: what to do when AI goes wrong, and how to eliminate the programmer’s bias that is ingrained in the code?
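An FAQ chatbot of the kind described above can be sketched in a few lines. This is a hypothetical illustration only: the FAQ entries, the word-overlap matching, and the `answer` function are invented for this sketch and are not details of the Woxsen system.

```python
# Minimal sketch of an FAQ-logged chatbot: match the student's question
# against stored FAQs by counting shared words, and return the reply of
# the best-matching entry. All entries here are invented examples.
FAQS = {
    "what are the library opening hours": "The library is open 8 am to 10 pm.",
    "how do i reset my student portal password": "Use the 'Forgot password' link on the portal login page.",
    "when does the semester begin": "The semester begins in the first week of August.",
}

def answer(question: str) -> str:
    q_words = set(question.lower().split())
    best, best_overlap = None, 0
    for faq, reply in FAQS.items():
        overlap = len(q_words & set(faq.split()))  # shared words with this FAQ
        if overlap > best_overlap:
            best, best_overlap = reply, overlap
    return best or "Sorry, I don't know that one yet."

print(answer("When does the semester begin?"))
# -> The semester begins in the first week of August.
```

A production system would replace word overlap with intent classification or embeddings, but the retrieval-over-logged-FAQs structure is the same.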

New developments in augmented reality (AR) and virtual reality (VR) in conjunction with AI AR and VR have developed in the areas of tourism, shopping, entertainment, etc. The three key areas of interest are AR/VR and psychology (for the treatment of phobias represented in the virtual world), AR and online avatars, and VR and 5G internet. The floor was then passed to Ms. Pooja Terwad.

How start-ups have adopted different algorithmic tools and the governance issues they face Start-ups in India operate with only small budgetary allocations, and a very small number of start-ups work with core AI and ML. For AI to show results, a very strong understanding of the technology is essential, which is clearly lacking among start-ups. The second problem is that it takes considerable time and capital for a new technology like AI to show results, which is another challenge for start-ups. We therefore see more development in ready-to-use technologies. As for the governance challenges, the current government has tried to allocate funds and take steps, but conventional laws have hindered development.


Regulatory issues that India is facing now in the areas of cryptocurrencies and blockchain Regulatory uncertainty is not exclusive to India. The RBI and the entities engaging in cryptocurrencies have been competing since 2016-17. Before we look at cryptocurrencies and blockchain and their laws, we need to look at whether India is ready for them. During the pandemic, India has experienced peak trading volumes for cryptocurrencies as people look for an alternative to the USD. Some banks have also started to work with cryptocurrency firms, and the industry is going through a revival right now. Therefore, it remains to be seen whether the Indian regulators will provide a clear set of rules for a healthy business environment.

Issues related to assessing liability in the healthcare sector caused by AI This problem is not limited to the healthcare sector, and is a focal point of the debate about AI in India. When it comes to healthcare, there are a lot of start-ups that want to revolutionise the industry, but the question is how to protect consumers. The first aspect is the Consumer Protection Act: even in a traditional medical negligence claim, the consumer has to go through a lot of turmoil to prove the violation. The first and most important step would be to make the judicial system aware of the impact of AI in the first place.



11 Artificial Intelligence and its Synchronous Implications to Ecological Data Solutions – Panel 4, October 4, 2020 Mayank Narang Rapporteur, Indian Conference on Artificial Intelligence and Law, 2020

Opening Remarks The panel discussion was opened by the moderator, Mr. Abhivardhan, Founder and CEO, Internationalism, and Founder & Chairperson, Indian Society of Artificial Intelligence and Law. He first introduced the speakers:
• Mr. Rodney D. Ryder, Founder, Scriboard, a commercial law firm.
• Mr. Raghav Mendiratta, LLM (London School of Economics and Political Science), Legal Research Scholar, Stanford Centre for Internet and Society; Legal Researcher, Columbia Global Freedom of Expression.
• Mr. Pinaki Laskar, Founder, FishEyeBox; Expert Member, Indian Society of Artificial Intelligence and Law.
He then passed the floor to Mr. Raghav Mendiratta.

Data Sciences and AI Mr. Mendiratta first thanked the organisers, and stated that AI is being looked at as the magical panacea to solve all of the world’s problems, despite the technology itself being value neutral. To give an example: in 2018, big oil invested more than a billion dollars in AI in order to better predict the locations of oil reserves and how to extract more oil from existing reservoirs, and Google and Amazon have also started partnering with big oil. AI, while not being the be-all and end-all, has massive potential not only to mitigate current greenhouse emissions but also to support the adoption of new technologies that further the goal of countering climate change. For example, AI can help streamline existing power generation to further optimise the existing power grids, and AI is also being used to develop cleaner technologies such as cleaner, more efficient batteries. But with potential come challenges, and therefore we must take a balanced approach to the regulation and development of AI.


Mr. Laskar stated that the current AI is not AI in its true form: it is not intelligent, not sentient. The conflict is not between humans and machines. The aim is to achieve a General-Purpose Technology (GPT). One cannot regulate (true) intelligence, a form of mental ability, through law. Currently, both humans and machines are biased, which is also something to remove from the four-dimensional model that we are trying to achieve. One-dimensional thinking is a narrow AI focussed on a particular vertical. Two-dimensional thinking consists of different verticals. Three-dimensional thinking is the mental model. The fourth-dimensional model is the AI thinking maze, which will enable machines to operate in terms of actual intelligence, as compared to the current algorithmic, machine-learning phase focussed on automation. Mr. Abhivardhan agreed that the evolution of artificial intelligence cannot be equated with human intelligence, as current AI is dominant in the data sciences, with automation, augmentation, and analysis. He then posed the question of AI’s involvement in environmental issues.

Regulation of AI in India and South Asia Mr. Ryder stated that when one looks at the issue from a legal perspective, the question is how to assess and assign liability. Will AI be considered a person, and to that personhood can we attribute morality, cause, and legality? Can we also attribute ownership or agency? And if an AI performs actions unforeseeable by, or contrary to the intentions of, its creators, then who should own the new IP? The same questions arise for the deployment of AI in ecological systems for data collection and the analysis of big data. When we look at compliance in the present system, the question remains whether the current regulatory toolkit is sufficient. What kind of regulations will be made in the future? Do we have an idea for a proposed structure of AI law? Mr. Mendiratta agreed with Mr. Ryder that the current regulatory toolkit is insufficient. He noted the use of AI in the criminal justice system, and how issues similar to those that can be raised in the field of ecology and environmental law came up in that debate as well: biases, morality, transparency, and privacy. In the EU, the GDPR has provisions on automated profiling, showing how some jurisdictions are more advanced. When we look at AI in terms of developing ecological data solutions, we need to be careful to ensure that AI does not replace human intelligence, but enhances it. He made an analogy with how AI can build upon designs by architects to make buildings more energy efficient. In 2018, Google gave an AI the duty of controlling the air conditioning in one of its data centres, using local weather forecasts, building occupancy, and communication with the energy grid. Mr. Laskar noted the effect of automation and the importance of laws regulating AI. The first step in that process would be laws related to data collection, processing, and usage, since current AI models rely heavily on large amounts of data. The system in which the AI operates, whether businesses, developers, organisations, or users, needs to be held responsible. Everybody in the system is liable to a variable extent, and we need not restrict ourselves to one vertical.



Mr. Abhivardhan made an analogy with what Steve Jobs used to say about the Macintosh: that it is an extension of humans and our bodies. He added that the concept of explainable AI is an interesting one, alongside responsible AI and ethical AI. In the Global South, policy measures can only be clarified when combined with conviction of action, which is lacking. He asked Mr. Ryder for his thoughts on the legal framework, considering the DoT’s paper on the AI Stack and the Personal Data Protection (PDP) Bill.

Technology Diplomacy Mr. Ryder stated that the government has prepared the groundwork with the privacy apparatus that is at the centre of the PDP Bill. How that apparatus, i.e., the data commissioners and authorities, will actually function and implement the policy is a point of concern. Even in the EU, the GDPR is implemented slightly differently across the member states. Even in India, certain laws are implemented differently depending upon who is implementing them. Whether the data protection authorities will remain independent and stand up to powerful fiduciaries remains to be seen. Mr. Abhivardhan added that technology diplomacy and sustainable development are often dictated or represented by the United Nations, and the G4 countries and P5 members are focussing on both aspects. Can we see some ground for growth and development there, considering that the two are being transfused together? Mr. Abhivardhan posed this question to Mr. Mendiratta, who added that cooperation between nations and the unification of legal standards across different jurisdictions are extremely important for data privacy and the governance of emerging technologies. He recommended that multi-stakeholder consultations across jurisdictions be held before the adoption of new rules. He cited the example of the government of Pakistan releasing its model intermediary guidelines (the Citizen's Protection (Against Online Harm) Rules) governing social media platforms, where the big tech companies banded together and threatened to exit the country, as the rules were considered regressive. He further added to Mr. Ryder’s open-ended question by asking whether the data protection authorities can stand up to the government as well, considering the PDP Bill allows wide exemptions to the State.

Concluding Remarks: What creative avenues are possible for disruptive technologies in the future? Mr. Ryder, optimistic about the future, stated that it is up to the legislature to keep up to date with recent trends, step up to fill the existing regulatory gaps, and modify the standard operating procedures. The executive also needs to look at how


the laws can be enforced in an efficient manner. Data protection has to be interpreted in favour of the data subject, and that should be at the core of what our future should be. Mr. Laskar talked about two of his own projects: one that scrapes a person’s social media to assess their mental health status and help them, and one, similar to deepfakes and synthetic media, that changes one’s voice and face in real time. There is apparent danger in the gaps in the legal framework in this area, and laws need to be amended to fill these gaps. Mr. Abhivardhan underlined the importance of transparency and robust accountability paradigms. Building on Mr. Ryder’s point, he noted that a country is helped the most when it learns humility. Mr. Mendiratta concluded by talking about two principles that are central to the development of AI and of any legal framework that comes up to govern a new technology. On the technological side, a socially beneficial motive should underlie the development of any new disruptive technology, and at the development stage itself, developers must be cognisant that their technologies should not lead to the creation or reinforcement of unfair institutional biases that may already exist. On the legal, regulatory side, an adequate accountability framework and the incorporation of privacy by design are important. These should not be enforced or developed by the judiciary, but should be part of any legal framework, by design. These would prove to be a good starting point moving forward. To give an example, Mr. Mendiratta noted, while answering an audience question, two inherent risks in using AI to forecast crime: reinforcing socio-economic biases, and privacy concerns. When AI uses a predictive policing model, it is bound to reinforce certain biases. For example, there is a possibility that a higher rate of crime might be reported in densely populated settlements, and an AI can wrongly identify such settlements as crime hotspots.
The use of such feeder data in predictive policing technologies, in the absence of a robust data protection mechanism, can be inherently violative of the constitutional right to privacy.
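The reinforcement dynamic described here can be made concrete with a toy simulation. All numbers below are invented for illustration: two districts have identical true crime rates, but patrols are allocated in proportion to past reports and new reports scale with patrol presence, so the initial reporting bias against the denser settlement is never corrected.

```python
# Toy model of a predictive-policing feedback loop (hypothetical numbers).
# Both districts have the SAME true crime rate; only the initial feeder
# data is biased against the dense settlement.
TRUE_CRIME_RATE = 0.1
reports = {"dense": 20.0, "sparse": 10.0}  # biased historical reports

for _ in range(50):
    total = sum(reports.values())
    for district in reports:
        patrols = 100 * reports[district] / total  # allocated by past reports
        observed = patrols * TRUE_CRIME_RATE       # what patrols "find"
        reports[district] += observed              # fed back into the model

share_dense = reports["dense"] / sum(reports.values())
# The dense district still accounts for two-thirds of all reports: the
# model never discovers that the underlying crime rates are equal.
print(f"{share_dense:.2f}")  # -> 0.67
```

The sketch shows the weaker failure mode (bias that never washes out); real systems where patrol presence increases the detection rate non-linearly can actively amplify the initial skew.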



Papers Presented in the Track Presentations of the Conference


12 Dr Robot went crazy – Liability issues arising from a medical device’s hackability Tomás Gabriel García-Micó Universitat Pompeu Fabra, Ramon Trias Fargas 25-27, 08005 Barcelona, Spain

Abstract. Is a medical device that can be hacked defective? Is a medical device that has been hacked defective? In 2017, Billy Rios and Jonathan Butts discovered some concerning vulnerabilities in some medical devices produced by the company Medtronic. According to their research, a hacker was able to exploit the vulnerabilities of the device to withhold insulin from patients or to give them a lethal dose of it. This case has been summarised by Kaiser Health News as one of the cases of the ‘Dark Side of Artificial Intelligence’. It is not the first, nor will it be the last, episode of the ‘Dark Side of Artificial Intelligence’ (as it was baptised by Kaiser Health News) applied to the medical field. Another case can be found in Watson for Oncology (this time, not related to a hacking problem): a device that, without any apparent reason, started to prescribe drugs with bevacizumab to patients suffering from blood coagulation disorders, even though this active principle is expressly contraindicated for patients suffering from these disorders. Two cases are thus used: in one, a vulnerability to hacking was exposed; in the other, the system (Watson for Oncology, the closest example to Dr Robot we have) started acting in an unpredictable way without a reasonable explanation. Both were technologies bestowed with AI capabilities, and both were used in the medical field. This is a major public health issue, and one must be precise in the use of words: is a medical device that can be hacked defective? Is a medical device that has been hacked defective? This is the key question that this paper will address, accounting for the case law and the scholarship that has been produced around these questions. Keywords: Hackability, Medical Device, Defect, Defective Product, Artificial Intelligence, Healthcare, Technology and the Law




Introduction: The ‘Dark Side of Artificial Intelligence’ in the Healthcare Industry

When one recalls Shakespeare’s masterpiece Hamlet, one of the most famous quotations in the history of literature readily arises: to be, or not to be: that is the question. In the case of this article, the quotation fits perfectly with some amendments: to be hackable, or to be hacked: that is the question. This article, in fact, deals with that precise research question: might a hackable medical device be deemed defective? Does the answer change if the same device is hacked? The prima facie answer to the latter is affirmative: it is completely different for a device to have been hacked than for a device merely to be hackable. Tort Law and Product Liability Law require that an effective damage has been suffered by the claimant (Naveira Zarra, 2004, 22). In many US jurisdictions, this specific topic, as we will see later on, is discussed on the grounds of procedural standing to bring an action against a potential tortfeasor under Article III of the United States Constitution.1 Thus, the mere fact that a medical device is hackable is worrisome, and might be deemed ethically unacceptable, but legally irrelevant: law deals with realities, not with potential realities. Yet, as we will see, hackability may, in some cases, be a sufficient basis for a product liability claim. It is for this reason that the two realities must be, and will be, distinguished in this article: Section 2 deals with hackable devices, while Section 3 deals with actually hacked devices.


Hackable devices

By hackable devices we refer to those devices that, due to a cybersecurity vulnerability affecting the software, are susceptible to illegitimate breaches that may result in harm. The keyword here is susceptible. It is what makes the difference between hackable devices and actually hacked devices: in the first case, there is a potentiality, a risk (a severe one), but a potential risk that has not had any effect in reality. This section of the paper analyses how the case law and the legislation have dealt with this topic.


See, among others, Flynn v. FCA US LLC (United States District Court for the Southern District of Illinois, case number 3:15-cv-00855), citing Cahen v. Toyota Motor Corp., 147 F.Supp.3d 955 (N.D. Cal. 2015), aff’d by Cahen v. Toyota Motor Corp., 717 Fed.Appx. 720 (9th Cir. 2017); U.S. Hotel and Resort Mgmt., Inc. v. Onity, Inc., 2014 WL 3748639 (D. Minn. July 30, 2014).



Real cases of hackable medical devices

According to MedWatch, the FDA Safety Information and Adverse Event Reporting Program,2 there have been five incidents related to the ‘cybersecurity’ of medical devices. See the image below.

Image 1. MedWatch reports filtered as affecting medical devices

If the search is done without using the product-type filter, 10 results show up. The five that did not appear in the previous search are also medical devices, even though they are not filtered as such in the database, as one can see in the image below.





Image 2. Five additional cybersecurity incidents affecting medical devices, but not labeled as such under product type

Of these alerts, three are related to Abbott’s implantable cardiac pacemakers. The first was reported in January 2017 and is related to the second report, of August 2017. The FDA issued a safety communication regarding some of Abbott’s (formerly St Jude Medical’s) implantable cardiac pacemakers, for patients suffering from bradycardia or in need of resynchronization to treat heart failure, due to a firmware vulnerability detected by the FDA that would allow ‘an unauthorized user (i.e. someone other than the patient’s physician) to access a patient’s device using commercially available equipment. This access could be used to modify programming commands to the implanted pacemaker, which could result in patient harm from rapid battery depletion or administration of inappropriate pacing’ (FDA, 2017; FDA, 2017b). Nonetheless, the FDA stated that it had no report of harm caused to a patient. In fact, since 2016, manufacturers of networked medical devices susceptible to cybersecurity vulnerabilities may follow the recommendations issued by the FDA on the postmarket management of cybersecurity (FDA, 2016). In April 2018, the FDA issued the third safety communication related to Abbott’s implantable cardiac pacemakers, this time concerning cybersecurity vulnerabilities that might allow unauthorized access to the device’s system and might lead to unexpected battery depletion (FDA, 2018). In October 2018, the FDA published a safety communication about the Medtronic CareLink and CareLink Encore Programmers, models 2090 and 29901, informing that Medtronic was issuing a software update to address the cybersecurity vulnerabilities reported in these devices (FDA, 2018b).
In early 2019, the FDA issued another safety communication regarding Medtronic’s implantable cardioverter defibrillators and cardiac resynchronization therapy defibrillators, ‘associated with the use of the Conexus wireless telemetry protocol which is used as part of the communication method between Medtronic’s ICDs, CRT-Ds, clinic programmers, and home monitors’; this would allow unauthorized users to ‘access and potentially manipulate an implantable device, home monitor, or clinic programmer’ (FDA, 2019). According to the manufacturer, as of the date of the


last reported update (4 June 2020), there has not been any reported case of illegitimate breach of these devices (Medtronic, 2020). In late 2019, three cybersecurity incidents were reported in MedWatch: MiniMed Insulin Pumps with vulnerabilities that might allow an attacker to ‘change the pump’s settings to either over-deliver insulin to a patient, leading to low blood sugar (hypoglycemia), or stop insulin delivery, leading to high blood sugar and diabetic ketoacidosis’ (FDA, 2019b); 11 vulnerabilities in medical devices using the IPnet software component, present in the operating systems VxWorks, Operating System Embedded, INTEGRITY, ThreadX, ITRON and ZebOS, which might ‘allow anyone to remotely take control of the medical device and change its function, cause denial of service, or cause information leaks or logical flaws, which may prevent device function’ (FDA, 2019c); and a recall of Medtronic’s remote controllers for MiniMed Insulin Pumps (FDA, 2019d). In 2020, three more cybersecurity incidents have been registered in MedWatch: vulnerabilities detected in certain GE Healthcare Clinical Information Central Stations and Telemetry Servers (FDA, 2020); an update related to Medtronic Implantable Cardiac Devices (FDA, 2020b); and the SweynTooth cybersecurity vulnerabilities affecting certain medical devices (FDA, 2020c).

Refusal of a tort law action based upon the lack of legal standing

The concept of legal standing is not regulated in the Federal Rules of Civil Procedure, but it has been developed thoroughly by the case law of the U.S. courts. In particular, one of the cases in which the so-called ‘irreducible constitutional minimum’ of standing was articulated is Lujan v. Defenders of Wildlife, 504 U.S. 555 (1992), in which the opinion of the Court was delivered by Justice Antonin Scalia. In this case, the Supreme Court recalled the precedents of the doctrine of legal standing (or Article III standing) and the three elements that must concur for a plaintiff to have legal standing in a claim: ‘First, the plaintiff must have suffered an “injury in fact” –an invasion of a legally protected interest which is (a) concrete and particularized […] and (b) “actual or imminent, not ‘conjectural’ or ‘hypothetical’” […]. Second, there must be a causal connection between the injury and the conduct complained of […]. Third, it must be “likely”, as opposed to merely “speculative”, that the injury will be “redressed by a favorable decision”’ (504 U.S. 555, 560-1). The Cahen case. Cahen v. Toyota Motor Corp. (United States District Court for the Northern District of California, case number 15-cv-01104-WHO), aff’d by Cahen v. Toyota Motor Corp. (United States Court of Appeals for the Ninth Circuit, 21 December 2017),3 is a case where the United States District Court for the Northern District of California dealt with a motion to dismiss based on lack of Article III standing of,

It shall be clarified that the decision of the U.S. Court of Appeals for the Ninth Circuit is under the status of ‘NOT FOR PUBLICATION’ and has no precedential effect unless otherwise provided by Ninth Circuit Rule 36-3.



in this case, the California class due to the allegation of the defect on the Toyota vehicles as denounced by the plaintiffs was too speculative. The facts of the case are, summarily, the following: the plaintiffs bought cars manufactured by the defendant (Toyota). These cars utilize electronic control units (ECU) which are fundamental for the vehicles’ safety, which is linked to real time communication between the ECUs. The ECUs communications are executed through a controller area network (CAN bus) by sending digital messages through the CAN packets. According to the plaintiffs’ allegations ‘anyone with physical access to a vehicle can utilize the CAN bus to send malicious CAN packets to the ECUs’ (Cahen, at 2). The plaintiffs also allege that the fact that the cars are Bluetooth-connected, leaves open the door to a distance malicious manipulation of the CAN packets sent through the CAN bus to the ECUs. The Court granted the defendant’s motion to dismiss, accepting the argument of the plaintiffs’ lack of Article III standing. Article III standing for the risk of future hacking. Citing U.S. Hotel and Resort Management, Inc. v. Onity, Inc., No. 13-499, 2014 WL 3748639 (D. Minn. July 30, 2014) – a case dealing with the future risk of hacking of the opening mechanism of a hotel’s rooms locks – the court considers that ‘while it is possible that a potential hacker would in fact attempt to gain control of a vehicle, “allegations of possible future injury are not sufficient’ (Cahen, at 15). The Court also cites Contreras v. Toyota Motor Sales USA, Inc., No. 09-cv-06024JSW, 2010 WL 2528844 (N.D. Cal. 
June 18, 2010) – a case dealing with the potential harm that the plaintiffs might suffer as a consequence of a defect in the cars’ brake system which, at certain low temperatures, tended to fail to execute its function, although this defect had affected other people rather than the plaintiffs – saying that the ‘case for standing here is more speculative than that presented in Contreras, where the alleged brake problems had manifested with other drivers, if not with the plaintiffs themselves’ (Cahen, at 15). Nonetheless, the Court considers that the plaintiffs might have standing if ‘a credible threat of harm’ (Cahen, at 16) existed. But no such actual threat existed in the case at hand, because the only reported hacking had occurred in a controlled environment; no real-world hacking of any vehicle had been reported.4 Article III standing for the economic loss allegation. The second argument of interest for this paper concerns the Court’s analysis of the legal standing of the


4 See Cahen, at 17: ‘[I]t is difficult for me to conclude whether plaintiffs’ vehicles might be hacked at some point in the future, especially in light of the fact that plaintiffs do not allege that anybody outside of a controlled environment has ever been hacked’.


plaintiffs in relation to their allegations of economic loss due to the potential defect consisting in the hackability of the defendants’ vehicles.5 Again, as with the first ground analyzed above, the Court rejects the existence of Article III standing. According to the Court, ‘[p]laintiffs here do not assert any demonstrably false misrepresentations of value, but rather make conclusory allegations that their cars are worth less because of the risk of future injury’ (Cahen, at 17); ‘something more’ is necessary than the mere allegation of overpayment based on a speculative defect in the product (Cahen, at 20). The Flynn case. In Flynn v. FCA US LLC (United States District Court for the Southern District of Illinois, case number 3:15-cv-00855), the Court discussed whether the Uconnect system embedded in Chrysler vehicles was defective in light of a 2015 article published in WIRED magazine showing that the system was vulnerable to hackers seeking to take remote control of the vehicle.6 The Court in Flynn recalled the constitutional doctrine of legal standing, or Article III standing, by citing Clapper v. Amnesty Int’l USA, 568 U.S. 398, 408 (2013), in which the U.S. Supreme Court left untouched the ‘irreducible constitutional minimum’ of Article III standing (Flynn, at 5). In this case, the Court tells the plaintiffs that ‘[t]he fact that Uconnect has vulnerabilities and could have been made safer does not make it defective when no vehicles have ever manifested the alleged defect’ (Flynn, at 7), concluding that the risk described by the plaintiffs is ‘future’ and ‘too speculative’ (Flynn, at 8), so that

5 To begin with, this allegation could not be presented in court in a product liability case in most European jurisdictions. See Article 9 of Directive 85/374: ‘For the purposes of Article 1, ‘damage’ means: (a) damage caused by death or by personal injuries; (b) damage to, or destruction of, any item of property other than the defective product itself, with a lower threshold of 500 ECU […]’. The exclusion of pure economic loss has been manifested in EU state jurisdictions when transposing the Directive: see Koch (2016, 134) for Austria; Holle and Møgelvang-Hansen (2016, 164) for Denmark; Oliphant and Wilcox (2016, 189) for England and Wales; Magnus (2016, 258) for Germany; Askeland (2016, 368) for Norway; etc. On the other hand, jurisdictions like the Czech Republic [see Tichy (2016, 149)] and France [see Borghetti (2016, 223)] allow compensation for pure economic loss. Other countries, like Italy [see Comandé (2016, 296-7)], are considered jurisdictions where ‘the issue is hardly recognized’ [see Bussani and others (2003, 122)]: the law neither includes nor expressly excludes the recovery of pure economic loss, yet it has been recognized in some instances. In the case of the Netherlands, according to the scholarship [see Keirse (2016, 330)], pure economic loss has to be claimed under the general tort law regime provided by article 6:162 of the Burgerlijk Wetboek (Dutch Civil Code), while pure economic loss will only be recoverable, under articles 6:185 and 6:190 of the Burgerlijk Wetboek, if the defect has harmed an object used in the private sphere.
6 The hacking of the software ‘took place when two highly trained researchers hacked a vehicle in a controlled setting’ (Flynn, at 7).



‘allegations of economic loss stemming from speculative risk of future harm cannot establish standing’ (ibid.).

3. The lack of damage and the solution at an EU-level

In the European Union, the solution is not as crystal clear as it is in the United States. If the manufacturer of a hackable medical device sought the rejection of a claim, the relevant point would be to allege that the plaintiff in the case at hand has not carried the burden of proving the existence of actual damage, which is one of the requirements under EU law for a cause of action against the manufacturer. See, in this regard, article 4 of Directive 85/374, which expressly says that the plaintiff carries the burden of proving ‘the damage, the defect and the causal relationship between defect and damage’. As is well known, the EU law system of compensation is based upon the recoverability, in general terms, of damage caused by death or personal injury, or damage caused to objects other than the defective device, provided that they are used in the private sphere of the victim (relating that object to the concept of a consumer good). For medical devices such as those identified in the Abstract, one has to take into consideration one fundamental precedent of the Court of Justice of the European Union. This is the case of Boston Scientific Medizintechnik GmbH v. AOK Sachsen-Anhalt – Die Gesundheitskasse (C-503/13) and Betriebskrankenkasse RWE (C-504/13), of 5 March 2015, a case dealing with pacemakers and implantable defibrillators for which, due to a defect detected by the company, it was recommended to replace the devices (in the case of the pacemakers) and to deactivate the defective magnetic switch (in the case of the defibrillators). In this case, one of the legal questions raised by the national courts involved was: are the costs produced by the removal operation included within the definition of ‘damage’ of EU Council Directive 85/374? The answer of the Court of Justice is neither an absolute yes nor an absolute no: it depends on the company’s recommendation.
The general principle is that ‘[c]ompensation for damage thus relates to all that is necessary to eliminate harmful consequences and to restore the level of safety which a person is entitled to expect’ (para 49). One phrase of the Court holds the solution to the case: ‘to restore the level of safety’. In the case of the pacemakers, given that the removal operation was recommended by the manufacturer of the defective device itself, its costs ‘constitute damage within the meaning of section (a) of the first paragraph of Article 9 of Directive 85/374’ (para 52). But in the case of the implantable defibrillators the solution is different, since the manufacturer did not recommend the replacement of the device but only switching off the defective magnetic switch. In this latter case, the Court of Justice considers that ‘it is for the national court to determine whether […] the deactivation of the magnetic switch is sufficient for the purpose of overcoming the defect


in the product, bearing in mind the abnormal risk of damage to which it subjects the patient concerned’ (para 54).

4. Conclusion – if Dr Robot is hackable, is the manufacturer liable?

Implantable devices. In the EU, it seems difficult that, in a case concerning an implantable device where extraction or replacement is deemed necessary to restore the level of safety that a person is entitled to expect (using the same wording as the CJEU in Boston), a national court would reject an action on the basis of a lack of damage under the Directive. As the CJEU said in Boston, the direct costs involved in the process of replacing the defective pacemakers are considered physical damage under the Directive. Here, the question that arises in comparison with the Cahen and Flynn cases is: does the mere existence of a potential risk that a device may be hacked by illegitimate third parties make the device defective? In the United States, taking the Cahen and Flynn cases (including Onity, which has not been addressed directly, but mentioned), the fact that a device is hackable (one may substitute the word hackable for any other word related to a potential defect) does not make the device defective unless it is proven that the plaintiff has suffered damage stemming from such a defect. In Contreras, the court dismissed the plaintiffs’ case against Toyota because the mere fact that other people may have suffered damage resulting from a defect present in the very same kind of car driven by the plaintiffs does not make the plaintiffs’ cars defective in themselves. So, even if there is evidence that the very same device or asset is proven defective, if the plaintiff has not suffered an actual injury (or ‘injury in fact’, using the wording of Lujan and Clapper), the claim would probably be dismissed for lack of Article III standing. In the EU, the situation would be different in the case of implantable devices. Again, the landmark case is Boston. To start with, Boston clarifies that in the case of pacemakers and implantable defibrillators, ‘the safety requirements […] which such patients are entitled to expect are particularly high’ (para 39).
Having said that, the Court immediately afterwards states: ‘the potential lack of safety which would give rise to liability […] stems […] from the abnormal potential for damage which those products might cause to the person concerned’ (para 40), and adds that ‘where it is found that such products belonging to the same group or forming part of the same production series have a potential defect, it is possible to classify as defective all the products in that group or series, without there being any need to show that the product in question is defective’ (para 41). In conclusion, if the so-called ‘Dr Robot’ were an implantable device proved to suffer from a vulnerability that made it a potential target of hacking, the mere fact that it has not actually been hacked would not, in the EU, be an obstacle to deeming the device defective, while in the U.S. that would not happen: if the plaintiff’s implantable device has not actually been hacked, the courts, following



the Lujan and Clapper precedents, would most probably grant the defendant’s motion for summary judgment, as the plaintiffs would not have legal standing and, also, because the device would not be deemed defective: if the plaintiff’s actual implantable device has not been hacked, there is no defect, but mere ‘speculation’ (as said in Cahen). Non-implantable devices. A case of a non-implantable medical device would be a surgical robot. What would be, in such a case, the solution applying Boston (EU) and Onity, Cahen and Flynn (US)? The easiest conclusion can be drawn from an analysis of the U.S. case law: as in the case of implantable devices, the fact that a potential risk of hacking exists does not make the device defective in itself, unless the plaintiff is capable of producing evidence that the device used on him or her has actually been hacked and real damage has taken place. But what would happen in the EU? The opinion of this author, taking into strict consideration the judgment of the CJEU in Boston, rests on the statement of the CJEU that ‘the safety requirements […] which such patients are entitled to expect are particularly high’ (para 39). This statement specifically referred to the pacemakers and implantable defibrillators, but it is nonetheless perfectly applicable to any other medical device, such as any surgical robot. To give an example, the da Vinci is used for the following surgical procedures: urology, gynecology, general surgery, thoracic surgery and TORS (transoral robotic surgery) [see Intuitive (2020, 6)]. In all these procedures, the safety expectations to which a patient is entitled are, in short, similar to those he or she is entitled to expect in a surgery executed manually by a surgeon (if not higher).
Thus, if a surgical robot endowed with AI capabilities is proven to be vulnerable to hacking attacks that may leave its control in the hands of a malicious hacker, it is reasonable to believe that the safety expectations to which a patient is entitled are breached and, therefore, there may be sufficient reasons to consider the product defective. Here, even though the product is defective, it would be harder to prove the harm. There are no extraction or replacement costs, and relying upon the fact that procedures conducted by humans without robot aid usually require longer hospital stays than those conducted by a robot, or the fact that blood loss is higher in the former, etc. [see Braga and others (2002, 759-67); (Murray, 2006) and Wang and others (2017, 50 ff)] is not sufficient to consider that damage has taken place in the terms of the Directive, nor under the favorable interpretation made by the CJEU in Boston. So, in conclusion, in the EU, even though the product is defective, the problem arises with the proof of the damage: raising the statistics of robotic versus human surgery is not sufficient for the plaintiff to carry the burden of proof. The EU position in this case would be closer to that of the


U.S., despite the differences drawn above regarding the consideration of the product as defective.


Actually hacked devices


Real cases of hacking – from Twitter to medical records

In this article we are not discussing a universe of unreal situations. We are discussing realities: real devices endowed with artificial intelligence (AI) capabilities that have been hacked. Hacking is a troublesome situation that we are used to hearing or reading about in the newspapers. The most recent case of hacking is currently under investigation by the Federal Bureau of Investigation (FBI) in the United States (BBC, 2020; Leswing, 2020). It is the hacking of the Twitter accounts of several U.S. celebrities (the current Democratic candidate for the United States Presidency, Joe R. Biden Jr.; the 44th President of the United States, Barack Obama; Amazon founder Jeff Bezos; Microsoft cofounder Bill Gates; and Tesla’s CEO, Elon Musk), posting a message requesting people to click on a link redirecting them to a website where they were supposed to donate cryptocurrencies in exchange for receiving them doubled. This cryptocurrency scam generated some 320 bitcoin transactions worth up to US$110,000 for the scammers before Twitter was able to erase the messages (Iyengar, 2020). On 31 July 2020, just sixteen days after the scam took place, several arrests related to it were made, as the FBI informed in a press release (FBI, 2020) along with the United States Attorney’s Office for the Northern District of California (United States Attorney’s Office for the Northern District of California, 2020). But this has not been an isolated case of hacking. Unfortunately, many hackers and hacking organizations have exploited the COVID-19 crisis we have been suffering since late December 2019 (Newman, 2020; Newman, 2020b) to target, among others, public administrations, such as the targeting of the Wuhan Government and the Chinese Ministry of Emergency Management by APT32 (Henderson et al, 2020). This becomes especially worrisome when the data affected by the hackers are medical records.
In the first half of 2019, 27.8 million patient records were breached by hacking (Davis, 2019). According to a recent investigation conducted by the Institute for Critical Infrastructure Technology for the U.S. Senate, the data stolen from medical records are used for a wide variety of criminal activities, including illegal immigration, pedophilia and social engineering attacks (McGee, 2016; Scott and Spaniel, 2016). And during the first wave of strict confinement imposed due to the COVID-19 pandemic, IBM announced that it had detected a 6,000% increase in spam attacks focused on healthcare facilities with the intention of stealing patient records (Weintraub, 2020).



The illegal obtaining of patient records is a reality in our interconnected world. In the twenty-first century, we can access our medical information through a mobile app, because our medical data are stored in the cloud. With a code and a password, we can obtain our prescriptions, the results of our last blood test, the results of a PCR test, etc. But if we can access these results, others may be able to do so by illegal means (i.e., hacking), along with our social security number, insurance account information, payment information, medical images, etc. (Chan, 2020).7 As one quote puts it, ‘[t]he world’s most valuable resource is no longer oil, but data’ (The Economist, 2017; Bhageshpur, 2019). The example used in the following subsections will be the implantable pacemaker mentioned in the Abstract of this paper.

2. Are implantable cardiac pacemakers devices with AI capabilities?

To be able to answer this question, there is a prerequisite question to be answered first: what does artificial intelligence (AI) mean? According to McCarthy (2007, 2), AI is ‘the science of […] making intelligent machines’, ‘intelligence’ meaning ‘the computational part of the ability to achieve goals in the world’. A more specific definition, based on Russell and Norvig (2009), can be found in the report of the OECD (2019, 22), which defines AI as a ‘system [that] consists of three main elements: sensors, operational logic and actuators. Sensors collect raw data from the environment, while actuators act to change the state of the environment. The key power of an AI system resides in its operational logic. For a given set of objectives and based on input data from sensors, the operational logic provides output for the actuators. These take the form of recommendations, predictions or decisions that can influence the state of the environment’. Taking this definition, one may consider that implantable cardiac pacemakers are endowed with AI capabilities, because they are programmed to monitor the patient’s heartbeat and to take different actions depending on its state: if it is too fast, to resynchronize it; if it is too slow, to deliver an electrical impulse to make the heart beat faster, without going so far as to cause tachycardia. These devices thus have one variable to take into consideration and several possible actions depending on the monitoring of this variable. On the functioning of this medical device, see Fogoros (2020). In this case, we find sensors that monitor the patient’s heart pace, as well as actuators that act according to the data the sensors have collected. The operational logic lies in the code of the software, which has been programmed to decide how the actuators shall act given the current state of the data collected by the sensors.

7 And this information is critical for patients because it is related to their identity. Access by hackers to this information can be used to commit identity fraud: to steal money stored in bank accounts, or to access emails or any information the Social Security may hold about patients.
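The sensor/operational-logic/actuator structure described above can be sketched, purely for illustration, as a trivial control loop. This is not a medical algorithm: the thresholds and command names below are hypothetical, chosen only to mirror the OECD's three-element description.

```python
# Illustrative sketch of the OECD's sensor -> operational logic -> actuator
# scheme as applied to a pacemaker. Thresholds are hypothetical.

BRADYCARDIA_BPM = 60   # below this, pace the heart (hypothetical threshold)
TACHYCARDIA_BPM = 100  # above this, resynchronize (hypothetical threshold)

def operational_logic(heart_rate_bpm: int) -> str:
    """Map one sensor reading (heart rate) to an actuator command."""
    if heart_rate_bpm < BRADYCARDIA_BPM:
        return "pace"           # electrical impulse to speed the heart up
    if heart_rate_bpm > TACHYCARDIA_BPM:
        return "resynchronize"  # correct an excessively fast rhythm
    return "monitor"            # normal rhythm: no intervention

# One pass of the control loop: sensor reading in, actuator command out.
for bpm in (45, 72, 130):
    print(bpm, operational_logic(bpm))
```

The single monitored variable (heart rate) and the small set of actions conditioned on it are exactly the "one variable, several actions" structure the text attributes to the device.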


The analysis made above matches the OECD report, according to which implantable cardiac pacemakers are considered among the current applications of AI in the healthcare sector, and ‘questions from the use of data extracted from implantable healthcare devices’ are raised thereto (OECD, 2019, 63).

3. Hacking an AI medical device? Is the product defective?

This is the core question of this paper. What happens when an unauthorized user takes advantage of a cybersecurity vulnerability that a specific device suffers from? Who is liable? Product Liability Law is based on the concept of ‘safety’,8 although several standards are used to assess whether a product is safe or not and, therefore, defective or not: in mainland Europe, the consumer expectations test (‘which a person is entitled to expect’, using the Directive’s wording), an objective test that does not take into consideration the expectations of the concrete consumer at hand but takes as a reference the ‘normal consumer’ (Parra Lucán, 2009, 1672; Fairgrieve and others, 2016, 51); in the United States, several different tests are used depending on the jurisdiction (Henderson and Twerski, 2011, 179 ff). So one must ask: what is the safety that a normal consumer (using the EU standard) is entitled to expect from an implantable cardiac pacemaker? The author of this paper considers that the answer to this question is twofold: first, a ‘normal consumer’ (in Spanish, consumidor normal or consumidor medio) is entitled to expect that the pacemaker activates when he or she is suffering bradycardia or tachycardia, that is, that it correctly regulates the pace of the heart to avoid any heart failure; second, given that this device has an embedded software that controls and regulates the pace of the heart and acts accordingly, it is fundamental that this device – and the embedded software – is safe


8 Article 6(1) of EU Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products: ‘A product is defective when it does not provide the safety which a person is entitled to expect’. In the U.S., the concept of ‘safety’ also appears in the case law of the courts: Banks v. ICI Americas, Inc., 450 S.E.2d 671, 673 (Ga. 1994), on the presumption of the ‘safety’ of a design when the claim is based upon a manufacturing defect; or Rix v. General Motors Co., 723 P.2d 195, and Camacho v. Honda Motor Co., 741 P.2d 1240 (1987), in which the respective courts dealt with the standards that courts might use when assessing whether a product is defective under the reasonable alternative design standard. The Restatement (Third) of Torts: Products Liability also encompasses the concept of safety within the definition of design and information defects; in this respect, see § 2(b) and § 2(c) of the Restatement, as well as the presumptions in case of compliance and noncompliance with safety standards and regulations in § 4 of the Restatement. Finally, in the People’s Republic of China, safety is also the relevant standard when assessing whether or not a product is defective: see article 46 of the Product Quality Law of the People’s Republic of China (中华人民共和国产品质量法).



enough to prevent any user other than the patient’s physician from accessing the data. In this case, it is a life-or-death matter.9 In order to assess whether a hacked medical device endowed with AI capabilities might be considered defective, the first question to be answered affects the applicability of the whole Product Liability Law regime: is the software that has been hacked a product?

4. Is the software a ‘product’? Not always

EU Council Directive 85/374/EEC defines ‘product’ as ‘all movables […] even though incorporated into another movable or into an immovable’. According to the scholarship, another requirement needs to be added: these movables need to be tangible (Fairgrieve and others, 2016, 41). Whether software is a ‘product’ in the light of the Directive is a question that finds no unanimity among the scholarship: some consider that it constitutes a service, others an intangible good, while others deem it mere information (for the full debate, see Fairgrieve, 2016, 46). But it is necessary to analyze the specific nature of each piece of software: a computer program sold to a consumer (where this author shares the view that it is a service) is not the same as the software embedded into a medical device. In the latter case, it is hard to separate the device (the pacemaker) from the software embedded in it: they are a single reality. The pacemaker is no more than a machine (the hardware) that needs the software to function.10 The European Commission has dealt with this situation and focuses on the fact that ‘embedded software […] are already an integral component of many products. In terms of EU product safety legislation, the producer is responsible for the safety of the final products as a whole. Therefore, for products which include software at the moment they were put into circulation by the producer, the Directive could address liability claims for damages caused by defects in this software’ (European Commission, 2018, 52).

9 In this regard, see the judgment of the CJEU (Fourth Chamber), 5 March 2015, Boston Scientific Medizintechnik GmbH v. AOK Sachsen-Anhalt – Die Gesundheitskasse & Betriebskrankenkasse RWE (C-503/13 and C-504/13): ‘With regard to medical devices such as the pacemakers […], it is clear that, in the light of their function and the particularly vulnerable situation of patients using such devices, the safety requirements for those devices which such patients are entitled to expect are particularly high’ (para 39).
10 Using again the case of computer software bought by a consumer: when a consumer buys a brand-new computer without any program installed, he or she might need writing applications or an email application, or maybe he or she wants to download an application to access his or her bank account without needing to access it through the Internet (this is more the case with mobile phones), etc. The software described above is not necessary for the functioning of the computer, nor is it embedded in it.



Design defect or manufacturing defect?

We arrive at one of the hardest points of the analysis: laying the basis for determining which defect a hacked product may suffer from. To begin with, the Restatement (Third) of Torts: Products Liability, as well as the scholarship specialized in this area of the law, distinguishes among three different types of defects: manufacturing defects, design defects and information (or failure-to-warn) defects. In Europe, the distinction is the same, although it is not made express anywhere: neither in the Directive, nor in the transposition rules of the Member States. A third example can be found in the People’s Republic of China, where most of the scholarship defends the so-called four-defect theory, adding the ‘follow-up defect’, defined as ‘the defect of failure to take timely warning, recall and other measures or [to take] inadequate remedial measures’ [see Lixin and Zhen (2013, 8), Liming (2008, 238-9) and Zhang (2011, 390-1)]. As is known, failure-to-warn defects are ‘the most recognizable defects’ [see Duplechin (2018, 821)], as they arise when the manufacturer provides insufficient instructions as to the use and risks of the product, and it is, in fact, this lack of instructions that is penalized. If the manufacturer had provided enough information about the use of the product, or regarding its risks, maybe the harm would not have occurred, or would have occurred in different circumstances. The big discussion arises at the intersection of deciding whether a defect is a manufacturing or a design defect. Two interesting cases in which this distinction is clearly explained are Rix v. General Motors Co., 723 P.2d 195, and Robinson v. Reed-Prentice Div. of Package Mach. Co., 49 N.Y.2d 471, 479, 426 N.Y.S.2d 717, 403 N.E.2d 440.
The Rix and Robinson courts explained that ‘[M]anufacturing defects, by definition, are “imperfections that inevitably occur in a typically small percentage of products of a given design as a result of the fallibility of the manufacturing process. A [defectively manufactured] product does not conform in some significant aspect to the intended design […] it is misconstructed […] In contrast, a design defect is one which “presents an unreasonable risk of harm, notwithstanding that it was meticulously made according to [the] detailed plans and specifications’. In summary, a manufacturing defect has nothing to do with a problem in the design, which should be presumed correct [see Banks v. ICI Americas, Inc., 450 S.E.2d 671, 673 (Ga. 1994)]; rather, something went wrong in the execution of the design, and it is in this something that the concrete cause of the defect lies. On the other hand, design defects affect the inception of the product’s idea, not its execution: the idea itself is wrong and is the cause of the lack of safety. Software development, according to the doctrine, can be separated into four phases: ‘(1) design, (2) coding, (3) testing, (4) replication and distribution’ [see Scott (2008, 459) and Duplechin (2018, 822)]. The phase of ‘design’ includes the inception of the software, where R&D teams prepare what is to be the intended software. The phase of ‘coding’ includes writing the code that will govern the software’s conduct (metaphorically speaking). The phase of ‘testing’ includes



probing the strengths and vulnerabilities of the code developed, in a controlled environment. And the phase of ‘replication and distribution’ includes the preparation for distribution (and the distribution itself) of the software products among the general population. Taking these phases into consideration, if the defect originates in phases 1 or 2, a design defect exists, while if it originates in phase 4, a manufacturing defect exists [see Duplechin (2018, 822)].
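Under this four-phase scheme, a phase-4 (manufacturing-type) defect is, in software terms, a deviation of the shipped artifact from the build that was designed and tested, which is the kind of deviation an integrity check detects. The sketch below is purely illustrative: the artifact contents and the check itself are hypothetical, not drawn from any real distribution pipeline.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Fingerprint of a software artifact."""
    return hashlib.sha256(data).hexdigest()

# Phase 3 (testing): the build that was verified in a controlled environment.
intended_build = b"firmware v1.0 as designed and tested"
intended_hash = sha256_of(intended_build)

# Phase 4 (replication and distribution): copies that actually ship.
shipped_build = b"firmware v1.0 as designed and tested"    # faithful copy
corrupted_build = b"firmware v1.0 corrupted in replication"  # deviant copy

def conforms_to_design(shipped: bytes) -> bool:
    """Manufacturing-type check in the Rix/Robinson sense: does the shipped
    product 'conform ... to the intended design', i.e. match the tested build?"""
    return sha256_of(shipped) == intended_hash

print(conforms_to_design(shipped_build))    # faithful replication
print(conforms_to_design(corrupted_build))  # manufacturing-type deviation
```

A phase-1 or phase-2 (design-type) defect would pass this check: a vulnerability written into the specification or the code is "meticulously made according to [the] detailed plans and specifications" and ships identically in every copy.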


Concluding remarks – What do we do with Dr Robot?

The preceding lines allow us to draw the following conclusions about the problems associated with the new emerging technologies [see Gómez Ligüerre and García-Micó, 2020 for a summarized analysis of the guidelines at the EU level in this regard]:
1. Owners of hackable devices face obstacles to compensation under U.S. law, as courts reject the legal standing of plaintiffs who merely allege the risk of a device being hacked, insofar as there is no injury in fact.
2. In the EU, on the other hand, an implantable Dr Robot would be considered defective and, if replacement were deemed necessary, damage would also arise and, therefore, compensation would be possible. In the case of non-implantable devices, the conclusion might be slightly different, depending on the plaintiff’s ability to produce evidence of any damage suffered.
3. In the case of devices that have actually been hacked, the possibility of compensation is high enough to risk litigation. Nonetheless, one of the problems associated with new technologies would be producing evidence of a defect in the product, even more so when dealing with software embedded into products that allows the device to act autonomously.

References
1. Askeland, B.: Product Liability in Norway. In: Machnikowski, P. (ed.) European Product Liability Law – An Analysis of the State of the Art in the Era of New Technologies, pp. 359–76. Intersentia, Cambridge (2016).
2. BBC Technology: Twitter hack: FBI investigates major Twitter attack. BBC (17 July 2020), last accessed 2020/08/13.
3. Bhageshpur, K.: Data Is The New Oil – And That’s A Good Thing. Forbes (15 November 2019), last accessed 2020/08/14.
4. Borghetti, J-S.: Product Liability in France. In: Machnikowski, P. (ed.) European Product Liability Law – An Analysis of the State of the Art in the Era of New Technologies, pp. 205–36. Intersentia, Cambridge (2016).

5. Braga, M.; Vignali, A.; Gianotti, L.; Zuliani, W.; Radaelli, G.; Gruarin, P.; Dellabona, P. and Di Carlo, V.: Laparoscopic Versus Open Colorectal Surgery. A Randomized Trial on Short-Term Outcome. Annals of Surgery 6, 759–767 (2002).
6. Bussani, M.; Palmer, V.V. and Parisi, F.: Liability for Pure Financial Loss in Europe: An Economic Restatement. The American Journal of Comparative Law 51(1), 113–162 (2003).
7. Chan, T.: How Stolen Medical Records Are Used for Identity Theft, Care Dash (13 January 2020), last accessed 2020/08/14.
8. Comandé, G.: Product Liability in Italy. In: Machnikowski, P. (ed.) European Product Liability Law – An Analysis of the State of the Art in the Era of New Technologies, pp. 275–309. Intersentia, Cambridge (2016).
9. Davis, J.: 32M Patient Records Breached in First Half of 2019, 88% Caused by Hacking. Health IT Security (1 August 2019), last accessed 2020/08/14.
10. Duplechin, R.J.: The Emerging Intersection of Products Liability, Cybersecurity, and Autonomous Vehicles. Tennessee Law Review 85, 803–46 (2018).
11. European Commission: Commission Staff Working Document. Evaluation of Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products {COM(2018) 246 final} – {SWD(2018) 158 final}. Brussels.
12. Fairgrieve, D.; Howells, G.; Møgelvang-Hansen, P.; Straetmans, G.; Verhoeven, D.; Machnikowski, P.; Janssen, A. and Schulze, R.: Product Liability Directive. In: Machnikowski, P. (ed.) European Product Liability Law – An Analysis of the State of the Art in the Era of New Technologies, pp. 17–108. Intersentia, Cambridge (2016).
13. FBI: Statement from FBI San Francisco Assistant Special Agent in Charge Sanjay Virmani on Arrests in Twitter Cyber Attack (31 July 2020), last accessed 2020/08/13.
14. FDA: MedWatch: The FDA Safety Information and Adverse Event Reporting Program, last accessed 2020/08/14.
15.
FDA: Postmarket Management of Cybersecurity in Medical Devices. Guidance for Industry and Food and Drug Administration Staff (2016), last accessed 2020/08/14.
16. FDA: Cybersecurity Vulnerabilities Identified in St. Jude Medical's Implantable Cardiac Devices and Merlin@home Transmitter: FDA Safety Communication (2017), last accessed 2020/08/14.
17. FDA: Firmware Update to Address Cybersecurity Vulnerabilities Identified in Abbott’s (formerly St. Jude Medical’s) Implantable Cardiac Pacemakers: FDA Safety Communication (2017b), last accessed 2020/08/14.

18. FDA: Battery Performance Alert and Cybersecurity Firmware Updates for Certain Abbott (formerly St. Jude Medical) Implantable Cardiac Devices: FDA Safety Communication (2018), last accessed 2020/08/14.
19. FDA: Cybersecurity Updates Affecting Medtronic Implantable Cardiac Device Programmers: FDA Safety Communication (2018b), last accessed 2020/08/14.
20. FDA: Cybersecurity Vulnerabilities Affecting Medtronic Implantable Cardiac Devices, Programmers, and Home Monitors: FDA Safety Communication (2019), last accessed 2020/08/14.
21. FDA: Certain Medtronic MiniMed Insulin Pumps Have Potential Cybersecurity Risks: FDA Safety Communication (2019b), last accessed 2020/08/14.
22. FDA: URGENT/11 Cybersecurity Vulnerabilities in a Widely-Used Third-Party Software Component May Introduce Risks During Use of Certain Medical Devices: FDA Safety Communication (2019c), last accessed 2020/08/14.
23. FDA: Medtronic Recalls Remote Controllers for MiniMed Insulin Pumps for Potential Cybersecurity Risks (2019d), last accessed 2020/08/14.
24. FDA: Cybersecurity Vulnerabilities in Certain GE Healthcare Clinical Information Central Stations and Telemetry Servers: Safety Communication (2020), last accessed 2020/08/14.
25. FDA: Cybersecurity Vulnerabilities Affecting Medtronic Implantable Cardiac Devices, Programmers, and Home Monitors: FDA Safety Communication (2020b), last accessed 2020/08/14.
26. FDA: SweynTooth Cybersecurity Vulnerabilities May Affect Certain Medical Devices: FDA Safety Communication (2020c), last accessed 2020/08/14.
27. Fogoros, R.N.: What You Should Know About Pacemakers, Verywell Health (5 April 2020), last accessed 2020/08/14.
28. Gómez Ligüerre, C. and García-Micó, T.G.: Liability for Artificial Intelligence and other emerging technologies. InDret 1, pp. 501–11 (2020).

29. Holle, M-L. and Møgelvang-Hansen, P.: Product Liability in Denmark. In: Machnikowski, P. (ed.) European Product Liability Law – An Analysis of the State of the Art in the Era of New Technologies, pp. 155–72. Intersentia, Cambridge (2016).
30. Henderson, J.A. and Twerski, A.D.: Products Liability. Problems and Process. 7th edn. Wolters Kluwer, New York (2011).
31. Henderson, S.; Roncone, G.; Jones, S.; Hultquist, J.; Read, B.: Threat Research – Vietnamese Threat Actors APT32 Targeting Wuhan Government and Chinese Ministry of Emergency Management in Latest Example of COVID-19 Related Espionage, FireEye (22 April 2020), last accessed 2020/08/13.
32. Intuitive Surgical: Q2 2020 Intuitive Investor Presentation, last accessed 2020/08/18.
33. Iyengar, R.: Twitter blames ‘coordinated’ attack on its systems for hack of Joe Biden, Barack Obama, Bill Gates and others, CNN Business (16 July 2020), last accessed 2020/08/13.
34. Keirse, A.L.M.: Product Liability in The Netherlands. In: Machnikowski, P. (ed.) European Product Liability Law – An Analysis of the State of the Art in the Era of New Technologies, pp. 311–58. Intersentia, Cambridge (2016).
35. Koch, B.A.: Product Liability in Austria. In: Machnikowski, P. (ed.) European Product Liability Law – An Analysis of the State of the Art in the Era of New Technologies, pp. 111–47. Intersentia, Cambridge (2016).
36. Leswing, K.: Hackers targeted Twitter employees to hijack accounts of Elon Musk, Joe Biden and others in digital currency scandal, CNBC (16 July 2020), last accessed 2020/08/13.
37. Liming, W.: Research on Tort Liability. Vol. 2. Renmin University of China Press (2008).
38. Lixin, Y. and Zhen, Y.: Application of Chinese Law in Product Liability Cases. Report on the Founding Conference of the World Society of Tort Law and the First Symposium on Chinese Law. Northern Law 5 (2013).
39. Magnus, U.: Product Liability in Germany. In: Machnikowski, P. (ed.)
European Product Liability Law – An Analysis of the State of the Art in the Era of New Technologies, pp. 407–57. Intersentia, Cambridge (2016).
40. McCarthy, J.: What is Artificial Intelligence? (2007), last accessed 2020/08/14.
41. McGee, M.K.: Research Reveals Why Hacked Patient Records Are So Valuable, Data Breach Today (27 September 2016), last accessed 2020/08/14.
42. Medtronic: Conexus Telemetry and Monitoring Accessories. Security Bulletin, last accessed 2020/08/14.
43. Murray, A.; Lourenco, T.; de Verteuil, R.; Hernandez, R.; Fraser, C.; McKinley, A.; Krukowski, Z.; Vale, L. and Grant, A.: Clinical effectiveness and cost-effectiveness of laparoscopic surgery for colorectal cancer: systematic reviews and economic evaluation. Health Technological Assessment 10(45) (2006).

44. Naveira Zarra, M.M.: El resarcimiento del daño en la responsabilidad civil extracontractual. Universidade da Coruña, A Coruña (2004), last accessed 2020/08/13.
45. Newman, L.H.: Google Sees State-Sponsored Hackers Ramping Up Coronavirus Attacks, WIRED (22 April 2020), last accessed 2020/08/13.
46. Newman, L.H.: The Worst Hacks and Breaches of 2020 So Far, WIRED (3 July 2020), last accessed 2020/08/14.
47. OECD: Artificial Intelligence in Society. OECD Publishing, Paris, last accessed 2020/08/14.
48. Oliphant, K. and Wilcox, V.: Product Liability in England and Wales. In: Machnikowski, P. (ed.) European Product Liability Law – An Analysis of the State of the Art in the Era of New Technologies, pp. 173–204. Intersentia, Cambridge (2016).
49. Parra Lucán, M.A.: Comentario al artículo 137 TRLGDCU. In: Bercovitz Rodríguez-Cano, R. (ed.) Comentario del Texto Refundido de la Ley General para la Defensa de los Consumidores y Usuarios y otras leyes complementarias, pp. 1670–78. Aranzadi, Pamplona (2009).
50. Russell, S. and Norvig, P.: Artificial Intelligence: A Modern Approach. 3rd edn. Pearson, London (2009).
51. Scott, J. and Spaniel, D.: Your Life, Repackaged and Resold: The Deep Web Exploitation of Health Sector Breach Victims. CreateSpace (2016).
52. Scott, M.D.: Tort Liability for Vendors of Insecure Software: Has the Time Finally Come? Maryland Law Review 67 (2008).
53. The Economist: The world’s most valuable resource is no longer oil, but data, The Economist, last accessed 2020/08/14.
54. Tichy, L.: Product Liability in the Czech Republic. In: Machnikowski, P. (ed.) European Product Liability Law – An Analysis of the State of the Art in the Era of New Technologies, pp. 149–54. Intersentia, Cambridge (2016).
55. United States Attorney’s Office for the Northern District of California: Three Individuals Charged For Alleged Roles In Twitter Hack (31 July 2020), last accessed 2020/08/13.
56.
Wang, S.; Shi, N.; You, L.; Dai, M. and Zhao, Y.: Minimally invasive surgical approach versus open procedure for pancreaticoduodenectomy. A systematic review and meta-analysis. Medicine 96(10), 50 ff. (2017).
57. Weintraub, K.: A game of ‘cat and mouse’: Hacking attacks on hospitals for patient data increase during coronavirus pandemic, USA Today (13 July 2020), last accessed 2020/08/14.
58. Zhang, X.B.: Tort Liability Law. Renmin University of China Press (2011).


13. “Alexa, I will see you in Court” – AI and Freedom of Speech and Expression

Ritwik Prakash Srivastava
National Law Institute University, Bhopal, India

Abstract. We are living in a world where communication with and amongst humans and computers is seamless, and virtually inevitable. Resultantly, the concept of machine language is increasingly becoming redundant. Thus, a question that beckons is whether our existing constitutional law principles of free speech and expression can adequately regulate such AI speech. The author argues that the existing doctrines and legal precedents of freedom of speech and expression which govern human speech can, with appropriate albeit minimal tailoring, adequately accommodate AI speech and the way it interacts with humans as well. The existing jurisprudence envisages and provides for adequate restrictions to regulate obscene and dangerous speech, and the harms of allowing certain content to disseminate. Such an exercise would involve testing the AI speech against the doctrines of free speech, tracing the trail of accountability to the human operators of AI, and accordingly testing the elasticity of our constitutional framework to accommodate such possibilities.

Keywords: Artificial Intelligence, Freedom of speech and expression, Constitutional Law.



Artificial Intelligence (“AI”) has in recent years come into common parlance. Its unsurprisingly fast uptake in society has compelled policy-makers to account for AI as a variable while coming up with policy guidelines for everything from commerce to communication to even disaster management. That very fact is indicative of the versatility of AI, and the various verticals it can be integrated into.



One aspect of AI that humans have become accustomed to is speech. The manner in which humans interact with AI is increasingly becoming anthropomorphised. The mannerisms of Siris and Alexas have come to resemble human speech to such an extent that at times it becomes difficult to distinguish whether you are talking to a machine or a person. Not just that, AI is also starting to extensively affect and alter the opinions of humans by acting as a filter for the content that one interacts with online (Baird, 2018). AI now panders and caters to the individual tastes of users by blocking and deprioritizing other types of content. Thus, a need arises not only for AI speech to have some external controls, but also for regulating how and to what extent AI can interfere with the freedom of expression, thought and opinion of the humans who interact with it. One needs to consider that while machines can improve the quality of life, if allowed to run amok, they can adversely affect humans and human rights. An argument that such interferences are already happening and are a source of concern would not be an unfounded one. Mega corporations, social media websites, and content creation and exchange platforms are already a big part of a person’s life in today’s world. By virtue of how much time an average person spends on a particular platform, and how much they rely on it for everyday tasks, these platforms already hold exceptional power and influence over our autonomy. In that backdrop, policies and laws attempting to regulate technology and its various facets are gaining relevance. Among those is the challenge that machines pose to the constitutional rights of speech and expression. However, it should be kept in mind that any attempt to regulate such speech carries its own risks. Too restrictive a policy would be a huge setback for corporations and consumers alike.
AI interference in the content that one is “allowed” to see needs to be regulated rather than banned, for banning all interference would be immediately noticeable and would affect one’s experience on the particular platform online. Any government intervention can adversely affect free speech rights online. Hence, these problems need to be judicially and judiciously addressed. The author argues that the existing doctrines and legal precedents of freedom of speech and expression which govern human speech can, with appropriate albeit minimal tailoring, adequately accommodate AI speech and the way it interacts with humans as well. The existing jurisprudence envisages and provides for adequate restrictions to regulate obscene and dangerous speech, and the harms of allowing certain content to disseminate. To “tailor” the existing law, however, certain alterations would necessarily have to be made to circumvent the conventional legal definitions and procedures. Such an exercise needs to be undertaken keeping in mind not only the ever-developing standards of technology, but also how society interacts with such an apparatus. Another aspect that needs to be considered is the perspective with which the law chooses to approach this problem. AI can either be taken to be merely a scientific


invention, in which case attention needs to be given to its development and the people in charge of such development. This, while seemingly useful in providing for a supervisory model which prevents AI from “taking over”, can be a hindrance to innovation in the field. The other perspective could be to consider AI as something which evolves with and integrates itself into society. What then needs to be done is to regulate the effects of the advances in AI technology, and not the advances themselves. However, there are considerable barriers to that seemingly obvious approach. AI speech is gradually becoming more “autonomous”. In that scenario, a question needs to be considered of how far we are willing to stretch the thread of accountability, i.e., to what extent the human creator of the AI can be blamed for any untoward speech of the AI. Another aspect that needs to be considered is that of the legal personhood of a machine or a program. A constitutional purist may very well argue that the rights of the constitution can only be conferred upon the “person” who creates the thing and not the “thing” itself (Sahu, 2019). Along the same lines, the possibility of the actions of AI being influenced by its picking up the biases and prejudices that already exist in the majority of society while learning from a sample space needs to be accounted for (Chopra, 2004). The paper delves into the discussion initiated by the previous paragraphs. It investigates how far the hypothesis – that the existing legal standards of human speech can be tailored to accommodate machine speech within their ambit – stands true.


How does AI affect Human Speech?

We have already provided a brief introduction to the far-reaching ways in which AI is adversely interfering with the rights of speech and expression of humans. Online intermediaries continue to use AI in a manner that makes the role of AI opaque to the users. This gives rise to AI surveillance, whereby the tastes and behaviours of users can be kept track of. In this section we undertake a deeper discussion addressing that concern. The aim of the author, while undertaking this exercise, is to throw light on a near-exhaustive set of issues arising out of the ever-smarter speech of AI, which this paper seeks to address.


Digital-profiling of an Individual

One of the most direct ways in which AI can interfere with one’s adequate exercise of freedom of thought and expression is by digitally profiling the individual (Europe, 2017). It can even obstruct the absolute right of freedom to form an opinion by restricting access to content which does not suit our preferences. The AI not only masks content which could be categorized as commercial or entertainment, but also potentially news stories and



other relevant issues, which could be obstructed from the user based on their past searches. This is aggravated by the fact that most users only read the first few results displayed. Such a predicament could potentially reinforce the impact of fake news, as fake news is sensationalized and propagated in a manner that exploits the AI algorithms to trend. This problem comes to the forefront in the case of social media outlets like Facebook and Twitter (al., 2015). The way that the AI decides to organise the user’s feed, and the news that they are exposed to when logging in for the first time, can determine the way one interacts with the world around them. This gives rise to a confirmation bias, whereby the news and opinion pieces that are closer to a person’s beliefs are displayed more often than others (Europe, 2017). This gives rise to a phenomenon called the “filter bubble”, as coined by Eli Pariser. It refers to the situation whereby a user is exposed to less diverse views, creating an echo chamber which amplifies the pre-existing views of the user. This effect is at its most prominent during an election campaign, during which supporters of a particular candidate or political party are only exposed to information about that one candidate, while that of the opponents is suppressed (Sunstein, 2007).
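The feedback loop behind the “filter bubble” can be illustrated with a minimal, purely hypothetical simulation (the topic labels, the scoring rule, and the click model below are all invented for illustration and do not describe any real platform’s algorithm):

```python
import random

# Toy "filter bubble": a feed ranker scores items by how often their topic
# appears in the user's click history, then the user clicks the top result.
random.seed(42)

TOPICS = ["politics-left", "politics-right", "sports", "science", "culture"]

def rank_feed(history, candidates):
    """Rank candidates by their frequency in the click history (plus tiny noise)."""
    def score(topic):
        return sum(1 for h in history if h == topic) + random.random() * 0.1
    return sorted(candidates, key=score, reverse=True)

history = ["politics-left"]       # a single initial click
for _ in range(10):
    feed = rank_feed(history, TOPICS)
    clicked = feed[0]             # the user clicks the top-ranked item
    history.append(clicked)       # ...which feeds back into the ranker

# One initial preference, amplified by the loop, collapses the feed entirely.
print(history)
```

Even this crude model shows the mechanism the paragraph describes: a single early preference, once fed back into the ranker, leaves the user exposed to one topic only, i.e. an echo chamber.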


Removal of harmful (but legal) content and censorship

Other than restricting access to content, another way in which AI can potentially interfere with the freedom of expression of humans is by the automated removal of content from online platforms (What do we mean when we talk about transparency? Toward meaningful transparency in commercial content moderation, 2019, p. 18). Most such platforms utilize an AI-based algorithm to filter content, to prevent extremist and obscene speech from being published. While an argument could be made that such acts are taken in the interest of democracy, they can as well be an impediment to the right of freedom of expression of the users. It has to be pointed out that a human at the helm of such an editorial position could make such mistakes too. However, AI can hardly discern the emotion behind the content. As such, pieces which are satirical, or have elements of cynicism, irony or humour, are at a greater risk of being censored by the algorithm (How do you make an AI get the joke? Here’s what I found on the web, 2020). Some jurisdictions have soft law in place which seeks to regulate the censorship of harmful content (From a ‘Race to AI’ to a ‘Race to AI Regulation’ – Regulatory Competition for Artificial Intelligence, 2019). There still exist some lacunae in this law as to whether it provides adequate safeguards to balance the right of freedom of expression against public morality. For example, in the EU, the E-Commerce Directives (“Directives”) make the service provider responsible for such harmful third-party content (Liability of intermediary service providers in the EU Directive on Electronic Commerce, 2002). The hosts are required to remove


such harmful content, and while the Directives provide that such content should be fairly judged against the principles of freedom of expression, they do not provide for any specific safeguards to achieve such an end. This puts the curators of online platforms in a peculiar situation, where they may be liable both for under-removing and for over-removing certain content. Such shortcomings in the law become more glaring in cases where AI acts as an editor (Liability of intermediary service providers in the EU Directive on Electronic Commerce, 2002). As far as fake content is concerned, the usual practices of fact-checking and red-flagging are our first and only line of defense. This is a largely self-regulated sector. To tackle this issue, a cognitive AI could be used to undertake the exercise of red-flagging untruthful news. However, in that scenario an automated censor could also remove unlikely, yet still truthful, news. Such removal would impair the right to freedom of expression. An infamous example of such a scenario is the automated censorship tool used by the Communist Party of China, which utilizes AI to heavily regulate the content on the internet that can be accessed by Chinese citizens (Ruan, 2019). This can result in a lack of due process, which reinforces the need for accountability mechanisms and grievance redressal systems to be put into place.

Obscene AI speech and Automated Discrimination

The first two problems highlighted are from the viewpoint of the AI being a moderator of human speech. Of late, humans have come up with a new way of interacting with AI: talking to it. This new means of engaging with AI brings with it a new set of problems. The AI in question could talk back in a racist or bigoted manner to the user interacting with it. Platforms like Facebook are already in the process of developing a sophisticated AI which retorts to bigoted speech online (Christopher, 2019). It crafts its responses by sampling the conversations that real users have had on platforms like Reddit and Facebook. While prima facie it seems like an obvious idea, the problem arises when we take a closer look at the manner in which the AI is trained for such speech (Deep learning for hate speech detection in tweets, 2017). An illustration might make the concern clearer: Tay, a Microsoft chatbot, was developed with the intention of developing conversational understanding by engaging with people on Twitter (Vincent, 2016). Within 24 hours, it had turned into a “racist bigot”. There are examples where AI deployed to detect hate speech developed bias on the basis of colour and religion (Carbone, 2019). It is important to regulate such AI, but the question that arises here is whom to put on trial – the AI or Microsoft? With AI getting smarter and more autonomous, the answer to that question is only getting more complicated by the day. Automated discrimination can be intuitive and subtle, and thus difficult to detect, at times even by the victims themselves. Due regard needs to be given to explicit bias, for example, not promoting women content creators, and to implicit bias, for example, only promoting certain content to people associated with a particular profession, which is a part of the sample data used to train the AI (Platformed racism: The mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube, 2017). As evidenced from our previous discussion, there is an acute need to regulate AI’s interactions along the lines of what kind of speech is permissible and what is not. AI poses difficult questions to the existing doctrines of law both as a moderator of speech and as the speaker itself.
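How a detector inherits prejudice from its sample space can be sketched in a few lines (the six-post “corpus”, its labels, and the scoring rule are all invented for illustration; real hate-speech models are far more complex, but exhibit the same failure mode when their training data is skewed):

```python
from collections import Counter

# Toy corpus: in this (deliberately skewed) sample, an identity term happens
# to co-occur with "toxic" labels, so a naive scorer learns to penalise the
# term itself rather than actual toxicity.
corpus = [
    ("you people are awful",       "toxic"),
    ("muslim people are awful",    "toxic"),
    ("muslim neighbours moved in", "toxic"),   # skewed/mislabelled sample
    ("lovely weather today",       "ok"),
    ("great match last night",     "ok"),
    ("the new cafe is lovely",     "ok"),
]

# Per-token score: fraction of each token's occurrences found in toxic posts.
toxic_counts, total_counts = Counter(), Counter()
for text, label in corpus:
    for tok in text.split():
        total_counts[tok] += 1
        if label == "toxic":
            toxic_counts[tok] += 1

def toxicity(text):
    """Average per-token toxicity over all tokens of the input."""
    toks = text.split()
    known = sum(toxic_counts[t] / total_counts[t] for t in toks if t in total_counts)
    return known / len(toks)

# Two identical benign sentences, differing only in the identity term:
print(toxicity("my muslim friend is lovely"))     # penalised by the learned bias
print(toxicity("my christian friend is lovely"))  # scores zero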


Is existing jurisprudence flexible enough to accommodate AI Speakers?

If one were to ignore the source of the speech, they would realise that there are many aspects of the speech that the law already protects. Hereinunder, we build upon that idea and explore whether the source of the speech is really of any concern to us.

Constitutional Protection given to “Speech”

There are several reasons, each based on the jurisprudential understanding of freedom of speech and expression, which could be iterated here. One of the first is the understanding that any speech that is a part of the democratic process should be protected. What it means is that anything that is “worth saying, shall be said” (MEIKLEJOHN, 1965), and is thus protected under the existing doctrines of free speech. Such a contribution, if in effect in the interest of democracy, would then be considered protected. The question then arises whether AI can effectively contribute to such a discourse. Even though the AI, as of the time of writing, cannot realistically be seen as a representative of the people, one cannot undermine it as a participant in the democracy (The forthcoming Artificial Intelligence (AI) revolution: Its impact on society and firms, 2017). AI plays a major role in acting as an intermediary between the elected and the electors, and interacts with the common man on a daily basis (Knott, 2017). The only difference is that it does not interact like a “natural” person. Resultantly, it can be put forth that the basic principles of democracy do not apply to the AI itself, as legal personhood cannot be said to be possessed by the AI speaker. At the same time, however, a juristic personality is granted to unnatural entities like corporations, partnerships, NGOs, etc. (Legal personhood for artificial intelligences, 1991). In First National Bank of Boston v. Bellotti (Powell, 1978), it was discussed whether protection of free speech should


be extended to corporations, associations, etc. The court reasoned that the “worth of speech, does not depend on the source”. Such personalities are said to wield constitutional rights in a slightly different manner than “natural” entities. These derivative rights, although not exercised in the same manner as those of a natural person, still draw their authority from the constitution. It is a legal fiction adopted by the law to achieve a sense of parity in ambiguous notions like which parties are privy to a contract, for the purposes of limiting liability, and for distinguishing how the assets of a corporation or a natural person may be taxed (Legal personhood: How we are getting it wrong, 2015). In any case, the jurisprudence has expanded from mere speech to include within its purview works of art, literature, even music and video games (Defining Speech in an Entertainment Age: The Case of First Amendment Protection for Video Games, 2004). Thus, no matter the feature, if it is a device through which a semblance of “interaction” can take place, and it can be used to deliver a message, a thought or an opinion, that should be sufficient for it to be covered by the protection of free speech.

Legal Personhood of AI

Such a proposition conveniently gives us an opportunity to bring up the discussion on the legal personhood of the AI. A school of constitutional law jurists lays emphasis on the autonomy of the person, which confers upon them the rights granted by the Constitution. Prima facie, this stands as one of the biggest obstacles to the extension of free speech rights and protections to AI speech. With the humanoid Sophia recently being conferred citizenship by Saudi Arabia (Sophia, first citizen robot of the world, 2017), and the discussions on Tesla’s self-driving cars abound, we have examples in front of us which have proven that those barriers may well be pedantic. The test of personhood is increasingly becoming mental-based rather than personality-based (van Genderen, 2018). The definition has been expanded every time a new group has been extended the recognition, be it on the basis of religion, gender, or colour. Eventually, the understanding became that any group that is sentient, or can think for itself and undertake actions by itself, should be extended the veil of legal personality. So the qualifying criterion for granting the same to AI becomes whether the AI can be a decision-maker. And we have enough scientific evidence now to conclude that AI is not only becoming autodidactic – see Google’s DeepMind for example (Gent, 2020) – but is also becoming self-aware, and evolving by itself (Griffin, 2020). Another obstacle is the inability of AI speech to have an expressive, or emotional, element. To that end, a computer also cannot have tangible thoughts, or what might be termed “opinions” (DWORKIN, 1977). But to that too, technology has effectively developed an answer. Computers can now have olfactory, haptic,



optical and proximity indicators so as to have a form of emotional intelligence – what has been dubbed “affective computing” (PINKER, 2002). Thus, even if AI cannot have emotions, it can still express them. Even that former impossibility should not be ruled out for long. The question then becomes one of when rather than if. So being “human” is not a criterion for the granting of legal personality (GRAY, 1921). It is also not necessary that all “groups” are granted all constitutional rights. For example, corporations and partnerships do not have the right to vote. The trend in recent years has been to create a distinct legal status for AI, like an electronic personhood (Parliament, 2017). The European Parliament in 2017 recognized the urgent need to acknowledge the potential harm that may be caused by more sophisticated robot individuals, so that they may be called upon to make good any damage caused by them. In 2019, however, this stance was reduced to functional reasoning by the Expert Group on Liability and New Technologies appointed by the European Commission (“Expert Group”) (Group, 2018), wherein the Expert Group denied the need to define an electronic personhood since, currently, liability is reducible to that attributable to the creators, who are usually natural persons. This stance of the Expert Group is valid to the extent that currently most AIs are operated by corporations (Group, 2018). There are layers of shareholders between the company and the AI itself. Adding another layer of electronic personhood would not serve any purpose that would help in making the legal process more efficient. However, leaving it at that would be turning a blind eye to potential cases, now or in the future, which would warrant some form of personhood to be attributed to the machine.
Summarily, that may be achieved by either i) extending onto machines the concept of legal personhood as it exists for entities like corporations and NGOs; or ii) creating an entirely different notion of juridical personhood for the machine, called the “electronic personhood”, with specific regulations.

AI as a Moderator of Speech

The other set of problems that we previously discussed revolves around how AI influences or moderates human speech. Thus, the next logical line of enquiry should be whether the conduct of AI which seeks to interfere with the freedom of speech and expression of natural persons can be regulated. The harms caused to an individual by limiting their exposure to information cannot be overstated. These external interferences can be regulated and protected against, for here not the speech, but the speaker, or the source, itself is under scrutiny. For most platforms, AI acts as the first line of defense when it comes to preventing illicit and inflammatory content being displayed on the platform (Content moderation, AI, and the question of scale, 2020). However, the user has no control


what parameters and variables the algorithm takes into account while deciding upon the fate of the piece. On top of that, the process is completely opaque and no explaining as to the path of reason taken by the AI is provided. This is where the possibility of AI picking up bias prevalent in the society during its learning phase and censoring content comes into the picture (Content Moderation, and Freedom of Expression, 2020). This may well be unrepresentative of the conscious bias of its creators, and that makes the problem rather difficult to be addressed. There is some content which is prima facie harmful like content that is out-towardly racially and religiously inciting, or pornography. But others require analysing the broader cultural, societal and historic context in which they were posted, and accounting for different forms of content like text, images, audio, and video. From a commercial aspect, the courts have since long regulated advertisements or misleading speech, through a content-based regulatory approach (Powell, 1980). The AI-speakers in the commercial sphere may be subjected to similar regulatory regimes. From a purely human rights perspective, which traces its authority from the constitution of the respective country, both negative and positive obligations are imposed on the State to make sure that its citizens can adequately exercise their right of freedom of thought and expression (Human Rights Committee, 2004). We will not be indulging in the discussion of what in-house measures and policies the companies or platforms may have for adequate redressal of grievances arising related to AI and content moderation, see for example, UNHRC’s the Guiding Principles on Business and Human Rights: Implementing the United Nations “Protect, Respect and Remedy” Framework (enterprises, 2011). 
As far as enforcement of constitutional rights is concerned, the most modern understanding of the jurisprudence is surprisingly straightforward, based on a three-part test (Freedom of expression: article 19 of the international covenant on civil and political rights and the Human Rights Committee’s general comment No 34, 2012). The same was also alluded to in the landmark judgement of India’s Supreme Court in Justice Puttaswamy v. Union of India (Chandrachud, 2017). The right to freedom of speech and expression, provided most prominently under Article 19 of the ICCPR and reciprocated in various jurisdictions across the world including India, the EU, and the USA, is subject to certain reasonable restrictions. Such restrictions cannot be unlawful or arbitrary. To that extent, for an interference to be in accordance with the law, it has to fulfil a conjunctive three-part test of i) legality, or being in accordance with law; ii) necessity, or the restriction being proportional and there being a pressing social need for it; and iii) legitimacy, or it being in pursuit of a legitimate state aim (Freedom of expression: article 19 of the international covenant on civil and political rights and the Human Rights Committee’s general comment No 34, 2012). We can set aside the prongs of legitimate state aim and necessity, since both converge upon the idea that such a step is required for the general public good. The first prong, legality, needs a closer look. If States are able to come up with dedicated legislation which governs the speech and conduct of AI, this prong can be fulfilled. It needs to be noted here that the author does not seek to



lay down the specifics of what such dedicated legislation should cover, for that lies outside the scope of our discussion. The paper is limited to examining whether the existing constitutional framework is flexible enough to accommodate the upcoming notion of AI. Thus, if a framework or set of guidelines is devised and promulgated which addresses the specific problems posed by AI along the lines of accountability, training guidelines, optimal resource allocation, fact-checking, self-regulation regimes, in-company guidelines, increased oversight, and so on, it would be a positive step towards assimilating AI speakers into the current jurisprudence. Such a regulation should be developed with extensive engagement of all stakeholders to prevent the consolidation of intelligence, know-how or power in one or two companies. Adequate consideration must also be given to the user base, which is the group that would actually be interacting with the AI. Most countries, especially those in the EU, already have a developed national strategy for regulating AI (Teaching fairness to artificial intelligence: Existing and novel strategies against algorithmic discrimination under EU law, 2018). Some countries like Sweden, Japan, and the USA have made sector-wise regulations for dealing with AI (Saran, 2018). Separate frameworks seek to govern separate verticals like e-mobility, the health industry, and e-commerce, among others. India has a number of national strategies as of 2018, but it still lacks a dedicated set of guidelines regulating AI.


The Road Ahead: Guiding AI and Being Guided By AI

In the previous section we discussed how the existing framework, with requisite tweaks, is well suited to accommodate AI speech and conduct within its purview. In this section, we take that understanding a step forward and look at the various measures which could prove beneficial in the societal and legal uptake of AI and AI speech. Any attempt to regulate AI speech would come with challenges of its own. The manner in which content is scrutinised is often heavily influenced by the state. In such a scenario, any social welfare intentions behind the speech might have to take a backseat. Another limitation to be addressed is that some offences related to speech require intent on the part of the accused for liability to be imposed. A computer cannot naturally be taken to possess the mens rea necessary for culpability. At the same time, however, we cannot turn a blind eye to the real and heavy harm speech can cause. The increasing sophistication, pace of learning, and reach of computers give States some urgency to move ahead with initiatives regulating AI speech.


Evolution is preferred over mutation. In the process of the state’s attempts at regulating AI, the various stakeholders must be sufficiently involved, for such an exercise carries the risk that AI speech might end up being suppressed by the state or, worse, wielded by the state through legal hokum for propaganda purposes, hidden from the common man’s eyes (Cath, 2018). The problem of assigning personhood to AI could be dealt with by undertaking a functional approach, which is already being pondered by various jurisdictions across the globe. It involves an investigatory exercise on a case-by-case basis: tracing accountability to a single party that may be held responsible for the damage caused. In cases where doing so becomes impossible, liability is traced to the agent primarily in control of the AI-based product or service, or to somebody who ultimately benefits from the usage of such service (Towards a ‘Responsible AI’: Can India Take the Lead?, 2020). The latter is helpful in pinning liability in cases where multiple and various stakeholders are involved with the AI at different stages of creation, training and deployment. Such a step should be complemented by mandatory disclosure requirements to ease the identification of such persons. Taking a leaf from how the concept of legal personhood granted to corporations and partnerships works, a manner of limiting the liability of liable parties should be put into place. As far as content moderation by AI is concerned, it is firstly important to fulfil the prong of legality as discussed earlier. Any legislation must take into account the pluralism and diversity of modern society. Platforms can opt to allow users to choose for themselves the level of content moderation that they seek, while still retaining the overall moderation privilege to account for especially inflammatory content (THE OVERSIGHT OF CONTENT MODERATION BY AI: IMPACT ASSESSMENTS AND THEIR LIMITATIONS, 2020). 
The AI should be made responsive to the users’ needs, not to what the platform thinks is most relevant for them. This will prevent the creation of the filter bubble, and the users themselves would be responsible for the kind of content they are exposed to. Explicit warnings should be provided to the user before they decide to opt for such a recourse. Every individual platform should be mandated to have moderation filters which adequately follow the state’s anti-trust laws, non-discrimination laws, and transparency norms. A minimum set of non-derogable standards, broad enough to accommodate diverse content, can be provided to such platforms (No amount of “AI” in content moderation will solve filtering’s prior-restraint problem, 2020). On that account, there is a strong need to put intermediary liability guidelines in place. India has recently come up with such a framework, and the same may be reworked to include AI in its grasp (Comments on the (Draft) Information Technology [Intermediaries Guidelines (Amendment) Rules], 2018., 2018). Intermediary liability guidelines play an important role in moderating third-party content on websites, which reduces the legal exercises that need to be undertaken while determining liability and tracing accountability. The intermediary should not be penalized for isolated incidents of oversight by the AI. Instead, to account for unforeseen and mechanical errors, the overall performance of the moderation AI should be assessed on a periodic basis (The Right Tools: Europe's Intermediary Liability Laws and the EU 2016 General Data Protection Regulation, 2018). On a similar note, further reinforcing the accountability of AI and addressing the opaqueness of the entire process, there should be robust and easy grievance redressal mechanisms in place to which users can seek recourse (Promotion and protection of the right to freedom of opinion and expression, 2018). Such review should be done by a human and not a machine, so that special cases of context and relevance of the content can be adequately judged. Human oversight at all stages is crucial for maintaining a safe space for the exercise of freedom of thought and expression. Complementing the transparency and accountability mechanisms in place, there should be provisions for safeguarding design as well. Relevant sample datasets should be created to facilitate the learning and training of AI and to reduce inherent and societal biases creeping in. A step should also be taken directed towards the user base. It is important that the government takes initiatives to raise the media literacy of the general public. Training on how one should manage and judiciously share non-personal and sensitive data online, and conduct oneself on such platforms, should be commissioned by the government. Lastly, any policy that is to be framed must involve multiple stakeholders. 
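The user-selectable moderation model suggested above can be sketched in a few lines of code. The following is a minimal illustration, not a production design: the keyword-based severity scorer is a hypothetical stand-in for a trained classifier, and all names, keyword lists, and thresholds are invented for the example.

```python
# Illustrative sketch of user-selectable content moderation with a
# platform-enforced floor for especially inflammatory content.
# The severity scorer is a stand-in for a real ML classifier.

SEVERE_TERMS = {"incitement", "threat"}   # always blocked, regardless of user choice
MILD_TERMS = {"insult", "profanity"}      # blocked only under strict user settings

def severity(text: str) -> float:
    """Toy severity score in [0, 1]; a real system would use a trained model."""
    words = set(text.lower().split())
    if words & SEVERE_TERMS:
        return 1.0
    if words & MILD_TERMS:
        return 0.5
    return 0.0

def moderate(text: str, user_threshold: float = 0.5) -> str:
    """Block content at or above the user's chosen threshold, but never let
    the user's setting override the platform-wide floor for severe content."""
    PLATFORM_FLOOR = 1.0  # severity at which content is always blocked
    score = severity(text)
    if score >= PLATFORM_FLOOR:
        return "blocked (platform floor)"
    if score >= user_threshold:
        return "blocked (user setting)"
    return "allowed"

print(moderate("a direct threat", user_threshold=0.9))   # blocked (platform floor)
print(moderate("casual profanity", user_threshold=0.9))  # allowed
print(moderate("casual profanity", user_threshold=0.4))  # blocked (user setting)
```

The design choice mirrors the text: the user controls exposure to milder material (preventing an imposed filter bubble), while the platform retains a non-negotiable floor for especially inflammatory content.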
To address free speech concerns, the perspectives of different experts, focus groups, industry leaders, and executives must be taken into account, instead of relying solely on the discretion of the legislature or a few handpicked platform leaders. The regulation should be vertical-specific, whereby different sectors that utilize AI are individually regulated before we move towards a consolidated set of rules (Towards a ‘Responsible AI’: Can India Take the Lead?, 2020). Ideas of self-regulatory regimes, regulatory sandboxes and co-regulatory bodies should be promoted to stay ahead in this still-nascent industry.



The increasing sophistication of AI is fascinating – but alarming. Urgent steps need to be taken to regulate this developing AI technology so that its integration in society can be guided by appropriate constitutional and humanitarian values. AI speech is one such aspect which is increasingly becoming familiar to us. The manner


in which it not only interacts with, but interferes with, the exercise of the right of expression of humans needs due care. The existing jurisprudence on the right to freedom of speech and expression, as we have seen, presents surprisingly few barriers to the integration of AI speech within its ambit. With due regard given to the social, historic and linguistic aspects of human speech, and to fundamental regulatory concepts like transparency, accountability, the legal fiction of juristic personality, and the rule of law, the process of integrating AI into the existing legal system can be effectively initiated. States should beware of simplistic approaches to regulation and opt for a comprehensive framework, borne out of multi-stakeholder discussions, with a result-driven perspective. Users should have the choice of limiting the influence of AI on their online presence, and not have such interference imposed upon them by big platforms.

References
1. Baird, A., Parada-Cabaleiro, E., Hantke, S., Burkhardt, F., Cummins, N. and Schuller, B.W. 2018. The Perception and Analysis of the Likeability and Human Likeness of Synthesized Speech. In INTERSPEECH, pp. 2863–2867.
2. Chopra, S. and White, L. 2004. Artificial agents – personhood in law and philosophy. In Proceedings of the 16th European Conference on Artificial Intelligence, pp. 635–639. IOS Press.
3. Sahu, S. and Singh, S.K. 2019. Ethics in AI: Collaborative filtering based approach to alleviate strong user biases and prejudices. In 2019 Twelfth International Conference on Contemporary Computing (IC3), pp. 1–6. IEEE.
4. Council of Europe. 2017. Algorithms and Human Rights: Study on the human rights dimensions of automated data processing techniques and possible regulatory implications. Council of Europe.
5. Zuiderveen Borgesius, F.J. et al. 2015. Should we worry about filter bubbles? Internet Policy Review. [Online] 2015. [Cited: August 20, 2020.]
6. Sunstein, Cass R. 2007. Republic.com 2.0, p. 116. Princeton University Press.
7. Suzor, N.P., West, S.M., Quodling, A. and York, J. 2019. What do we mean when we talk about transparency? Toward meaningful transparency in commercial content moderation. International Journal of Communication, 13, p. 18.
8. Maraev, V., Breitholtz, E. and Howes, C. 2020. How do you make an AI get the joke? Here’s what I found on the web. In First AISB Symposium on Conversational AI (SoCAI).
9. Smuha, N.A. 2019. From a ‘Race to AI’ to a ‘Race to AI Regulation’ – Regulatory Competition for Artificial Intelligence. SSRN 3501410.
10. Baistrocchi, P.A. 2002. Liability of intermediary service providers in the EU Directive on Electronic Commerce. Santa Clara Computer & High Tech. L.J.
11. Ruan, Lotus. 2019. Regulation of the Internet in China: An Explainer. The Asia Dialogue. [Online] October 7, 2019. [Cited: August 26, 2020.]
12. Intagliata, Christopher. 2019. Artificial Language Learns to Talk Back to Bigots. Scientific American. [Online] October 10, 2019.
13. Badjatiya, P., Gupta, S., Gupta, M. and Varma, V. 2017. Deep learning for hate speech detection in tweets. In Proceedings of the 26th International Conference on World Wide Web Companion.
14. Vincent, James. 2016. Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day. The Verge. [Online] March 24, 2016. [Cited: August 28, 2020.]
15. Carbone, Christopher. 2019. AI trained to detect hate speech online found to be biased against black people. The New York Post. [Online] August 16, 2019. [Cited: August 28, 2020.]
16. Matamoros-Fernández, A. 2017. Platformed racism: The mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), pp. 930–946.
17. Meiklejohn, Alexander. 1965. Political Freedom: The Constitutional Powers of the People.
18. Makridakis, S. 2017. The forthcoming Artificial Intelligence (AI) revolution: Its impact on society and firms. Futures, 90, pp. 46–60.
19. Knott, A. 2017. Uses and abuses of AI in election campaigns.
20. Solum, L.B. 1991. Legal personhood for artificial intelligences. N.C. L. Rev., 70, p. 1231.
21. Powell, J. 1978. First National Bank of Boston v. Bellotti, 1978 U.S. LEXIS 83. United States Supreme Court.
22. Dyschkant, A. 2015. Legal personhood: How we are getting it wrong. U. Ill. L. Rev., p. 2075.
23. Garry, P.M. 2004. Defining Speech in an Entertainment Age: The Case of First Amendment Protection for Video Games. SMU L. Rev., 57, p. 139.
24. Retto, J. 2017. Sophia, first citizen robot of the world. ResearchGate.
25. van Genderen, R.V.D.H. 2018. Do we need new legal personhood in the age of robots and AI? In Robotics, AI and the Future of Law, pp. 15–55. Springer Singapore.
26. Gent, Edd. 2020. Artificial intelligence is evolving all by itself. ScienceMag. [Online] April 13, 2020. [Cited: August 20, 2020.]
27. Griffin, Matthew. 2020. Google unveiled a new type of AI that spontaneously mutates and evolves all by itself. Fanatical Futurist. [Online] April 30, 2020. [Cited: August 20, 2020.]
28. Dworkin, Ronald. 1977. Taking Rights Seriously.
29. Pinker, Steven. 2002. The Blank Slate: The Modern Denial of Human Nature.
30. Gray, John Chipman. 1921. The Nature and Sources of the Law, Roland Gray ed. MacMillan.
31. The European Parliament. 2017. Resolution on Civil Law Rules on Robotics. European Parliament.
32. Expert Group on Liability and New Technologies. 2018. Liability and New Technologies. European Commission.
33. Gillespie, T. 2020. Content moderation, AI, and the question of scale. Big Data & Society, 7(2).
34. Llansó, E., van Hoboken, J., Leerssen, P. and Harambam, J. 2020. Content Moderation, and Freedom of Expression. 2020.
35. Powell, J. 1980. Central Hudson Gas & Electric Corp. v. Public Service Commission. Supreme Court of the United States.
36. Human Rights Committee, Eightieth Session. 2004. General Comment No. 31: The Nature of the General Legal Obligation Imposed on States Parties to the Covenant. Human Rights Committee.
37. Special Representative of the Secretary-General on the issue of human rights and transnational corporations and other business enterprises. 2011. Guiding Principles on Business and Human Rights: Implementing the United Nations ‘Protect, Respect and Remedy’ Framework. UNHRC.
38. Chandrachud, J. 2017. Justice K. S. Puttaswamy (Retd.) and Anr. vs Union of India and Ors. Supreme Court of India.
39. O’Flaherty, M. 2012. Freedom of expression: Article 19 of the International Covenant on Civil and Political Rights and the Human Rights Committee’s General Comment No. 34. Human Rights Law Review, 12(4), pp. 627–654.
40. Hacker, P. 2018. Teaching fairness to artificial intelligence: Existing and novel strategies against algorithmic discrimination under EU law. SSRN.
41. Saran, S., Natarajan, N. and Srikumar, M. 2018. In Pursuit of Autonomy: AI and National Strategies. Observer Research Foundation.
42. Chakrabarti, R. and Sanyal, K. 2020. Towards a ‘Responsible AI’: Can India Take the Lead? South Asia Economic Journal.
43. Cath, C. 2018. Governing artificial intelligence: ethical, legal and technical opportunities and challenges. The Royal Society.
44. Nahmias, Y. 2020. The Oversight of Content Moderation by AI: Impact Assessments and Their Limitations. Harvard Journal on Legislation, forthcoming.
45. Llansó, E.J. 2020. No amount of “AI” in content moderation will solve filtering’s prior-restraint problem. Big Data & Society, 7(1).
46. Bailey, R., Parsheera, S. and Rahman, F. 2018. Comments on the (Draft) Information Technology [Intermediaries Guidelines (Amendment) Rules], 2018. SSRN 3328401.
47. Keller, D. 2018. The Right Tools: Europe’s Intermediary Liability Laws and the EU 2016 General Data Protection Regulation. Berkeley Tech. L.J., 33, p. 287.
48. UN General Assembly, 73rd Session. 2018. Promotion and protection of the right to freedom of opinion and expression. UNGA.



14. Laws Regulating Facial Recognition in India

Paranjay Sharma, Research Member, Indian Society of Artificial Intelligence and Law, India

Abstract. Facial recognition is a technology, based on artificial intelligence (AI), which leverages biometric data to identify a person from their facial pattern. In recent months, the Indian government has commenced employing this technology to enhance law enforcement capabilities. In the long term, it plans to build a nation-wide Automated Facial Recognition System (AFRS) to modernize the process of criminal identification and verification by police organizations across the country. As of today, there is no regulation in place to prevent the unlawful use of facial information. It is therefore the need of the hour to create an underlying framework for its usage, since unregulated use of this technology could undermine democratic values and the fundamental rights of citizens. This paper aims to critically scrutinize the current legal framework existing in India with regard to facial recognition technology and to provide rudimentary legal recommendations for the implementation of such technology.

Keywords: Facial Recognition, AI, Machine Learning, Identity Sciences.


Understanding the emergence of facial recognition in India

Facial recognition has become an area of significant interest to the Indian government. Recently, the government approved the deployment of its Automated Facial Recognition System (AFRS) across the nation. AFRS will play a vital role in improving criminal identification and verification by way of the system’s rapid recording, processing, analysis, retrieval, and sharing of biometric identity information between different law enforcement authorities (Kimery, 2020). The National Crime Records Bureau stated that the facial recognition platform will be integrated with other governmental departments to incorporate images collected by them, thereby simplifying the process by which law enforcement authorities identify offenders and streamlining a tedious process which to date has been manual.
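At a technical level, the matching step in systems of this kind can be sketched simply. The following is an illustrative sketch only: it assumes a neural network has already converted each face image into a numeric embedding vector, and all names, vectors, and the similarity threshold are hypothetical.

```python
# Illustrative sketch of the matching step in a facial recognition system:
# compare a probe embedding against a gallery of stored embeddings and
# report the closest identity above a similarity threshold.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def identify(probe, gallery, threshold=0.9):
    """Return the best-matching identity from the gallery, or None if no
    stored embedding is similar enough to the probe."""
    best_name, best_score = None, threshold
    for name, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

gallery = {
    "record_001": [0.9, 0.1, 0.3],
    "record_002": [0.2, 0.8, 0.5],
}
print(identify([0.88, 0.12, 0.31], gallery))  # record_001
print(identify([0.0, 0.0, 1.0], gallery))     # None (no confident match)
```

The threshold is the policy-relevant knob: set too low, it produces the false matches that drive the bias and due-process concerns discussed below; set too high, genuine matches are missed.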


While this national-level project is still in its early stages, the police departments of various states are already in the process of deploying facial recognition technologies built by public and private developers, such as:

• TSCOP + CCTNS in Telangana.
• Punjab Artificial Intelligence System (PAIS) in Punjab.
• Trinetra in Uttar Pradesh.
• Police Artificial Intelligence System in Uttarakhand.
• AFRS in Delhi.
• Automated Multimodal Biometric Identification System (AMBIS) in Maharashtra.
• FaceTagr in Tamil Nadu.


Benefits and drawbacks of facial recognition

Facial recognition technology has immense benefits, ranging from its ability to help find missing children, identify criminals, ensure safety in public places, and prevent human trafficking, to the general maintenance of law and order. Moreover, this technology has been deployed in several sectors such as airports, banks, and shopping centres, where it has been of great use to government authorities in augmenting national security. However, the technology also raises several privacy and security concerns which need to be addressed prior to implementation. The accuracy of facial recognition technology is questionable, since such software suffers from ethnic and racial biases globally. Moreover, the pool of private data can be exposed in a data breach, thus violating the individual’s fundamental right to privacy. Therefore, the cybersecurity of such data must be ensured in order to protect the rights of the people.


3. Legal challenges to facial recognition in India

When we talk about laws concerning data capture, there is no legal framework passed by the Parliament of India which authorises the implementation and maintenance of such facial recognition technologies. The Indian Information Technology Act 2000 (Union of India), India’s first legislation on the electronic medium, makes no reference to facial recognition technology. Laws relating to general data protection are also very minimal, confined to a single civil or criminal provision under the Information Technology Act which does not encompass facial recognition technology. The Indian Government is also pushing for the power to collect private data through multiple state surveillance schemes, which violates people’s fundamental



right to privacy enshrined under Article 21 of the Constitution of India. By virtue of the judgment in Puttaswamy v. Union of India (Supreme Court of India), the Hon’ble Supreme Court of India held that the right to privacy is part of the fundamental right guaranteed under Article 21 of the Indian Constitution. The judgment also expressly says that consent is required before data can be collected. The court further held that any act which infringes the right to privacy, such as collecting personal information for law enforcement purposes, may be carried out only if it passes the proportionality test. Therefore, it is clear that no legal safeguard currently exists in India concerning AFR systems and personal data protection.


Personal Data Protection Bill, 2019 and Right to Privacy

The open collection of sensitive information that the technology entails is not backed by any legal framework to ensure its lawful implementation. Moreover, automated facial recognition technology poses privacy, cybersecurity and regulatory issues. There are, however, certain interception laws that allow the government to snoop on the private data of the people. The recently introduced Personal Data Protection Bill 2019 in the Lok Sabha aims to legalise the collection of citizens’ private data without their consent in matters relating to the security of the state and the maintenance of public order. Moreover, the Bill explicitly states that the personal data of an individual may also be collected for ‘reasonable purposes’, without their say, which directly hinders the individual’s right to privacy. The Indian Government therefore needs to find a customized approach to facial recognition technologies and protect the privacy and fundamental rights and liberties of its citizens.



Adopting the emerging facial recognition technology for public services is essential, but it also raises reasonable questions and concerns about privacy and rights. Ensuring its lawful use is therefore crucial for any government that wants to gain the trust of its people. The potential of facial recognition technology is vast, but it requires proper regulatory mechanisms so that it can be leveraged in a proper manner. The government needs to step up and address the data issues in a holistic manner before India moves further in the direction of automated facial recognition technologies. Moreover, the government should enact stringent laws to counter the potential exploitation of the personal information of its citizens and institute adequate safeguards to balance the merits and demerits of this technology.




With the deployment of AI in facial recognition technologies, aspects related to privacy, security, transparency, redressal, and the lack of due process causing bias and discrimination need due consideration. The following policy measures are recommended:
• Automated facial recognition is a relatively new technology being introduced by law enforcement agencies around the world to identify persons of interest. Giving the public more insight into how the technology functions would be a good step towards a future in which the technology inspires less fear and controversy.
• If the Government is planning to implement the Automated Facial Recognition System in India, it needs to establish adequate safeguards and redressal mechanisms, and create speedy dispute resolution forums to address any violations that may arise from the use of this technology on a mass scale.
• This technology should be implemented only with strong guidelines to avoid any misuse. The authorities need to frame and enforce strict guidelines for the responsible use of facial recognition technologies, effectively and with full accountability to the citizens.
• The data protection law also requires certain amendments to restrict how private and governmental entities may collect and use biometric data, including facial data. Laws also need to be formulated to include robust transparency measures with respect to the use of the technology.
• The laws should also limit the use of facial recognition for mass surveillance and real-time identification. Such scanning should require a warrant, or the law should clearly lay down the exigent circumstances under which it is permitted.
• The government needs to train the facial recognition software on better and more diverse datasets to address bias risks. This will not only avoid the indiscriminate use of this technology against people of a particular colour or gender but will also improve the accuracy of the algorithm (Das, 2020).
• Since the database gathers humongous amounts of data without consent, the cybersecurity of such data needs to be ensured and preserved. The data protection law thus needs amendment so as to avoid any misuse of sensitive information.
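The bias audits implied by these recommendations can be made concrete with a simple metric: comparing the system’s false-positive rate across demographic groups. The sketch below uses synthetic records and hypothetical field names; a real audit would draw on actual evaluation logs.

```python
# Illustrative sketch of a simple bias audit: comparing false-positive
# rates of a recognition system across demographic groups.

def false_positive_rate(records):
    """Share of true non-matches that the system wrongly flagged as matches."""
    negatives = [r for r in records if not r["is_true_match"]]
    if not negatives:
        return 0.0
    false_positives = [r for r in negatives if r["system_flagged"]]
    return len(false_positives) / len(negatives)

# Synthetic evaluation records (all data invented for the example).
evaluation_log = [
    {"group": "A", "is_true_match": False, "system_flagged": True},
    {"group": "A", "is_true_match": False, "system_flagged": False},
    {"group": "B", "is_true_match": False, "system_flagged": False},
    {"group": "B", "is_true_match": False, "system_flagged": False},
    {"group": "B", "is_true_match": True,  "system_flagged": True},
]

for group in ("A", "B"):
    subset = [r for r in evaluation_log if r["group"] == group]
    print(group, false_positive_rate(subset))
# A persistent disparity in rates between groups (here 0.5 vs 0.0) is the
# kind of signal that more diverse training data and periodic audits aim
# to detect and reduce.
```

Periodic publication of such per-group rates would also serve the transparency measures recommended above.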

References
1. Das, Sejuti. 2020. Is There A Case of Regulating Facial Recognition Technology. Analytics India Magazine. [Online] June 20, 2020.
2. Kimery, Anthony. 2020. India set to stand up world’s largest government facial recognition database for police use. Biometric Update. [Online] March 11, 2020.
3. Supreme Court of India. Puttaswamy v. Union of India, (2017) 10 SCC 1 (India).
4. Union of India. The Indian Information Technology Act, 2000, No. 21, Acts of Parliament, 2000 (India).


15. Digital Diplomacy and The Role of Governance in International ‘CyberSphere’

Aditi Sharma, Associate Editor, Indian Society of Artificial Intelligence and Law, India

Abstract. The impact of developing technology on people’s lives cannot be ignored. Its aim has always been to improve the quality of life for current and future generations, yet the development of technology has always been a matter of concern to its creators and its users. It is no hidden fact that whenever we take a step towards a new technological advance, we also introduce ourselves to a new menace, and this discussion is maturing day by day. Technologies have been used to cause mischief to individuals and to countries that aim to use technology for the good of their people. Therefore, the need to formulate laws that would deter those who attempt such mischief cannot be ignored. People generally approach this development by asking experts whether these technologies are good or bad, which is the wrong line of questioning. The correct approach to this trend is to see how we, as a country, relate to the change. Countries use e-tools to reach out to millions of people; this method is called ‘digital diplomacy’. It aims to enhance international associations and manage international crises. It is also used as a tool to counter disinformation, stimulate trade relations, and improve diaspora engagement. But the majority of people are unaware of why their governments use digital diplomacy when they could simply communicate through the media. Here comes the role of another aspect of digital diplomacy. Today we stand at a crossroads where we need the help of big companies specializing in technology to help society comprehend that we need these technologies to become better. The reason is that the rise of Google-Apple-Facebook-Amazon, commonly known as ‘GAFA’, has been an essential factor in expanding the use of technology. People tend to trust these companies more than their governments, because they know that these companies are experts in the subject. 
In a similar line of argumentation, this research paper highlights the role of digital diplomacy and governance in formulating uniform laws for the cybersphere. It also proposes solutions that may help overcome the glitches that may arise along the way.



AI & Glocalization in Law, Volume 1 (2020)

Keywords: Cyberspace, cyber warfare, digital diplomacy, public diplomacy, economy, international cyber laws, international relations, internet governance, sovereignty.


Debaters have always pointed out that framing international law for cyberspace can succeed only if states take an active role in it. Some states have consistently opposed formulating such law because they feel that cyberspace must remain free and diffused. Stakeholders and international institutions have also discussed the subject but failed to reach a conclusion (2019). The overall determination is that the complexity of cyberspace makes it challenging to formulate these laws. Since the subject is progressive, it is doubtful whether a law made today would resolve even the problems that may arise in the future. Artificial Intelligence is one measure with the potential to unravel this problem.

In an era of changing trends, every facet of society is affected by developing technology, and so is diplomacy. Diplomacy has always been essential to every state’s foreign policy; a state that employs every kind of diplomacy, whether economic, public, people’s, or gunboat diplomacy, tends to have the best upshot (TRUNKOS, 2011). But since the end of the Cold War, various trends have changed, and the outlook towards diplomacy has changed with them; ideology now plays a minimal role in it (ROBERT, et al., 2009). Nevertheless, its essence in a state’s policymaking remains the same. Evolving technology has changed the outlook of diplomacy too, introducing a new term, “digital diplomacy” or e-diplomacy. When a state uses technological advances such as social media or global media platforms to promote its social policies, foreign policies, and information propaganda, it is said to be practicing digital diplomacy. Changing diplomacy requires a new type of diplomat who can adapt to such change (TRUNKOS, 2011).
In the same line of argumentation, this research project aims to understand the glitches that prevent the international community from coming together to build a shield that would empower it against threats to cyber-security, and to ask how artificial intelligence could be useful in doing so. It also considers that digital diplomacy plays a crucial role here: how a country forms relationships with big technology companies defines the extent to which it can help society cope with the changing trends of this era.


International Cyber Law: A History

The data shows that almost 79 percent of nations worldwide have domestic laws relating to cybercrime, while 13 percent have no legislation at all (2020). The absence of any uniform legislation in the international sphere is still a subject of
debate among various academicians. A vast number of people have argued in support of drafting uniform guidelines that could govern cyber warfare, but without success. These debates did, however, result in the drafting of the “Tallinn Manual” (SCHMITT, 2017), the product of international groups of experts who, after lengthy discussion, brought out an academic, non-binding study on how international law applies to cyber warfare. Another outlook is that various state and non-state actors see the idea of digital sovereignty as a means of avoiding outside control over data, infrastructure, communication, and information related to cyberspace (GUEHAM, 2017). This makes formulating a uniform international cyber law an even harder challenge.

Broadly, there are three views on managing cyberspace. First, liberal institutionalists call for the importance of institutions and multilateralism in managing cyberspace (WU, et al., 1997). Second, cyber-libertarians propose that cyberspace must remain free of the involvement of any state (BARLOW, 1996). Third, statists hold that states together are responsible for formulating laws on cyberspace, and that unless states come together and make the effort, universal cyberspace laws cannot be formulated (LEWIS, 2010). The discussion has divided along these three views and has been stagnant ever since, with neither the states nor the institutions taking any step towards implementing such a law. Cyberspace is unique among subjects because of its dynamic nature: every new invention changes all of its aspects. Its actors are similarly diffuse and diverse (BASAK, 2010), ranging from state actors to individuals such as hackers and to big internet companies and enterprises. Identifying users is challenging because the internet inherently provides its users with secrecy.
All these actors have their own interests and issues, which makes it challenging to address every issue and arrive at a legitimate solution that benefits everyone. Interestingly, states have also been unable to settle the status of cyberspace as a global common (LIAROPOULOS, 2017). Consideration is given to the fact that any policy or discussion attributing cyberspace the status of a global common must also address the attribution of cyber conduct (RID, et al., 2014). No state would accept liability for its people when a crime is committed in a place where no evidence can be found, or which is a global common; hence no jurisdiction can be determined, which adds to the complexity of the subject.


Mischiefs in Cyber Space

There have been a number of mischiefs in cyberspace, which indicates that cyberspace, being a very subtle area, is prone to attacks. Edward Snowden’s case and the WikiLeaks case are eye-openers. WikiLeaks, an organization that specializes in ‘the analysis and publication of large datasets of censored or otherwise restricted
official materials involving war, spying and corruption’, suffered numerous breaches that resulted in the leaking of critical information. In 2011, it reportedly published almost 779 secret files relating to prisoners’ information (JHA, 2017). Another mischief was witnessed in 2013, when a scandal broke revealing that the U.S. National Security Agency (NSA) was collecting the telephone records of millions of Americans. One newspaper published a secret court order directing a telecom company to share its customers’ telephone data with the NSA, and the surveillance extended even to online firms like Facebook, Yahoo, and Google (2014). The danger of not formulating cyber laws is visible in various incidents. There have been cyber-attacks on countries in the past that threatened national security because they targeted power plants. In 2010, the Stuxnet virus, dubbed ‘the world’s first digital weapon’, attacked over fifteen nuclear facilities in Iran, destroyed over 984 uranium-enrichment centrifuges, and caused a thirty percent decrease in Iran’s enrichment efficiency; it was alleged that non-state actors had a role in the attack (ZETTER, 2014). In 2014, the computer systems of South Korea’s Hydro and Nuclear Power company were hacked and vital information was stolen; the hacker threatened to destroy the reactors if the country did not shut them down, and some reports alleged that North Korea had a part in the hacking (PARK, et al., 2015). In 2019, the Nuclear Power Corporation of India Ltd. confirmed that the Kudankulam Nuclear Power Plant was attacked, even though it was not connected to the internet (Das, 2019). The lack of laws makes it harder to adjudicate those behind such attacks; it is difficult to blame state actors when, apparently, no laws govern the matter.
Beyond this, there have been countless breaches of the cybersecurity of private organizations that store data of public importance. In 2013, it was reported that Adobe’s security was breached and that hackers stole the credit card information of almost three million customers and the encrypted passwords of almost thirty-eight million users. It later emerged that this information was not only stolen but also exposed publicly. As a result, the company had to pay almost one million dollars to its customers and 1.1 million dollars in legal fees to cope with an undefined number of customer claims (BALL, 2018). In 2013-14, Yahoo disclosed that hackers had compromised the personal information, security questions, and answers of its three billion users, costing the company almost three hundred and fifty million dollars (WILLIAMS, 2017).


Artificial Intelligence and Cyber Security

The debate regarding the use of Artificial Intelligence, and the balance between its positive and negative roles, is ongoing. Irrespective of it, various companies are already using artificial intelligence for their needs. Artificial intelligence for cybersecurity
is an entirely new and trending area of research for developers. Many companies that use biometric scans for daily tasks have been victims of security breaches, and even experts agree that passwords are vulnerable to attack and are compromised to steal users’ personal information. The problem is not confined to the individual level: breaches have been reported even at the national level, hampering the security of nations. Such breaches have created fear in people’s minds and a tendency to disbelieve that technology is beneficial. Research has shown that artificial intelligence can be used to detect potential threats, malware, or malicious activities, and numerous cybersecurity companies are training AI systems to identify these threats and respond to them. Capgemini’s report on artificial intelligence surveyed 860 senior executives from IT Information Security, Cybersecurity, and IT Operations in seven sectors across ten countries, and concluded that every company is either using or planning to use artificial intelligence in its cybersecurity initiatives (2019). Companies like Google are already using it for the cybersecurity of their users. The well-known Google Cloud Video Intelligence service, a video storage and analysis unit on Google’s servers, is an example of how Google uses deep learning to create security alerts for its users (MARR, 2017). Similarly, Google used its machine learning framework TensorFlow to help Gmail learn to block spam and create a spam-free environment (VINCENT, 2019). IBM’s Watson cognitive training also uses machine learning to detect malicious activities and cyber threats (2019).
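Google’s and IBM’s production systems are proprietary, but the core idea of “teaching” software to separate malicious messages from benign ones can be illustrated with a toy example. The sketch below trains a minimal naive Bayes text classifier on a handful of invented messages; the messages, labels, and function names are purely illustrative and do not represent any vendor’s actual system:

```python
import math
from collections import Counter

def train(examples):
    """Count word frequencies per label from (text, label) training pairs."""
    word_counts = {"spam": Counter(), "ham": Counter()}
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Pick the more likely label using log-probabilities with add-one smoothing."""
    vocab = set(word_counts["spam"]) | set(word_counts["ham"])
    total = sum(label_counts.values())
    scores = {}
    for label in ("spam", "ham"):
        score = math.log(label_counts[label] / total)  # class prior
        n = sum(word_counts[label].values())
        for word in text.lower().split():
            # smoothed likelihood: unseen words get a small non-zero probability
            score += math.log((word_counts[label][word] + 1) / (n + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

# Invented training data for demonstration only.
examples = [
    ("win free money now", "spam"),
    ("claim your free prize", "spam"),
    ("meeting agenda for monday", "ham"),
    ("project report attached", "ham"),
]
wc, lc = train(examples)
print(classify("free money prize", wc, lc))       # prints: spam
print(classify("monday project meeting", wc, lc))  # prints: ham
```

Real spam filters rest on the same statistical principle, only with vastly larger training sets, richer features, and deep-learning models in place of this hand-rolled counter.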


Role of Diplomacy and Idea of Sovereignty

The impact of changing trends on society comes from these technologies and their effect on national security. Moreover, the idea of ‘sovereignty’ is a prime concern when addressing the requirement to implement laws for cyberspace. This concern was highlighted when China and Russia’s cooperation on digital sovereignty emerged (BUDNITSKY, et al., 2018). It showed that both countries favor promoting digital sovereignty: they demand control over their own cyberspace by upholding the principle of non-interference in the governance of the internet (BUDNITSKY, et al., 2018). This idea is uncontestably supported by countries like Saudi Arabia and Egypt (MASASHI, et al., 2012). What actually violates digital sovereignty, however, depends upon whom you ask. In 2018, the British Attorney General Jeremy Wright QC MP said that ‘a cyber operation, no matter how hostile, never violates sovereignty’ (2018); in 2019, by contrast, the French Ministry of the Armies issued a statement saying that ‘remote cyber operations that cause effects are a violation of sovereignty’ (SCHMITT, 2019). This is one reason the international community has been unable to reach a consensus on formulating international cyber laws. The following statement by David Bailey, a senior national security law advisor for Army Cyber Command, highlights that even the United States has not been able to settle the question:
“In our world of cyberspace operations, we can’t even agree on what ‘use of force’ is. It is a tremendously important concept to us, but there’s no good definition. We do have some statements from the United States, in terms of public statements by our Department of State legal advisors Harold Koh and Brian Egan over the course of several years, which purport to lay out what the U.S. government thinks about these issues and the use of force, but, again, no agreed definition or framework yet.”

If we consider, in arguendo, a situation in which this idea of digital sovereignty roused states to formulate an international cyber law, the law so formulated would itself contest the ideas of digital sovereignty, at the expense of benefits to non-state actors such as companies, civil societies, or even individuals (ADONIS, 2020). Attempts to apply sovereignty to cyberspace are therefore inappropriate to the domain, as they only create chaos and widen the room for debate on the subject (MUELLER, 2019). The concern over digital sovereignty, the spotlight on continual mischiefs, and the continuous efforts to formulate laws have together created disarray. Everything concerning these changing trends affects a state’s way of practicing diplomacy. There are three aspects to the interplay between technological change and diplomacy. First, the introduction of new topics onto diplomatic agendas, such as cybersecurity, internet governance, and privacy (CULL, 2008). Second, the emergence of geopolitics, sovereignty, and interdependence, meaning internet-driven changes in environment-related diplomacy. Lastly, and most importantly, the use of the internet in the practice of diplomacy itself, such as the use of social media accounts by the ministries of various countries (METZGER, 2012). This interplay concerns not only the governments of states; big corporations have a huge role to play in it.
The rise of the big fishes of the market, i.e., Microsoft, Google, Facebook, etc., has played a very significant role in changing the fashion of technology. No one can deny that states and organizations require a new set of skills and innovative ways to approach and influence global policy. These companies help states by shaping policies and regulations related to technological advances. Because they hold such power, they need to collaborate with governments, civil societies, the media, and others to usher in an entirely new era of digitalization that benefits every individual in society. This collaboration gives rise to digital diplomacy as a means by which a country accomplishes its goals and communicates with companies as well as foreign publics. There appear to be two divergent views of digital diplomacy. The first holds that digital diplomacy is just another way of conducting public diplomacy: a country uses e-tools merely to communicate with and inform the public. The second holds that it is a means of increasing the ability to interrelate with foreign publics: the state uses the internet to move from a monologue to a dialogue (ROBERTS, 2007).


Interestingly, the big companies that deal explicitly with these technologies have widened the definition and scope of digital diplomacy today. Various diplomats believe that the manner in which a country enables its resources and allows these companies to use them to implement new advances also defines the extent of that country’s digital diplomacy. A common example is Geneva Engage, an initiative by the Geneva Internet Platform and the State of Geneva which awards ‘the most engagement and outreach use of social media’ by various international organizations and the United Nations Office in Geneva (2020).

Estonia: An Epitome of Digital Society

Estonia is a European country where almost 99 percent of public services are online. Ms. Tiina Intelmann, Ambassador of Estonia to the United Kingdom of Great Britain and Northern Ireland, says that ‘there are only three things in Estonia which you cannot do online: buy real estate, marry, and get a divorce’. The country has a national identity system in which citizens’ physical IDs are paired with their digital signatures (HEATH, 2019). In an interview with CNBC, Estonia’s President Kersti Kaljulaid said: “We have a generation who has grown up knowing that you communicate digitally with your school, because we have an e-school system; with your doctor, because of e-health. You could say the Estonian government offers what normally only the private sector can offer to people. Estonia was a relatively poor country. Our public sector, our government, and our civil servants wanted to offer our people good-quality services. We did it straight away digitally because it was simply cheaper, easy.” (SCHULZE, 2019) The development extends so far that the country even offers i-voting, with almost 30% of the population voting online in elections. Bureaucracies in most countries could never agree to such elections because they know the mischief that can take place. But Estonia has shown the world that although risks are involved, one must be ready to take them, aware that if the experiment succeeds, the propitious will outweigh every unfortunate risk. In 2007 the country faced massive cyber-attacks lasting twenty-two days, which brought down most of its digital infrastructure; the attack was suspected to be politically motivated by the conflict between Estonia and Russia (DAVIS, 2007). It shows that any country, even one with a well-developed digital infrastructure, is susceptible to attacks. What is essential is the way the country copes with such attacks.
Estonia restored itself and helped create the NATO Cooperative Cyber Defence Centre of Excellence (CCDCOE), the same organization that later produced the Tallinn Manual referred to earlier in this paper. The relevance of this example of a country focused on e-governance is to highlight how beautifully it has understood and embedded the use of digital diplomacy to boost its economy. It has used technology not only to create
a digital environment for its citizens by exercising public diplomacy; it has also welcomed many small tech and private companies and cooperated with them to create a perfect and safe milieu. It has overcome the roadblocks and re-established itself. Estonia might offer the global community a plan for building a digital society and grappling with the challenges of technology.

Indian Diplomacy

India is at a pragmatic stage, using diplomacy as a prime tool to anchor its cultural, economic, and political relations. The history of Indian diplomacy dates back to Chanakya, who propounded it. The government adopted the techniques of digital diplomacy effectively and in very little time. The Ministry of External Affairs first posted from its Twitter account in 2010, and in 2011 it used Twitter for the first time to facilitate the successful evacuation of Indian citizens from Libya during the civil war. After the U.S. State Department, the Indian Ministry of External Affairs has the second-largest following on its social media page, at 1.2 million (AMARESH, 2020). India has also been ranked among the top ten countries in global digital diplomacy (CHAUDHURY, 2016). NITI Aayog published a report on the National Strategy on Artificial Intelligence, which described India’s plan for working with artificial intelligence and its focus areas (2018). Recently, the government also introduced the National Artificial Intelligence Portal (2020), which aims to publish all AI-related information and advancements in India and abroad. The Indian Prime Minister, Narendra Modi, is currently the third most followed leader on social media, with almost eight million followers (2014). He is one of the politicians famous for using digital diplomacy effectively to advance international relations. Apart from digital diplomacy, his famous ‘selfie diplomacy’ is seen in his selfies with leaders on foreign tours (2019). He is also known for balancing new-age and traditional diplomacy: he uses social media platforms effectively while also travelling to countries and exercising public diplomacy by addressing audiences directly. His focus never shifts from promoting FDI, a vibrant capital diaspora, and the country’s other development goals.
It is not just the government; various private organizations in India are active participants in digital diplomacy. Investments have been increasing in companies that provide technical support to the government. Investors like the Abu Dhabi Investment Authority have invested in various Indian telecom companies that support the Digital India initiative (2020). This initiative is a flagship programme of the government aimed at transforming India into a digitally empowered society. The government has issued almost 1.12 billion biometric identities, covering almost 88 percent of its population (2019). India is no stranger to the challenges of digital diplomacy and cybersecurity. Lack of digital infrastructure is the most prominent challenge the country currently faces. Disinformation and the propaganda spread by
non-state actors have increased many-fold, and the most significant problem has always been differentiating right information from wrong. Cyber-attacks are also a critical concern for every organization that holds public information; various government organizations witnessed such attacks in the last two years. But these challenges do not bar the way to adopting technological advances. India’s technological achievements, scientific developments, and initiatives like Digital India show that the country is optimistic about the trends of the next phase. There is risk involved, and India knowingly stands ready to take it, aware that if it turns out to be a success, the propitious will outweigh every unfortunate risk.



Conclusion

The governance of conduct in cyberspace is still a question in need of clear answers, and the challenges to its formulation are at a standstill. The debates are increasing and may be never-ending; they will continue until the debaters get their answers. At this stage, it would be useful to respond to those debates so that the ruckus may halt, at least for some time. The only way a country can enhance its technology is by allowing small tech companies to investigate and then develop tools. Sometimes these tools will be effective and sometimes not, but when one proves effective, there is a possibility that the race for development will pause, at least for a while. That not only benefits the nation but also encourages these small tech companies to develop something new of their own. Digital diplomacy plays a crucial part in this: it is the tool that encourages such companies to invest in developing these tools. Not all countries allow or encourage small tech companies, and the problem lies in their understanding of the very core of digital diplomacy. There is no doubt that governments use social media channels to convey information to their people; still, this use lacks the influence, persuasion, and advocacy at which digital diplomacy aims. There is minimal concern for resolving difficult issues and little communication with stakeholders using digital tools. Possibly these governments feel that the risks of using the technology outweigh the outcome, which is precisely the concern of many countries. They need to invest more in the cyber and information domains to overcome these risks. Channelling more sophisticated tools into regular diplomacy is always a good option, and a state must not back off from experimenting in the conduct of digital diplomacy. Though Artificial Intelligence is an emerging trend known to be useful for cybersecurity, we must not rely on it alone.
Alternative solutions, whether technical or non-technical, must always be tested. When a country uses e-tools, regular audits of software and hardware are an effective safeguard that
can prevent unhealthy functioning. At the same time, monitoring outgoing and incoming traffic without breaching privacy is of the utmost importance. As times and society change, technology will change too. Whether the implications of such changes are positive or negative depends on how we receive them; in other words, people’s reaction to a change is the deciding factor in its success. We may accept these changes and make the best of them, or we may reject them for fear of the implications of acceptance. But rejection must never be a reason to stop trying to implement change. In the end, our decisions will reflect on the generations to come.
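The audit-and-monitor suggestion above can be made concrete with a simple, non-AI baseline: flagging hosts whose aggregate traffic volume deviates sharply from the norm, without inspecting message contents (and thus without breaching privacy). The host names and figures below are hypothetical:

```python
import statistics

def flag_anomalous_hosts(traffic_mb, threshold=1.5):
    """Flag hosts whose daily traffic volume (in MB) deviates more than
    `threshold` standard deviations from the mean across monitored hosts.
    Only aggregate volumes are examined, never message contents."""
    volumes = list(traffic_mb.values())
    mean = statistics.mean(volumes)
    stdev = statistics.pstdev(volumes)
    if stdev == 0:  # all hosts identical, nothing stands out
        return []
    return [host for host, mb in traffic_mb.items()
            if abs(mb - mean) / stdev > threshold]

# Hypothetical daily outbound volumes; host-e looks like exfiltration.
traffic = {"host-a": 120, "host-b": 115, "host-c": 130,
           "host-d": 125, "host-e": 9800}
print(flag_anomalous_hosts(traffic))  # prints: ['host-e']
```

An AI-based monitor would replace this fixed z-score rule with a model learned from historical traffic, but the privacy-preserving design choice, watching volumes and patterns rather than contents, carries over unchanged.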

References

1. 2020. About the Award. Geneva Engage. [Online] January 29, 2020.
2. ADONIS, Abid A. 2020. International Law on Cyber Security in the Age of Digital Sovereignty. E-International Relations. [Online] March 14, 2020. [Cited: June 25, 2020.]
3. AMARESH, Preethi. 2020. Digital Diplomacy: Connecting India with the World. [Online] May 21, 2020. [Cited: June 23, 2020.]
4. BALL, Terena. 2018. Adobe’s CSO talks security, the 2013 breach, and how he sets priorities. CSO Online. [Online] April 12, 2018. [Cited: June 22, 2020.]
5. BARLOW, John Perry. 1996. A Declaration of the Independence of Cyberspace. Electronic Frontier Foundation. [Online] February 1996.
6. BASAK, Cali. 2010. International Law for International Relations. s.l.: Oxford University Press, 2010.
7. BAYLIS, John and SMITH, Steve. 2001. The Globalisation of World Politics. 2nd ed. s.l.: Oxford University Press, 2001. 9780198782636.
8. BUDNITSKY, Stanislav and JIA, Lianrui. 2018. Branding Internet sovereignty: Digital media and the Chinese–Russian cyberalliance. European Journal of Cultural Studies. 2018, Vol. 21, 2.
9. CHAUDHURY, Dipanjan Roy. 2016. India on top 10 ranking of global digital diplomacy: Diplomacy Live. Economic Times. [Online] April 06, 2016. [Cited: June 23, 2020.]
10. CULL, N.J. 2008. Public diplomacy: Taxonomies and histories. The Annals of the American Academy of Political and Social Science. 2008, Vol. 616, 1.
11. 2018. Cyber and International Law in the 21st Century. [Online] May 23, 2018. [Cited: June 24, 2020.]
12. DAS, Debak. 2019. An Indian nuclear power plant suffered a cyberattack. The Washington Post. [Online] November 04, 2019. [Cited: June 19, 2020.]
13. DAVIS, Joshua. 2007. Hackers Take Down the Most Wired Country in Europe. Wired. [Online] August 08, 2007. [Cited: June 19, 2020.]
14. 2014. Digital Diplomacy. Gateway House - Indian Council on Global Relations. [Online] December 19, 2014. [Cited: June 24, 2020.]
15. PUTNAM, Robert D. 1988. Diplomacy and Domestic Politics: The Logic of Two-Level Games. s.l.: The MIT Press, 1988, Vol. 42, 3.
16. 2014. Edward Snowden: Leaks that exposed US spy programme. BBC News. [Online] January 17, 2014. [Cited: June 25, 2020.]
17. GILPIN, Robert. 1987. The Political Economy of International Relations. s.l.: Princeton University Press, 1987.
18. GUEHAM, Farid. 2017. Digital Sovereignty: A Step Towards a New System of Internet Governance. Fondation pour l’innovation politique. [Online] February 02, 2017.
19. HEATH, Nick. 2019. How Estonia became an e-government powerhouse. TechRepublic. [Online] February 19, 2019. [Cited: June 19, 2020.]
20. 2019. Is this Aadhaar of the future? Facial biometric technology-based chip-enabled cards issued. Economic Times. [Online] August 28, 2019. [Cited: June 20, 2020.]
21. 2020. IT Minister Launches National AI Portal of India. DD News. [Online] May 30, 2020. [Cited: June 20, 2020.]
22. JHA, Martand. 2017. What Was WikiLeaks All About?: A Classic Case of Cyber Security. Geopolitics. 2017, Vol. 32, 2.
23. 2020. Jio Platforms set to raise Rs 5683.50 crore from Abu Dhabi Investment Authority by selling 1.16% equity stake. Economic Times. [Online] June 16, 2020. [Cited: June 20, 2020.]
24. LEWIS, James A. 2010. Sovereignty and the Role of Government in Cyberspace. Brown Journal of World Affairs. 2010, Vol. 16, 2.
25. LIAROPOULOS, Andrew. 2017. Cyberspace Governance and State Sovereignty. [book auth.] George Bitros and Nicholas Kyriazis. Democracy and an Open-Economy World Order. s.l.: Springer, 2017.
26. MARR, Bernard. 2017. The Amazing Ways Google Uses Deep Learning AI. Forbes. [Online] August 08, 2017. [Cited: June 21, 2020.]
27. MASASHI, Crete-Nishihata and DEIBERT, Ronald J. 2012. Global Governance and the Spread of Cyberspace Controls. Global Governance. 2012, Vol. 18, 3.
28. METZGER, E.T. 2012. Is it the medium or the message? Social media, American public relations & Iran. Global Media Journal. 2012.
29. MUELLER, Milton L. 2019. Against Sovereignty in Cyberspace. International Studies Review. 2019, Vol. 44, 2.
30. 2018. National Strategy on Artificial Intelligence. s.l.: NITI Aayog, 2018.
31. PARK, Ju-min and CHO, Meeyoung. 2015. South Korea blames North Korea for December hack on nuclear operator. Reuters. [Online] March 17, 2015. [Cited: June 19, 2020.]
32. 2019. Reinventing cyber-security with Artificial Intelligence. s.l.: Capgemini Research Institute, 2019.
33. RID, T. and BUCHANAN, B. 2014. Attributing Cyber Attacks. Journal of Strategic Studies. 2014, Vol. 38, 1-2.
34. ROBERT, Jean and FEILLEUX, Leguey. 2009. The Dynamics of Diplomacy. s.l.: Lynne Rienner Publishers, 2009. 9781588266293.
35. ROBERTS, W.R. 2007. What is public diplomacy? Past practices, present conduct, possible future. Mediterranean Quarterly. 2007, Vol. 18, 4.
36. SANER, Raymond and YIU, Lichia. 2003. International Economic Diplomacy: Mutations in Post-modern Times. Clingendael: Netherlands Institute of International Relations, 2003. ISSN 1569-2981.
37. SCHMITT, Michael N. 2017. The Tallinn Manual on the International Law Applicable to Cyber Warfare. s.l.: NATO CCDCOE, 2017. ISBN 978-1-107-02443-4.
38. SCHMITT, Michael. 2019. France’s Major Statement on International Law and Cyber: An Assessment. Just Security. [Online] September 16, 2019. [Cited: June 24, 2020.]
39. SCHULZE, Elizabeth. 2019. How a tiny country bordering Russia became one of the most tech-savvy societies in the world. CNBC. [Online] February 08, 2019. [Cited: June 19, 2020.]
40. 2019. Selfie Diplomacy: Australian PM tweets selfie with Modi, says ‘Kithana acha he Modi!’. The Hindu. [Online] June 29, 2019. [Cited: June 21, 2020.]
41. 2019. The Role of AI in CyberSecurity. EC-Council Blog. [Online] 2019. [Cited: June 21, 2020.]
42. KESLER, Brent. 2011. The Vulnerability of Nuclear Facilities to Cyber Attacks. Strategic Insights. 2011, Vol. 10, 1.
43. TRUNKOS, Judit. 2011. Changing Diplomacy Demands New Type of Diplomats. Washington DC: Institute of Cultural Diplomacy’s International Conference, 2011. p. 7.
44. 2020. Cybercrime Legislation Worldwide. United Nations Conference on Trade and Development (UNCTAD). [Online] February 04, 2020.
45. VINCENT, James. 2019. Gmail is now blocking 100 million extra spam messages every day with AI. The Verge. [Online] February 06, 2019. [Cited: June 21, 2020.]
46. 2019. Why International Law is Failing to Keep Pace with Technology in Preventing Cyber Attacks. World Economic Forum. [Online] WEF, February 25, 2019.
47. WILLIAMS, Martyn. 2017. Inside the Russian hack of Yahoo: How they did it. CSO Online. [Online] October 04, 2017. [Cited: June 21, 2020.]
48. WU, Wei and NIEMUNIS, Andrzej. 1997. Cyberspace Sovereignty: The Internet and the International System. Harvard Journal of Law & Technology. 1997, Vol. 10, 3.
49. ZETTER, Kim. 2014. An Unprecedented Look at Stuxnet, the World’s First Digital Weapon. Wired. [Online] March 11, 2014. [Cited: June 19, 2020.]


AI & Glocalization in Law, Volume 1 (2020)

16 AI & Cybersecurity in India: A Critical Review

Ankita Malik
Research Member, Indian Society of Artificial Intelligence and Law, India
Student, Christ (Deemed to be) University, Bangalore, India

Abstract. Cybersecurity challenges have been evolving constantly due to dynamic shifts in technology. As the threats evolve, policy solutions must take on the function of ensuring that key players are coordinated and aware of the situation at hand. The advent of Artificial Intelligence has brought a paradigm shift in how AI is perceived. AI does not exist in isolation; rather, it constantly overlaps and interacts with various fields, so any policy that is formulated cannot have a solitary frame of reference. Cybersecurity has a similar character: cyberspace has fluid boundaries, with applications in almost every field. The combination of Artificial Intelligence and cyberspace has proven to have many advantages while also presenting a wide array of perils. The aim of this research is therefore to provide policy recommendations for the application of Artificial Intelligence in the domain of cybersecurity, analyzed on the basis of three main pillars, i.e. the government, the private sector and the citizenry. The primary questions sought to be answered pertain to the practicality of using Artificial Intelligence in the Indian jurisdiction, examined through the shortcomings of the current system and plausible solutions to the same. Keywords: Cybersecurity, Artificial Intelligence, Cyber-attacks, Data Breach



The shift of activities across an array of fields onto the Internet has been indisputable. While the application and utilization of the internet has been constantly increasing, there currently exists no uniform set of norms for its regulation. Such norms would not encompass cybersecurity in isolation but also the related fields instrumental in the development of cyberspace. A policy question therefore also arises from the lack of foundational expertise required to formulate these norms, not only because of the vague nature of cyberspace but also due to the traditional view of security. The problem intensifies as technology develops at extremely high speed, resulting in multifaceted problems. This is seen in the recent escalation in engagement with the Internet of Things (IoT), driven by factors such as advances in big data analytics and artificial intelligence (AI) (Simona Jankowski et al., 2014), and it can be expected to grow in the future as it provides a crucial opportunity for key players to gain advantage in an already competitive market.


Cyber-attacks and the Costs of Data Breach

One of the primary reasons for the inability to meet the demand for uniform policy, as mentioned above, is the rapid development of technology. Alongside this evolution, there has been a parallel evolution in the methodology of cyber-attacks. Cyber-attacks have proven disastrous due to the phenomenally high costs of data breaches. This also has a direct impact on the political and human costs incurred, as the integrity and security of the state stand questioned while individuals risk the violation of their rights. Further, threats have become more sophisticated over time as attackers have taken a more organized approach: the lone-wolf concept has given way to a wide range of attackers whose motivations may vary, though the intended end result remains the same (James Dobbins et al., 2015). The government is therefore dealing with conflicting problems, wherein striking a balance that does not encroach upon the rights of parties while maintaining national security and integrity seems a herculean task (Carter, 2015). The IoT acts as an accelerant here, as it has rightly been predicted that the Internet of Things will expand the area of cyberspace, thereby increasing the potential for evolved cyber threats. To combat this, it is essential to look at the technological capacities of a nation, as this helps gauge the deviation from traditional threats and the manner in which developed as well as developing nations respond to them. This is primarily because the threats to all key players in this domain remain more or less similar, while the ends aimed at, and the manner in which a nation deals with them, may differ. This remains a problem with the capacity to cripple a nation, given its phenomenally high economic, political and social costs.
The Cost of a Data Breach Report 2020, brought forth by the Ponemon Institute & IBM Security (Ponemon Institute & IBM Security, 2020), has been instrumental in analyzing the costs incurred every year due to data breaches. The global perspective indicates that in 2020 data breaches cost the USA $8.64M on average, with 237 days taken for identification and containment; they cost India $2M with 313 days for containment, whereas Canada lost $4.5M. The costs of data breaches vary, but what is certain is that, regardless of the variance, the costs incurred are extremely high. Further, the ASEAN region had the tenth highest cost of data breach, and stood highest for data breaches caused by human error, which further indicates the need for stronger cybersecurity norms. Another aspect that needs to be analyzed is the industry with the highest average cost of data breach, i.e. the healthcare industry, followed by the energy industry. A careful scrutiny of the pattern, in terms of the industries worst affected and the most targeted data, i.e. customer PII, would aid in formulating the required norms with an industry-specific approach. This would provide a focused approach better equipped to deal with industry-specific cyber threats. Further, the report has taken account of the potential impacts of COVID-19, considering that the shift to a remote work environment further increases the scope for such risks. It has also proposed practical solutions which can be incorporated into governance standards, such as investments in Security Orchestration, Automation & Response, stress-testing of incident response systems, and investing in governance, risk management and compliance programs.


Interface of Artificial Intelligence and Cybersecurity

AI is predicted to be the groundbreaking technology set to change the dynamics of the global economy as well as how international security has been viewed to date. This is primarily due to its nature as an enabling technology spread over various fields with numerous applications (Paul Scharre et al., 2018), which has also given rise to the view that it might amount to another industrial revolution. The characteristic of any industrial revolution is that it results in paradigm shifts in the political, social and economic composition on the international as well as the domestic front. This AI revolution can therefore trigger effects similar to those of previous industrial revolutions, changing the metrics of global power politics as nations with the best resources for technological advancement leap ahead. Even though the applications of AI are wide-ranging, especially with regard to diplomacy, surveillance, commerce etc., one of the primary areas for research remains cybersecurity, as it forms the basis of protecting anything and everything in cyberspace. Further, cyberspace is not restricted to the boundaries a State conforms to, and policy formation vis-à-vis AI is of foremost importance, as it will be one of the determining factors of who controls power in global fora and how (Paul Scharre et al., 2018 p. 4).
India has already started on the path of incorporating Artificial Intelligence into various arenas in order to boost development (NITI Aayog, 2018); however, the scope of this paper is restricted to its application to cybersecurity. India has the third largest Internet user base in the world (Samuel, 2014), which is predicted to grow at a phenomenal rate as the problem of accessibility is reduced. The focus until now has been on a limited number of fields such as banking, communications etc., as the government has realized that while cyberspace opens up a plethora of opportunities, it also brings forth threats so unique in nature that traditional concepts of security are insufficient to deal with them. Further, India has taken the view of the internet as 'global commons' (Prime Minister of India, 2014); however, cyberspace requires regulation through norms and practices formulated on mutually agreeable terms, taking into consideration the approaches of various nation states. India's stance towards cyberspace and policy formation around it can also be seen in its recommendation to the United Nations that a Committee for Internet Related Policies (CIRP) be constituted (Mueller, 2011), and in its view that the principles of the Tunis Agenda (United Nations, 2005) should be followed. Further, India has made cybersecurity a top priority while simultaneously grappling with enforcement limitations. However, the need for a comprehensive strategy for cyberspace remains, with special focus on an Asian-led approach. There is a lack of cooperation within the region, and while India has held dialogues at the bilateral level with various cyber-powers, most of these have been outside the region (Samuel, 2014 p. 23).


Building Upon Positive Foundations

The change in the arena of operation inadvertently opens the door to new emerging threats which cannot be dealt with under existing legislation or norms. The gravity of these threats is such that they amplify the vulnerabilities of a nation and can result in a complete breakdown of state machinery. The traditional view of security is categorized on the basis of individual security responsibilities, domestic and international problems, military security etc. (The New Policy World of Cybersecurity). With the advent of technology, however, these concepts increasingly interface with one another. A policy for cybersecurity per se cannot be formulated merely on a traditional approach, since the nature of the threats that accompany cyberspace, and their impact, were not envisioned by traditional theories. Cyberspace is not restricted by geographical limits, nor does it exist in isolation; there is a constant flow of information which may require the government to collaborate with foreign law enforcement agencies, not merely to coordinate with domestic ones. It thus becomes important to acknowledge this in order to broaden the scope of policy. However, the primary question of where to draw this policy from remains pertinent, as merely broadening its scope cannot determine its foundation. A plausible solution is to take a bottom-up approach wherein pre-existing policies are used to combat threats to cyberspace. Examples include deterrence modeling derived from policies governing nuclear weaponry (The New Policy World of Cybersecurity p. 457), the Information Technology Act (Government of India, 2000), the Information Technology Rules, 2011 (Government of India, 2011), laws governing Intellectual Property (Government of India, 1957) etc. The foundation is derived from pre-existing legislation and norms, flowing into a detailed analysis that demarcates the shortfalls and leads to the drafting of a comprehensive policy. Relying on pre-existing documents can prove an effective plan; however, it must be restricted to the preliminary stages of development. Further, due to the very nature of cyberspace, any policy to be formulated must come through a collaborative effort of the government, public-private coordination and a vigilant citizenry (The White House, 2011). Interacting with the various nation states battling the same issues, alongside the aforementioned approach, can help develop a comprehensive policy.

Systematic categorization for efficient operation

The fields of operation with regard to security are changing, as elucidated above, and with increased speed and efficiency in communications comes inadvertent exposure to vulnerabilities within systems. There is a constant need to develop dynamic and adaptive policies to match technological developments of a similar nature. For example, in the nuclear security sector, AI can be used to decrease the vulnerabilities introduced by cyberspace: AI supported by biometric data can authenticate users and limit access while simultaneously analyzing behaviors to detect anomalies (Decker, et al., 2010). The nature of Artificial Intelligence is that it is augmented intelligence with a lower possibility of error, especially for the repetitive tasks for which the AI may be specifically designed. However, while it may be used to increase protection, it may also launch an array of threats, as bad actors can use it to enhance their attempts to exploit pre-existing vulnerabilities. Systematic categorization for efficient operation can prove to be the tool that helps thwart these attempts. Since communications and the availability of response mechanisms decide how a threat is responded to, it is important to strengthen them, as malicious actors may use AI to present false risk communications to the public. This targets response mechanisms and can compromise them, resulting in a systematic erosion of resources while also creating unnecessary public alarm (Decker, et al., 2010 p. 3). This shows how otherwise foreseeable consequences are enhanced by the nature of the technology. To combat this, it is essential to have targeted trans-border collaborations (United Nations, 2018) to demarcate and evolve existing best practices in terms of communication and response mechanisms.
For example, it is necessary to develop risk communication that amalgamates prepared public emergency communication with calculated incident reporting (Decker, et al., 2010 p. 5). Another best practice to be considered is risk analysis, wherein current threats are analyzed in order to predict how these risks will develop with evolving technologies.

Data Exchanges & Review Strategies

Incident disclosures can prove extremely helpful in determining the strategy a state adopts. The threats that various nations face with regard to cyberspace follow a similar trajectory. Therefore, disclosures of the various incidents a state has dealt with, regardless of whether they had consequences, can help build a database of practically applied best practices. This can occur through voluntary sharing and open-source reporting, providing an opportunity to replicate models and build upon strategies that worked previously (Slowik, 2020). While it is important to formulate strategies for the future, it is also essential to review the strategies already in place (Decker, et al., 2010 p. 6). This includes a review of state as well as international approaches to cybercrime and a detailed analysis of how existing policies are being enforced. Further, this has an incidental branch, i.e. cooperation challenges, since concepts such as data sharing as well as collaborative strategy reviews can only function if states engage at the international front.
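To make the idea of voluntary incident disclosure concrete, the sketch below shows one way a minimal, machine-readable incident record could be structured for open-source sharing between states or response teams. This is a hypothetical illustration only; the field names are invented for the example and are not drawn from any existing standard.

```python
# Hypothetical sketch of a shareable incident-disclosure record.
# Field names are illustrative, not from any official schema.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class IncidentDisclosure:
    incident_id: str
    sector: str                  # e.g. "energy", "healthcare"
    attack_vector: str           # e.g. "phishing", "supply-chain"
    had_consequences: bool       # disclosed even when no damage occurred
    mitigations: list = field(default_factory=list)

    def to_json(self) -> str:
        """Serialize the record for open-source sharing."""
        return json.dumps(asdict(self), sort_keys=True)

record = IncidentDisclosure(
    incident_id="2020-0001",
    sector="energy",
    attack_vector="phishing",
    had_consequences=False,
    mitigations=["credential rotation", "mail filtering"],
)
print(record.to_json())
```

A shared, structured format of this kind is what would let disclosures from different states be pooled into the database of best practices the paragraph above envisions.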

Utilization of adaptive algorithms and Quantum Computing

The strength of artificial intelligence lies in the creation of dynamic algorithms, which provide an adaptive approach wherein security systems work on pre-existing data to respond to attacks of a similar nature, as explained in the previous section. Another dimension of AI and Machine Learning, however, is to use these powerful algorithms to locate evidence substantiating the identity of an attacker by recognizing the attacker's modus operandi (Yangyue). This helps make the cost of a cyber-attack phenomenally high, which may be done through two approaches (Yangyue p. 21): first, customizing enhanced defenses based on the capacity of the attacker, and second, increasing the intensity of retaliation in order to achieve deterrence by punishment. However, these require a myriad of resources and are models which may work in the preliminary stages, when it is determined that the opponent does not have the resources to deal with either the retaliation or the enhanced protection (Lindsay, 2015). To combat these specific issues, unsupervised learning can take on the task of extensive data analysis and use it to identify deviations from normal behavior, while incorporating the recommendations discussed above.
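As a minimal illustration of the unsupervised, behavior-based approach described above, the following sketch learns a per-feature baseline from "normal" activity logs and flags observations that deviate strongly from it. This is a toy example under stated assumptions, not a production detector; the session features are hypothetical.

```python
# Illustrative sketch: flag behavioral anomalies against a learned baseline.
# Feature names and data are hypothetical examples, not from the paper.
from statistics import mean, stdev

def fit_baseline(samples):
    """Learn a (mean, stdev) pair per feature from normal activity."""
    features = list(zip(*samples))
    return [(mean(f), stdev(f)) for f in features]

def is_anomalous(baseline, observation, threshold=3.0):
    """Flag an observation whose z-score exceeds the threshold on any feature."""
    for (mu, sigma), value in zip(baseline, observation):
        if sigma > 0 and abs(value - mu) / sigma > threshold:
            return True
    return False

# Hypothetical per-session features: [logins_per_hour, bytes_uploaded_mb]
normal_sessions = [[4, 10], [5, 12], [3, 9], [6, 11], [4, 10], [5, 13]]
baseline = fit_baseline(normal_sessions)

print(is_anomalous(baseline, [5, 11]))    # typical session -> False
print(is_anomalous(baseline, [40, 900]))  # burst of activity -> True
```

Real systems would use richer models than a z-score test, but the shape of the task is the same: characterize normal behavior from data, then surface deviations for human review.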



An aid in this context can be the powerful tool of quantum computing, considered an enabler of advanced predictive analytics. The data used for quantum computing plays an important role in determining which actor has a competitive edge: democratized data provides a neutral playing field, whereas dark data, i.e. data which has never been exploited, provides a strategic advantage (Micheal Brett et al., 2017). Further, while AI can be instrumental in changing the way cybersecurity is viewed, it is also important that vulnerabilities within AI are constantly eradicated so as to protect it from data manipulation (Micheal Brett et al., 2017 p. 5) and to safeguard models by thwarting attempts to break the infrastructure upon which they rely.



AI and cybersecurity are such diverse and dynamic fields that it would be impossible for any state to ignore the importance conferred upon them, taking into consideration their growing prominence in national as well as international fora. Artificial intelligence and cybersecurity can work to the advantage of attackers as well as defenders, since all parties involved have access to the technology. However, facilitating dialogue amongst all affected and interested parties can provide inclusive policy formulations, thereby materializing efforts to strengthen a state's cybersecurity stance.

References

1. Carter, Ash. 2015. 'Remarks by Secretary Carter at the Drell Lecture, Cemex Auditorium, Stanford Graduate School of Business' (transcript). United States Department of Defense. [Online] April 2015.
2. Decker, D., Rauhut, K. and Fabro, M. 2010. Prioritising Actions for Management of Cybersecurity Risks. Stimson Centre. [Online] 2010. [Cited: August 23, 2020.]
3. Government of India. 2011. Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011.
4. —. 1957. The Copyright Act, 1957.
5. —. 2000. The Information Technology Act, 2000. New Delhi, India, 2000.
6. James Dobbins et al. 2015. Cybersecurity: Strategic Rethink. RAND Corporation. [Online] 2015.
7. Lindsay, J. 2015. Tipping the Scales: The Attribution Problem and the Feasibility of Deterrence Against Cyberattack. Journal of Cybersecurity. [Online] 2015. [Cited: August 20, 2020.]
8. Micheal Brett et al. 2017. Artificial Intelligence for Cybersecurity: Technological and Ethical Implications. Center for Cyber and Homeland Security at Auburn University. [Online] 2017. [Cited: August 25, 2020.]
9. Mueller, Milton. 2011. A United Nations Committee for Internet Related Policies? A Fair Assessment. Internet Governance Project. [Online] October 29, 2011. [Cited: August 17, 2020.]
10. NITI Aayog. 2018. National Strategy for Artificial Intelligence. [Online] June 2018. [Cited: August 17, 2020.]
11. Paul Scharre et al. 2018. Artificial Intelligence: What Every Policymaker Needs to Know. Center for a New American Security. [Online] 2018. [Cited: August 17, 2020.]
12. Ponemon Institute & IBM Security. 2020. Cost of a Data Breach Report 2020. IBM. [Online] 2020. [Cited: August 8, 2020.]
13. Prime Minister of India. 2014. Prime Minister's Statement in the 6th BRICS Summit on Political Coordination: International Governance & Regional Crises. Ministry of External Affairs, Government of India. [Online] July 15, 2014. [Cited: August 12, 2020.]
14. Samuel, Cherian. 2014. India's International Cybersecurity Strategy. S. Rajaratnam School of International Studies. [Online] 2014. [Cited: August 13, 2020.]
15. Simona Jankowski et al. 2014. The Internet of Things: Making Sense of the Next Mega-Trend. Goldman Sachs Global Investment Research. [Online] September 2014. [Cited: August 14, 2020.]
16. Slowik, Joseph. 2020. Evolution of ICS Attacks and the Prospects for Future Disruptive Events. Semantic Scholar. [Online] August 27, 2020.
17. Harknett, Richard J. and Stever, James A. The New Policy World of Cybersecurity. Public Administration Review, Vol. 71, 3.
18. The White House. 2011. National Strategy for Trusted Identities in Cyberspace. [Online] April 2011. [Cited: August 25, 2020.]
19. United Nations. 2005. Tunis Agenda for the Information Society, WSIS-05/TUNIS/DOC/6(Rev. 1)-E. [Online] November 18, 2005.
20. United Nations. 2018. Approaches to a Wicked Problem: Stakeholders Promote Enhanced Coordination and Collaborative, Risk-Based Frameworks of Regional and National Cybersecurity Initiatives, IGF 2018 WS#75. Internet Governance Forum. [Online] November 2018.
21. Yangyue, Liu. 2019. The Role of Artificial Intelligence in Cyber-deterrence. East Asian Perspectives, Volume II. Stockholm International Peace Research Institute. [Online] [Cited: August 19, 2020.]


17 Impact of Fake News, Misinformation and Computational Propaganda on Electoral Politics: Some Global Perspectives

Soundarya Rajagopal
Research Member, Indian Society of Artificial Intelligence and Law, India
Student, Gujarat National Law University, Gandhinagar, India



2016 became a landmark year in global elections when the Cambridge Analytica story broke, revealing that the personal information of Facebook users was used to micro-target political advertisements during the 2016 US Presidential elections. It gave rise to conversations about personal data collection by social media platforms and its use by data brokers and campaign consultancies, thus bringing into focus the impact of social media on electoral politics. The use of bots to circulate misinformation and amplify attention towards certain hashtags rose to prominence during the Brexit referendum, and has since been identified as a prevalent means of disseminating computational propaganda in many countries. More nefariously, co-ordinated misinformation networks have been identified all over the world, indicating their global deployment. Democratic elections rely on the decision-making of the populace; hence, the creation of false and misleading narratives can skew people's perceptions of important policy issues. Social media algorithms show the user content tailored to their preferences, and this has led to the worrying phenomenon of "echo chambers" and "filter bubbles" that reinforce users' views instead of displaying an array of diverging perspectives. The use of bots to game the algorithm and create misleading impressions in support of or opposition to a candidate can also harm the democratic process by creating a false consensus. Machines are taking over conversations and political interactions that would otherwise be had by humans. In trying to curb fake news and misinformation, governments have had to grapple with the constitutional right to free speech, which is also an integral aspect of democracy. The potential for misuse and censorship in the name of such legislation poses an equally concerning problem regarding the implementation of such law.



This paper will use case studies of three different national elections to study the nature of misinformation and computational propaganda in countries with varying democratic cultures. The three countries studied are democracies which have held elections in the past two years, implying that public and private institutions had been aware of the above-outlined social media threats due to the increased international attention to them in light of the US elections and the Brexit referendum. Brazil, Italy and Canada were chosen as they display variations in their levels of economic and political development, along with diverging cultures. The aim of this study is to draw inferences about threats that have been, and may be, posed to elections, and to lay emphasis on the dichotomies of social capital and free speech in the digital age.

2 Review of literature

The role of the internet in democratizing political discourse has been instrumental in fostering freedom of speech and access to information. Globally, there has been declining trust in media and political institutions, and people are increasingly taking to social media to obtain their news. Social media has been integral to sharing content and expressing political opinions, first seen in the uprisings of the Arab Spring (Howard, 2013). However, this has come with the pitfalls of unregulated spaces and therefore poses a serious threat to the integrity of elections, referenda and other democratic processes. Propaganda has been a tool of persuasion used to push a certain narrative since time immemorial. It can be rooted in truth, misleading information or even blatant falsehoods. In the digital age, this has morphed into computational propaganda, which involves the use of algorithms, data mining and profiling to communicate a certain message to users (Wooley, 2020). This can take the form of fake news, micro-targeted advertising, use of bots and co-ordinated disinformation networks. The particular advantage of computational propaganda is that it can be tailored to mass appeal as well as to targeted audiences at a low cost (Bradshaw, 2018). A unique danger of computational propaganda is that most people use applications such as Facebook, Twitter and Whatsapp, curating their feeds through the algorithm, mostly for networking and entertainment purposes. Computational propaganda is also frequently disseminated on chatting applications (Bradshaw, 2018). Thus, users may opt into circles of coordinated disinformation networks without realising it, becoming more susceptible to manipulation through fake news and misleading information. In a culture that rewards engagement and sharing of content on these platforms, this can lead to further unintentional spread of fake or misleading propaganda by users.
In this context, it is integral to distinguish between disinformation and misinformation. Disinformation refers to the dissemination of information that the sharer knows to be false, implying an intent to mislead (Bennett, 2018). Misinformation, while also encompassing disinformation, includes circumstances where the sharer is not aware of the misleading nature of the information; in fact, they may themselves have been misled into believing it. Due to the lack of a regulating authority to assess the veracity of disseminated information, spreading misinformation is easier than ever. In the context of electoral politics, researchers disagree about the impact of computational propaganda and misinformation campaigns on people's choices. Since "echo chambers" (Exposure to Opposing Views on Social Media Can Increase Political Polarization, 2018) are rampant and humans have a confirmation bias (Cantarella, 2020), it is argued that users make a conscious choice to obtain news from certain networks and base their views on it. On this view, the true influence of these techniques in changing people's opinions is limited; they only serve to reinforce them. However, this reinforcement can be instrumental in causing greater polarization (Sunstein, 2017). In the author's opinion, however, this ignores the long-term systemic issues which are inevitable in the digital age. The use of fake news, bots and misinformation was documented in over 48 countries in 2017 (Bradshaw, 2018). In this context, it is important for regulatory agencies and the public to recognise the impact these techniques have on electoral politics and to frame regulations which are effective in securing the democratic environment without infringing on freedom of speech.

3 Case studies

1. 2018 Brazilian Elections

Background. Brazil is a young democracy that was under a military dictatorship until 1985. It has a presidential form of government in which a president may be elected for a maximum of two successive terms. The country has historically elected left-wing leaders to power, but the vote is often split among a large number of parties; for example, there were 23 to 28 parties in the Congress in 2014, implying fierce inter-party competition and a likelihood of unstable coalition formation to obtain a majority. The 2018 Presidential elections took place in the wake of an economic slowdown and the Lava Jato scandal. What started as a money-laundering investigation unearthed rampant corruption among high-level politicians in Brazil, leading to the jailing and investigation of several well-known leaders and the impeachment of the then President, Dilma Rousseff. Public trust in leaders had been eroding, and the politically motivated news coverage by corporate media houses also pushed a greater number of citizens to turn to social media for information. It is in this backdrop that Jair Bolsonaro contested the 2018 elections, emerging from a small, virtually unknown right-wing party. He went on to win the Presidential elections with over 55% of the popular vote.


AI & Glocalization in Law, Volume 1 (2020)

Brazilian electoral laws allow each politician unpaid television campaigning time in proportion to their representation in the Congress (Machado, 2020). Bolsonaro, receiving just eight seconds of air time, ran a powerful and far-reaching social media campaign to capture popular attention. As the only right-wing candidate, he was widely discussed on social media platforms in the lead-up to the election and was positioned as the solution amid crumbling public opinion of the politicians of the status quo.
Use of misinformation and computational propaganda in the 2018 Presidential elections. The 2018 elections saw a rise in fake news and misinformation circulated by coordinated entities. Brazilian law states that candidates and political parties may not pay for electoral advertising on social media sites (Eleitoral, 2015), but it does not prohibit private entities and individuals without a direct connection to these parties from placing such advertisements. This loophole may therefore be exploited by parties hiring third parties to place such advertisements instead. However, the truly effective misinformation campaigns were largely carried out by groups of individuals and bots who shared doctored media and misleading information in organised networks. Existing literature indicates that the two primary platforms used for these purposes were Twitter and Whatsapp. Whatsapp has seen wide adoption by the Brazilian population as it is free, easy to use and offers a high level of security due to the encryption of messages. The relationships formed within Whatsapp groups can generally be said to be more personal, as the platform is used for private communication. Several studies identified co-ordinated networks of profiles that shared misinformation in these groups.
One group of researchers found that 56% of the most frequently shared political images could be categorized as misleading, often containing dubious claims or lacking context (Tardáguila, 2020). Another study monitored 110 political WhatsApp groups for one week during the election period and found that some of the most active users sent over 25 times the number of messages an average user sent, and that 80% of these users did not display their names and could not be traced on the internet. The results also showed that such users were often members of multiple groups, supporting the idea that political images spread through a highly interconnected network even on a private messaging platform like WhatsApp (Medium, 2020). This indicates that these networks were used to perform broadcast functions and spread misinformation in a coordinated manner. Identification and removal of fake news, whether by government authorities or by WhatsApp itself, is not possible because the messages are encrypted. The information obtained about these networks has therefore come only from researchers joining the groups and actively cataloguing the political messages. The true magnitude of the misinformation networks that played a role in the 2018 elections is unknown.

Twitter has come to the forefront of political conversation due to the platform's increasing adoption by politicians. Statements made by political leaders are often relayed by news outlets, and the platform has the power to shape conversations and draw attention to issues through its Trending tab. Studies found that engagement on hashtags favourable to certain candidates was inflated using bot comments so that they trended (Arnaudo, 2020). The conversations on Twitter were also highly partisan. Several parties used such tactics, with Bolsonaro's supporters spreading high levels of fake news and supporters of Lula and Haddad (the incumbent party) generating higher numbers of shares (Oxford Blog, 2020). However, the network of bots tweeting in support of Bolsonaro was notably highly connected.

Impact. It is hard to quantitatively establish a definite causal link between the misinformation campaigns and voting behaviour, but the case of Brazil's elections shows some of the clearest results. Bolsonaro, a candidate who was barely discussed prior to these elections, became the central figure due to social media networks committed to spreading news, both true and false, in support of him. It must be remembered, however, that he was unique in being the only far-right candidate contesting the elections, and that extreme levels of disillusionment greatly contributed to polarization and drew voters to his populist rhetoric. Even so, the high prevalence of fake news and misleading information, combined with the inadequacy of fact-checking sources, certainly fostered false impressions that may have impeded the decision-making of voters, which is fundamental to democracy. Furthermore, the creation of filter bubbles by the algorithmic tendency to recommend similar content may have propagated a polarized and biased narrative in users' feeds, further playing into the inherent confirmation bias of humans. Voters may therefore not have had the opportunity to hear all sides, owing to the hijacking of narratives by techniques designed to mislead them.
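The coordinated-account heuristics reported in the WhatsApp studies above (accounts sending many times the average message volume and posting across multiple groups) can be sketched in a few lines. Everything here, from the function name to the thresholds, is an illustrative assumption, not the cited studies' actual methodology:

```python
# Hypothetical sketch: flag accounts whose message volume far exceeds the
# average and that post in several groups at once -- the two signals the
# WhatsApp monitoring studies relied on. Thresholds are illustrative.
from collections import defaultdict

def flag_suspicious_accounts(messages, volume_factor=25, min_groups=3):
    """messages: iterable of (account_id, group_id) pairs."""
    counts = defaultdict(int)   # messages sent per account
    groups = defaultdict(set)   # distinct groups each account posts in
    for account, group in messages:
        counts[account] += 1
        groups[account].add(group)
    avg = sum(counts.values()) / len(counts)
    return sorted(
        acc for acc in counts
        if counts[acc] >= volume_factor * avg and len(groups[acc]) >= min_groups
    )
```

In practice, researchers applied comparable heuristics manually after joining the groups, since end-to-end encryption prevents the platform from running such analysis on message content itself.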

2 2018 Italian Elections

Background. Italy is a Western European democracy that follows a parliamentary form of government. The parliament is bicameral and the Prime Minister is the head of government. The 2018 elections, held in the wake of an economic crisis, led to the first populist majority victory in the region, with a coalition of the League (“Lega”) and the Movimento 5 Stelle (M5S) forming the government. Giuseppe Conte, a lesser-known politician, became the Prime Minister. The primary issues of contention during the election revolved around immigration and the corruption of the privileged elite. The former issue, in particular, was extremely divisive and led to large-scale expressions of xenophobia and Islamophobia. The murder of an Italian woman by a gang of Nigerian men further fueled these anti-immigrant sentiments.



Media coverage in Italy has long been hyper-partisan, and public trust in the media has seen a definite decline. Propaganda is protected under the freedom of speech rights given to Italian citizens. In 2017, an attempt was made to amend the Italian criminal code to make the dissemination of fake news punishable by law, but it faced severe opposition, as the provisions were viewed as an infringement upon citizens' freedom of expression and compliance could have amounted to private censorship. Later in the same year, another attempt at a fake news law was made, this time focusing on the “illegal content” circulated on social media. The bill was not passed. In 2018, guidelines titled “Guidelines for equal access to online platforms during the 2018 general election campaign” were issued to all media platforms, including social media platforms, to curb disinformation and fake news circulation during the election period.

Use of misinformation and computational propaganda in the 2018 elections. The rhetoric on issues such as immigration and corruption was polarized and hateful, creating fertile ground for spreading misinformation and fake news. Various studies have attributed the use of these techniques on social media platforms like Facebook and Twitter to bot networks supporting Lega and M5S. A study conducted during the elections found 24 networks involving 82 entities that used coordinated behaviour to spread misinformation. These networks tended to share links to widely known fake news sites far more often than users who did not appear to belong to them, suggesting deliberate attempts to push misinformation into the public eye (Fabio Giglietto, 2020). It was also found that these misinformation networks could be mapped into both centralized and clustered structures (Fabio Giglietto, 2020).

The use of bots has been clearly documented by several studies, the most widely cited being the one undertaken by Avaaz (Amazonaws, 2020), which identified several pages that had grown in popularity by discussing topics of non-partisan appeal, such as football, lifestyle and jokes, and later changed their names and peddled misinformation to followers who had not signed up for political content. The findings of the study helped Facebook shut down 24 misinformation pages with a total audience of almost 4 million subscribers. On the whole, the parties that received the most engagement on social media also appeared to be the ones supported by the coordinated misinformation networks, suggesting that such networks were used to mislead the public both about the amount of support these parties enjoyed and about the factual nature of contentious issues during the election.

Impact. While the presence of bot networks and misinformation campaigns during the 2018 elections is confirmed by several studies, their impact has been found to be minimal. A prominent study measured this impact in two different provinces of the country, one largely Italian-speaking and the other German-speaking (Pollicino, 2020). The latter was used as the control group on the reasoning that there is less incentive for fake news to be manufactured in German, a language spoken by a very small fraction of the population. No real difference was found in the voting behaviour of the populations of the two provinces. Considered in light of social media algorithms and the idea of “filter bubbles”, it is very likely that this misinformation circulated mostly within the segments of the population that already supported Lega and M5S, and that users entered these bubbles through their own choice of content. Though no clear causal link is established in this case, the impact of misinformation in further cementing a person's beliefs cannot be discounted. Given the divisive nature of the election issues, this misinformation could have contributed to polarization, which is extremely dangerous for free and rational debate in a democracy.

3 2019 Canadian Election

Background. Canada is a constitutional monarchy that follows the parliamentary system of governance, with the Prime Minister as the head of government. It is a developed country that is significantly influenced by its neighbour, the United States of America. Internet penetration stands at a high 90%, with 75% of the population using social media (Bradshaw, 2019). Following the infamous Cambridge Analytica scandal, the Canadian government spent heavily on improving election security. In 2011, Canada had faced a tech-based electoral challenge dubbed the 'robocall scandal', in which automated calls were used to redirect citizens away from polling stations. However, no legislation protects the privacy of voter data, as the existing privacy legislation is applicable only to personal data and enforceable against government institutions, a term from which political parties have been excluded (Bradshaw, 2019). Since micro-targeting of misleading advertisements persisted during the 2019 elections, this remains a cause for concern regarding people's susceptibility to misinformation. The two primary contenders in the election were the Liberal Party and the Conservative Party. The incumbent Prime Minister, Justin Trudeau, hails from the former and ultimately won his second term in 2019. A third party, the Green Party, relying on promises of environmentally conscious policy-making as its campaign strategy, managed to capture only about 2% of the total votes.

Use of misinformation and computational propaganda in the 2019 elections. The presence of bots during the elections is certain, with messages posted by them comprising at least 13% of all tweets relating to the 2019 elections (Rheault, 2020). Both the Conservative and Liberal Parties employed bots, but the density was higher in Conservative networks (Rheault, 2020).
Bots appeared to behave much like the average Canadian social media user in their link-sharing behaviour, mostly sharing widely disseminated links from mainstream news media (Rheault, 2020). Upon identifying the geographical location of the bots, some were found to be Russian and had been posting about Canadian issues since 2017 (Bradshaw, 2019). During the campaigning period, a picture of Justin Trudeau in brownface was published by Time magazine (Kambhampathy, 2020), causing controversy and ultimately resulting in Trudeau apologising for his actions. This was certainly an event with the potential to generate polarizing misinformation, but such attempts were rarely detected by researchers. While the sentiments in messages shared by bots were decidedly more negative than those shared by ordinary users (Rheault, 2020), their popularity could not be attributed to any computational propaganda. An interesting angle on misinformation also emerges from the micro-targeted advertisements on social media paid for by the parties themselves. This was particularly noted in two Chinese-language advertisements identified by a study (Bennett, 2020). The Liberal Party ran an advertisement claiming that the Conservatives would legalize assault weaponry if elected, and the Conservative Party ran a false advertisement stating that the Liberal Party was planning to legalize hard drugs. Such advertisements, coming from the verified social media handles of party accounts, carry an air of veracity and may foment misinformation among Chinese-speaking voters who are not equally comfortable with English- and French-language political media.

Impact. Despite the high number of social media users, the use of bots to spread misinformation in Canada was found to have had little impact. The bots were generally used to amplify the social media discourse rather than dictate it, and still did not reach a level of engagement suggesting any disruption of electoral processes and debates.
In the brownface scandal involving Justin Trudeau, it was found that bots displayed more negative sentiments than human users but failed to generate activity that distorted the state of public discourse. The most salient piece of fake news, implicating Trudeau in a sex scandal, was released by a French outlet well known for circulating fake news (Chown, 2018). However, it did not garner great public attention or skew voter choices. Most studies have ruled out large-scale misinformation campaigns on social media during the Canadian elections. The limited effectiveness of the campaigns that did exist can be attributed to the proactive steps taken by the Canadian government prior to the election and to the recent experience of other developed countries such as the USA and the UK. Awareness of the Cambridge Analytica scandal may have made people less susceptible to misinformation, and the platforms' stance of promptly removing such posts and bot accounts could also have helped. However, there remains the concerning possibility that the sophistication of certain bot accounts allowed them to escape identification. This is the next step in the field of computational propaganda and cannot be ruled out.
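The bot-versus-human sentiment comparison reported by Rheault (2020) can be illustrated with a toy sketch. A real study would use a trained sentiment classifier; the word lists, scoring rule and sample messages below are invented purely for demonstration:

```python
# Toy lexicon-based sentiment scorer, standing in for the trained
# classifiers used in actual bot studies. Word lists are placeholders.
NEGATIVE = {"scandal", "corrupt", "disgrace", "liar"}
POSITIVE = {"hope", "progress", "honest", "great"}

def sentiment(text):
    """Positive-word count minus negative-word count for one message."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def mean_sentiment(messages):
    """Average sentiment score across a corpus of messages."""
    return sum(sentiment(m) for m in messages) / len(messages)
```

Comparing `mean_sentiment` over a bot corpus and a human corpus is the shape of the finding cited above: bot messages scored systematically more negative, even though their overall volume did not distort the discourse.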


4 Discussion

Impact on public perception of policy issues and political discourse. A primary technique used by political parties to promote their own narrative is the use of bots and algorithmic dynamics to drown out opposing views. This can particularly be seen in countries like Brazil and Italy, where certain candidates ran more prominent social media campaigns. Such manipulation of algorithms is evidenced in a study of the Italian elections, which noted a high volume of comments relative to shares on Facebook posts expressing opposition to the Lega and M5S parties, and the opposite trend, more shares than comments, on posts that supported them (Giglietto, 2018). This was a clever use of the site's algorithm, which rewards engagement by amplifying the reach of a post. The technique ensured that posts supporting these parties appeared first on users' feeds, and when opposing opinions appeared lower in the feed, the comments in support of the Lega and M5S parties reinforced the ideas already presented to the user through the initial article. While this would be perfectly legitimate if all the actors involved were human, studies found bots behind a large number of these activities, raising questions about the attempt to mould a narrative in an unethical manner. The same approach was used in Brazil, where bot tweets pushed narratives favouring Bolsonaro onto Twitter's Trending tab. Ultimately, this is the manufacturing of political capital and can be used to create a “false consensus” (Soon, 2018). Voters pay close attention to prevalent narratives in society and can be swayed by majority opinions. While this is accepted as the rationale and wisdom of voters in a democracy, the veracity of these decisions is questionable when the expression of support is dominated by accounts handled by the campaign or by bot accounts employed to deliberately mislead them.
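The comment-to-share asymmetry described above can be expressed as a simple ratio check over post metrics. The field names and threshold below are hypothetical illustrations, not figures from the Giglietto study:

```python
# Sketch of the engagement asymmetry noted in the Italian study: posts
# opposing a party drew unusually many comments per share (pile-on
# rebuttals), while supportive posts drew more shares (boosted reach).
def engagement_ratio(post):
    """Comments per share; high values suggest coordinated pile-on commenting."""
    return post["comments"] / max(post["shares"], 1)

def flag_pile_on_posts(posts, threshold=5.0):
    """Return ids of posts whose comment/share ratio exceeds the threshold."""
    return [p["id"] for p in posts if engagement_ratio(p) >= threshold]
```

A ratio alone cannot distinguish genuine controversy from bot activity; the study's inference rested on the bots identified behind much of the commenting, not on the ratio by itself.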
Another nefarious result, completely contrary to fruitful political discourse, is the cementing of ideological divides. This is particularly damaging when the narrative runs along communal or identity lines, as these issues touch on sociological cornerstones such as faith, culture and social groups. When hateful and visceral rhetoric is used to fuel anger, people become more polarized. This is how social media can also lead to the indoctrination of certain ideas. A further explanation of how social media and the internet fuel this problem is discussed in subsection 4.4.

Impact on the quality of public discourse. The biggest issue, as mentioned in the previous subsection, is the weaponization of social media to present a skewed narrative of public policy issues and political discourse to the average user. It is often used to amplify hate and dissent in an unconstructive manner. Social media platforms incentivize content that is short, explosive and attention-grabbing. Clickbait articles, fake headlines and doctored pictures often go viral even before the platform can take them down. More worryingly, the short attention span of users on social media leads to oversimplification of the issues discussed on these platforms. Headlines will often cover only the most newsworthy aspect of a policy, while long-form content exploring its intricacies is ignored altogether. In an attempt to regulate content, social media platforms have created algorithms to flag potentially hateful content using certain keywords. However, machine learning algorithms are not always intelligent enough to understand the contextual nuances of words and ideas. Such use of algorithms can be harmful to activism and constructive discussion of issues that involve these keywords, even when the discourse is well intended.

At its inception, social media was hailed as a democratizing force that would allow people to express their opinions and be heard freely. While this largely remains true, it is important to acknowledge the money and muscle power exercised by political parties in paying social media campaigners, data aggregators and misinformation networks to spread computational propaganda. In general, younger democracies with weaker institutions and greater inequality may be more susceptible to the pitfalls of social media use in electoral politics. While the United States elections and the Brexit referendum were the instances that made world news and drew public attention to this phenomenon, the socio-political landscape since then has forced governments and social media regulators to enact specific provisions against misinformation and propaganda in a concerted manner. In this regard, state capacity and the political awareness of citizens play a huge role in curbing the problem.
One reason Canada saw lower levels of misinformation campaigns and bots could be the proactive response to the issue by the then Prime Minister Justin Trudeau (Subverting Democracy to Save Democracy: Canada's Extra-Constitutional Approaches to Battling 'Fake News', 2019) and the use of soft law instruments that encouraged compliance by social media platforms. This could also have been influenced by the fact that Canada's neighbour, the United States, had seen its President allegedly implicated in the Cambridge Analytica scandal.
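The limits of keyword-based content flagging discussed earlier can be seen in a minimal sketch: a post condemning dehumanizing language trips the same filter as the abuse itself. The term list is a harmless placeholder, and real moderation systems use learned classifiers, which themselves remain imperfect at reading context:

```python
# Sketch of naive keyword flagging and its context blindness. The flagged
# term list is an illustrative placeholder, not any platform's actual list.
FLAGGED_TERMS = {"invader", "vermin"}

def naive_flag(text):
    """Flag a post if any word matches the keyword list, ignoring context."""
    words = set(text.lower().split())
    return any(term in words for term in FLAGGED_TERMS)
```

Both an abusive post and a counter-speech post quoting the same word are flagged identically, which is precisely why such filters can suppress activism and well-intended discussion.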

5 Impact on minorities and marginalized communities

As previously elucidated in subsection 4.1, coordinated misinformation campaigns and bots are used to create an illusion of mass support on politically divisive issues in order to sway the opinions of social media users. This is particularly damaging in the case of rhetoric used to stigmatise minorities and marginalised communities. For example, the immigration debate in the 2018 Italian election was rooted in sentiments of xenophobia and Islamophobia. Allowing campaigns to weaponize these prejudices for electoral gain harms the country as a whole, but minorities most of all, as discrimination becomes more acceptable when such views are seen as popular.


Secondly, the cost a minority bears in speaking up is immense in the face of the slurs and hate speech its members may receive, not only from real accounts but also from bots with a political agenda. The pre-existing lack of social capital is compounded by organised misinformation campaigns that intentionally vilify these communities, which is intensely problematic for free speech and political equality.

Impact on the decision-making process. The above discussion leads to the inference made in this subsection, which poses the most troubling implication for democratic institutions. Democracy, as a system, relies on the informed decisions made by the citizens who vote their representatives into power. However, in a post-truth society where emotional appeals may trump logical argumentation, social media acts as fertile ground for spreading misleading narratives. Most elements of misinformation and propaganda have been present in the media through the ages, but their potency is amplified on the internet and social media, not only because of greater reach and anonymity but also because of users' perceptions of and mindsets towards social media. Social media is typically used to communicate with friends and with those who share similar interests. Certain influencers gain large followings and users form parasocial relationships with them, but the intent of social media communication is largely apolitical. Even when political issues are discussed, most users tend to assume that they are communicating with accounts operated by ordinary people, when the account could quite easily be a bot operated by a corporate house with vested interests in a political campaign. Users are thus more likely to trust information disseminated by those they view in an informal light. This is what makes the manipulation less discernible than the propaganda techniques used by media houses, which people view at arm's length.
The personal nature of social media, when weaponized by electoral candidates, is illusory and preys on the trust and good faith of citizens who use it. Users enter echo chambers and filter bubbles unintentionally: as they engage with content, the platform's algorithm pushes similar content onto their feeds. While such customisation is necessary for the marketability of entertainment, it is wholly unsuitable for the dissemination of political information and for interaction between different political ideologies. Repeated exposure to the same ideology, especially in a polarized climate, entrenches certain attitudes. This skews people's perspectives, and reconciliation between the two sides of the spectrum becomes highly unlikely. When people have no chance to view criticism of their ideology, the result can be complacency and an unwillingness to question their own leaders. It also allows campaigns to paint the opposition in a derogatory light and ignore legitimate criticism. While this dynamic certainly exists in traditional media, there is a greater degree of intention in choosing to receive news from a particular media house. Many people, by contrast, do not actively search for political content on social media. They view the content shared by their friends and engage with it. This, in turn, signals a political preference to the algorithm, and their feeds become flooded with posts leaning to one side of the political spectrum. The social media user does not actively opt into the filter bubble; it forms gradually, so users may not be aware that they are being shown content that can bias them.

While the problem of imperfect information is pervasive regardless of technological development, it is integral to understand the new ways in which it can propagate in the digital age. Democracy relies on the decisions of voters, which depend on the nature and quality of the information they can access. Creating ecosystems in which voters can make informed decisions is the only path to political development. In that sense, exploiting social media to deepen the information vacuum is regressive and extremely detrimental to the essence of democratic choice. On the other side of the coin, the mass reach of social media is a blessing for information dissemination, as can be noted in every segment of human society, and its benefits to democracies have been unparalleled (United States District Court for the District of Columbia).

The path to regulation. At present, creating specific laws on social media campaigns faces the challenges of anonymity and the prevalence of disinformation networks. Locating these networks involves great cost and state capacity, making implementation impractical in poorer (and often more populous) democracies. Campaigns can also easily evade their ethical responsibility, as these measures are taken by third-party companies or simply by party supporters, and cannot reasonably be accounted for within election law.
From the above case studies, as well as global experience, it is clear that political parties will inevitably engage in computational propaganda and misinformation during elections in the digital age. Attempts to regulate misinformation have led to the creation of fake news laws in various countries, which routinely face criticism for their potential for misuse. It is particularly difficult to balance these laws against freedom of speech, and regulation of social media platforms can easily turn partisan, whether undertaken by government regulators or by the platform itself. Furthermore, taking down fake news is itself a Herculean task, especially during elections, when parties and other interested actors put great effort into pumping out misleading narratives. In a field that grows exponentially every year, there is also the danger of targeted legislation becoming archaic and easily circumventable. The growth of computational propaganda technology and techniques far outpaces the evolution of legislation, which in democracies takes a long time to formulate and pass. A soft law approach is therefore the most practical route at this juncture. This may be what worked in Canada, where Prime Minister Justin Trudeau repeatedly pushed for greater content regulation by social media operators, and Facebook and other platforms took proactive measures to take down fake news and other misinformation. Express mention of these issues by political leaders can also garner the attention of voters and make them more aware of the type of propaganda they may encounter on social media.

Policy Focus Points. From the case studies and discussion presented above, certain policy points for governments and social media platforms can be formulated. They are targeted at raising important considerations for policy-makers in applying existing principles and legislation to the social media context. Further legislation can be developed to strengthen the law in these areas without compromising freedom of speech, the cornerstone of democracy.

• Privacy of voter data under data protection laws or election laws;
• Development of robust fake news detection technologies and their widespread deployment;
• More intelligent machine learning programs that can differentiate contextual nuances;
• Addressing hate speech, especially against minorities and marginalised communities;
• Education of users on misinformation and computational propaganda through awareness campaigns and comprehensive dissemination.

6 Conclusions

Social media has become an inextricable part of human society, and political campaigns have inevitably moved to these platforms. Alongside the democratizing and equalising effect brought about by the internet, the use of social and political capital to manipulate public opinion has seen a definite rise. In all three countries studied in this paper, activity by bots and misinformation was noted, in line with the global trend. Information and perception are the two aspects of the public domain that can sway voters. Their manipulation has nefarious effects on voters' right to information and calls into question the legitimacy of the election itself. It also has far-reaching effects on polarization and on misperceptions of minority communities. While most research has not proven a definite causal link between computational misinformation and skewed election results, the ethical and pragmatic effects of misinformation cannot be ignored if the sanctity of the democratic mandate is to be maintained. Propaganda and disinformation tactics have been widespread throughout the history of our societies, but social media has made them more pervasive and effective in ordinary people's lives.



This paper offers a qualitative insight into the use of computational propaganda and disinformation tactics in three diverse democracies (Brazil, Italy and Canada), highlights the effects they can have on electoral integrity and suggests areas of reform for policymakers. The field requires further research, particularly on identifying the population groups most vulnerable to exploitation and manipulation. As social media becomes an integral part of the human socialisation process, computational propaganda is here to stay. It is essential to learn from global experiences and chart the course towards more equitable democracies.


References

Amazonaws. 2020. Far Right Networks of Deception: Avaaz Investigation Uncovers Flood of Disinformation, Triggering Shutdown of Facebook Pages with over 500 Million Views. [Online] 2020. [Cited: September 3, 2020.] y_2019.pdf.
Arnaudo, D. 2020. Computational Propaganda in Brazil: Social Bots During Elections. Oxford Blog. [Online] 2020. [Cited: September 3, 2020.]
Bennett, Colin and Gordon, Jesse. 2020. Understanding the “Micro” in Micro-Targeting: An Analysis of Facebook Digital Advertising in the 2019 Federal Canadian Election. s.l. : SSRN, 2020.
Bennett, W. L. and Livingston, S. 2018. The disinformation order: Disruptive communication and the decline of democratic institutions. European Journal of Communication, 33(2), 122–139. 2018.
Bradshaw, S. and Howard, P. 2018. Challenging Truth and Trust: A Global Inventory of Organized Social Media Manipulation. [Online] 2018.
—. 2018. Why Does Junk News Spread So Quickly Across Social Media? Algorithms, Advertising and Exposure in Public Life. Knight Foundation, Social Media & Democracy Series. 2018.
Bradshaw, S. 2019. Securing Canadian Elections: Disinformation, Computational Propaganda, Targeted Advertising and What to Expect in 2019. [Online] 2019. [Cited: September 3, 2020.]
Cantarella, Michele, Fraccaroli, Nicolò and Volpe, Roberto Geno. 2020. Does Fake News Affect Voting Behaviour? s.l. : CEIS Working Paper No. 493, 2020.
Chown, Marco, Jane Lytvynenko and Craig Silverman. 2018. A Buffalo Website Is Publishing ‘False’ Viral Stories About Justin Trudeau—And There’s Nothing Canada Can Do About It. The Toronto Star. 2018.
Eleitoral. 2015. Altera as Leis do Código Eleitoral. Pub. L. No. 13.165/15.
Bail, C. A., et al. 2018. Exposure to opposing views on social media can increase political polarization. s.l. : PNAS, 2018. 115, 37, 921.
Fabio Giglietto, Nicola Righetti, Luca Rossi and Giada Marino. 2020. It takes a village to manipulate the media: coordinated link sharing behavior during 2018 and 2019 Italian elections. Information, Communication & Society. 2020. DOI: 10.1080/1369118X.2020.1739732.
Giglietto, Fabio, Iannelli, Laura, Rossi, Luca, Valeriani, Augusto, Righetti, Nicola, Carabini, Francesca, Marino, Giada, Usai, Stefano and Zurovac, Elisabetta. 2018. Mapping Italian News Media Political Coverage in the Lead-Up to 2018 General Election. May 17, 2018.
Howard, P. and Hussain, M. 2013. Democracy's Fourth Wave? Digital Media and the Arab Spring. Oxford University Press. 2013.
Kambhampathy, A., Carlisle, M. and Chan, M. 2020. Justin Trudeau Wore Brownface at 2001 ‘Arabian Nights’ Party While He Taught at a Private School. Time. [Online] 2020. [Cited: September 3, 2020.]
Machado, C. 2020. WhatsApp's Influence in the Brazilian Election and How It Helped Jair Bolsonaro Win. [Online] 2020. [Cited: September 3, 2020.]
Medium. 2020. Computational Power: Automated Use of WhatsApp in the Elections. Medium. [Online] 2020. [Cited: September 3, 2020.]
Oxford Blog. 2020. News and Political Information Consumption in Brazil: Mapping the First Round of the 2018 Brazilian Presidential Election on Twitter. Oxford Blog. [Online] 2020. [Cited: September 3, 2020.]
Pollicino, Oreste and Somaini, Laura. 2020. Online Disinformation and Freedom of Expression in the Electoral Context: The European and Italian Responses. In Sandrine Baume, Véronique Boillet and Vincent Martenet (eds), Misinformation in Referenda. Forthcoming. [Online] 2020.
Rheault, Ludovic and Musulan, Andreea. 2020. Investigating the Role of Social Bots During the 2019 Canadian Election. [Online] March 2, 2020.
Soon, C. and Goh, S. 2018. Fake News, False Information and More: Countering Human Biases. IPS Working Papers No. 31. [Online] 2018. 918.pdf.
Karanicolas, Michael. 2019. Subverting Democracy to Save Democracy: Canada's Extra-Constitutional Approaches to Battling 'Fake News'. s.l. : Canadian Journal of Law and Technology, 2019.
Sunstein, C. 2017. #Republic: Divided Democracy in the Age of Social Media. Princeton University Press. 2017.
Tardáguila, C., Benevenuto, F. and Ortellado, P. 2020. Opinion: Fake News Is Poisoning Brazilian Politics. WhatsApp Can Stop It. New York Times. [Online] 2020. [Cited: September 3, 2020.]
United States District Court for the District of Columbia. United States of America v. Internet Research Agency LLC. s.l. : United States District Court for the District of Columbia.
Wooley, S. and Howard, P. 2020. Political Communication, Computational Propaganda, and Autonomous Agents. International Journal of Communication. 2020.



18 Manoeuvring AI in International Arbitration Ankita Bhailot Research Intern (Former), Indian Society of Artificial Intelligence and Law, India



Back in the 1950s, philosophers and early computer scientists attempted to capture human intelligence in the form of a structured program, and this effort coined the term 'Artificial Intelligence'; philosophy is still believed to have played a role in the field's advancement. In computer-science terms, Artificial Intelligence is the intelligence demonstrated by machines that emulate a human being's unique reasoning faculties. The last two decades have seen dramatic advances in information technology, which have driven a remarkable degree of innovation in products and services across industries. Yet international arbitration, though part of the legal services industry, has so far been comparatively unaffected by these developments. There are examples of the international arbitration community improving its services through technology: video-conferencing, e-disclosure, online platforms, and cloud-based tools. Since predictive analysis under AI-enabled systems can only be performed on past performance, an arbitration algorithm would also require data from live broadcasts of arbitral proceedings. Such models could then assist arbitral proceedings, for public or private use, at the discretion of the parties involved. Nonetheless, these and other comparable advances are incremental in nature, well known to most, and already practised by many arbitration practitioners and institutions. Nor can we ignore the presence of AI in our daily lives, whether in filtering spam emails, self-driving vehicles, digital assistants, writing newspaper articles, providing medical diagnoses, or connecting with friends.
Many of these applications will create and raise new legal issues for lawyers: for example, liability for autonomous vehicles, the lawfulness of lethal autonomous weapons, financial bots that may infringe antitrust laws, and the safety of medical robots. What better way to move towards a more organised future than through technology such as Artificial Intelligence (AI)?


AI & Glocalization in Law, Volume 1 (2020)

Law professionals tend to believe that AI will have a limited impact on their lives, yet AI has already touched many areas of law, including legal research, document review, and due diligence checks. AI can give lawyers insight into similar cases and help them answer client questions accurately; that is, AI can handle many humdrum tasks, which frees lawyers to spend more time on analysis, counselling, negotiations, and court appearances (Law Technology Today, 2019). AI represents both the greatest opportunity and the greatest threat the legal profession has faced since its inception, and these applications are only the beginning of a technology-driven disruption of the profession. In international arbitration, AI can be used in areas such as legal research; the drafting and proof-reading of written submissions; the appointment of arbitrators; case management and document organisation; translation of documents; hearing arrangements, transcripts, and foreign-language interpretation; the drafting of standard sections of awards; and cost estimation. Beyond these aspects, this paper also deals with a controversial core area of the arbitral process: the decision-making itself. It focuses on the important question whether, and how, the outcome of future decisions should be determined by probabilistic calculation based on past data. In the matter of judicial decision-making, many have argued over the feasibility of AI judges and robot arbitrators, but little examination has gone into the likely ramifications of the use of AI in this territory (José María de la Jara, 2017). Certain authors contend that the use of AI will be inescapable in the future, while others express wariness, essentially on the supposition that some "human factor" is necessary to guarantee empathy and emotional justice.
Accordingly, the discussion will focus on the cutting-edge developments that can make genuine progress in international arbitration.

2 Models available under Artificial Intelligence
In research today, machine learning for document classification under Deep Learning-based AI goes by many names, but the two most common are Technology-Assisted Review (TAR) and Predictive Coding (PC), which are used more or less interchangeably (Markarian, 2019). Among the best-known AI tools for cross-border dispute resolution, predictive coding captures relationships among numerous elements to evaluate risk under a specific set of conditions, measuring the data and interpreting it to the benefit of the party using it. The procedure begins with setting boundaries and identifying the documents that will form the sample set of records to be inspected. This sample is generally reviewed by a senior lawyer with detailed knowledge of the case. The system will ultimately tune the classification model to the point where it can "predict" what the


human reviewer would mark as "relevant" or "non-relevant", or, for document requests, as "responsive" or "non-responsive" (Claire Morel de Westgaver, 2020), thereby producing a predictive formula, i.e. the algorithm. This algorithm then runs through successive stages of review over selected sets of documents, either continuously or upon completion of a set's review. TAR systems rank documents as relevant/responsive on the basis of conceptual analytics; where the tool flags a weak document, a senior lawyer may review it to improve the algorithm. The sets of documents the algorithm identifies as relevant/responsive are then sampled for quality control. Support Vector Machine (SVM)-based predictive coding calculates a relevancy score ranging from 0 to 100; a document with a higher relevancy score is likely to be classified consistently, so a lower risk of error is associated with it. At some point, most e-discovery experts and researchers have experienced the burden of a slow-moving document review. The pressure to cut review time and control cost is a major reason e-discovery is fertile ground for the current blossoming use of artificial intelligence in law. Machine learning is an area of AI that enables computers to learn for themselves, without explicit programming. In e-disclosure, AI automation such as Technology-Assisted Review is helping legal teams dramatically speed up document review, as described above, and thereby significantly reduce expense. The outputs of TAR and PC systems feed e-discovery workflows such as setting search parameters for document review; this does not conventionally require AI, but using AI improves the speed, accuracy, and proficiency of document analysis.
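As a rough illustration of the scoring workflow just described, the sketch below stands in for a real SVM-based predictive-coding engine with a naive keyword-weight model: a seed set labelled by a senior lawyer "trains" the classifier, which then maps each corpus document to a 0-100 relevancy score. All names and data (`seed_set`, `relevancy_score`, the sample documents) are hypothetical, and the weighting scheme is a toy stand-in, not how an actual SVM is trained.

```python
from collections import Counter

# Seed set reviewed by a senior lawyer: (document text, relevant?)
seed_set = [
    ("breach of contract damages arbitration clause", True),
    ("arbitration award enforcement damages", True),
    ("office party invitation lunch menu", False),
    ("holiday schedule lunch menu staff", False),
]

def train_weights(seed):
    """Toy stand-in for SVM training: weight each term by how much
    more often it appears in relevant vs non-relevant seed documents."""
    rel, irr = Counter(), Counter()
    for text, label in seed:
        (rel if label else irr).update(text.split())
    vocab = set(rel) | set(irr)
    return {t: rel[t] - irr[t] for t in vocab}

def relevancy_score(text, weights):
    """Map a document to a 0-100 score, as in SVM-based predictive coding."""
    raw = sum(weights.get(t, 0) for t in text.split())
    # Squash the raw margin into the 0-100 band described in the text.
    return max(0, min(100, 50 + 10 * raw))

weights = train_weights(seed_set)
corpus = {
    "doc1": "the arbitration clause provides for damages",
    "doc2": "please rsvp for the office lunch party",
}
scores = {d: relevancy_score(t, weights) for d, t in corpus.items()}
# Documents above a threshold go to the "relevant" queue; the rest
# may be sampled for quality control, as described above.
relevant = [d for d, s in scores.items() if s >= 50]
```

A real engine would train on feature vectors rather than raw keyword counts, but the shape of the workflow (seed review, scoring, thresholding, quality-control sampling) is the same.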
What effect will Predictive Analysis Have?
Early Data Assessment - Early Data Assessment (EDA) workflows provide both early insight into what the corpus actually contains and reliable metrics on which to base strategic decisions, keeping spend below the estimated total cost. EDA allows parties to search, organise, and test a sample of electronically stored information (ESI) before the data is fully processed. This helps counsel shape litigation strategy and promotes cooperation and proportionality in legal research.
Governance - Predictive analysis is also being applied to support document retention policies for corporations and governments (Thomson Reuters Practical Law), laying the ground rules for how an organisation manages its documents from creation to destruction. A system can be trained to identify the relevant documents, which in turn identifies those that must be kept to support specific compliance obligations; documents lacking such support can be selected for deletion.
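The EDA sampling step described above can be sketched in a few lines: draw a random sample of the ESI before full processing and estimate the share of relevant material to ground early cost decisions. This is an illustrative sketch only; the corpus, sample size, and relevance check are hypothetical stand-ins for a real collection and reviewer judgment.

```python
import random

# Hypothetical corpus of electronically stored information (ESI);
# in practice this would be millions of documents.
corpus = [f"doc_{i}" for i in range(10_000)]

def eda_sample(corpus, n, seed=42):
    """Draw a reproducible random sample for early data assessment,
    before committing to full processing costs."""
    rng = random.Random(seed)
    return rng.sample(corpus, n)

def estimate_richness(sample, is_relevant):
    """Estimate the share of relevant material in the corpus from the
    sample, to inform strategy and proportionality arguments."""
    hits = sum(1 for doc in sample if is_relevant(doc))
    return hits / len(sample)

# Stand-in relevance check: pretend every 20th document is relevant.
sample = eda_sample(corpus, 500)
richness = estimate_richness(sample, lambda d: int(d.split("_")[1]) % 20 == 0)
```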



Developing legal performance - Throughout the arbitration process, AI can recommend, through predictive analytics, relevant drafting suggestions for arbitration clauses; assist lawyers with submissions; review agreements and documents; and help both clients and lawyers identify blind spots so as to bulletproof their interests. AI-infused products and services could also help legal advisers manage cases better, for instance by diagnosing inefficiencies and automating management tasks. Clients could likewise pre-screen a legal team's fit for a particular case and obtain a second opinion on their legal team's analysis.
Systematic adjudication services - Case-management tasks could be automated, or substantially streamlined with the aid of software, giving arbitrators more time to do what they do best, i.e. arbitrate, and tribunal secretaries might be replaced by AI decision-support systems. Summaries could be generated automatically to help readers navigate decisions, and perhaps one day an AI-enabled "arbitrator" could preside over a dispute, subject to the parties' consent to the appointment of such a "machine arbitrator". If the parties trust an AI-enabled arbitrator, who is to prevent them from using it, especially in arbitration, where freedom of choice is fundamental?


3 Judicial decision-making and preserving Confidentiality

The human brain often suffers "hardware impediments" that computer programs outperform with relative ease (Theodore W. Ruger, 2004). Several studies lend support to the proposition that computer programs are better than humans at predicting the outcome of legal decision-making. To most lawyers, AI programs predicting the outcomes of legal decisions appear unreasonable: they tend to hold the inherent belief that decision-making requires a cognitive ability involving an understanding of a party's disposition and, on that basis, a determination of the most valid outcome through a process of reasoning, a process they take to be unattainable by computer programs. Nonetheless, as discussed in the previous section, computer models can achieve "intelligent" results which, if performed by people, would be assumed to require high-level cognitive processes. In his 2008 book 'Nudge', Nobel laureate Richard H. Thaler (with Cass R. Sunstein) argued that "if irrationality can be predicted, human decisions can be nudged to maximize better outcomes." This raises the question of replacing the human arbitrator with an AI-enabled arbitrator, which would eliminate the dilemma involved in the selection of an arbitrator by the parties or by the court. The decision, however, would rest purely on the algorithms used in the program.


The power of deciding the case would then lie in the hands of the programmer, which would attract controversy and might lead to non-acceptance of the award, since the right to choose the arbitrator would no longer be available. Moreover, if the decision were based exclusively on algorithms, the purpose of delivering distributive and fair justice could be defeated: a human judge weighs various factors before passing an award, such as the means and condition of the parties, which an AI-enabled system is unlikely to consider during the decision-making process. Above all, any data-driven AI program requires access to information. AI models that rest on probabilistic forecasting are data-hungry: the larger the sample, the more precise the model's predictions. There will therefore always be a need for sufficient data to draw on. Case information, however, is often hard to obtain, above all where decisions are confidential and thus inaccessible to non-parties (White & Case, 2018). International commercial arbitration awards, for example, are generally not published, so constituting a database on which to build an AI model would prove difficult. In international arbitration, accordingly, AI programs are more likely to apply to international investment arbitration, which commonly raises a number of well-known recurring issues, than to international commercial arbitration, which regularly deals with varied and unique issues. The second inquiry concerns the model output. For AI-driven decision-making, two kinds of questions arise. The first concerns the data input: how far AI-based decision-making models require repetitive fact patterns, and whether they can handle issues that are complex and non-repetitive.
In general, the more anomalous or non-repetitive the issues, the more difficulty an AI model will face.
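The point about sample size can be made concrete with the standard error of a proportion: two award databases can show the same claimant success rate, but the larger one yields a far more precise estimate, which is why the confidentiality of awards constrains model-building. The counts below are hypothetical.

```python
import math

def success_rate_estimate(wins, total):
    """Estimate claimant success probability from past awards, with a
    rough standard error: larger samples give tighter estimates."""
    p = wins / total
    se = math.sqrt(p * (1 - p) / total)
    return p, se

# Hypothetical counts of claimant wins in two databases of awards.
p_small, se_small = success_rate_estimate(6, 10)      # 10 published awards
p_large, se_large = success_rate_estimate(600, 1000)  # 1,000 published awards
# Same observed rate (0.6), but the larger sample's standard error is
# an order of magnitude smaller, so its prediction is far more precise.
```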

4 Strengths and Limitations of Artificial Intelligence in Cross-border Dispute Resolution
Access to open courts and tribunals is fundamental to the rule of law. Backed by the power of the state, courts and tribunals enable citizens and organisations to defend or secure their rights and legitimate interests, and ultimately provide the means to hold government accountable. Effective access to justice includes the ability to access procedures for resolving disputes and rights claims that lead to binding remedies reflecting the merits of cases under the substantive law, through procedures that are demonstrably fair and seen to be so. The legitimacy of the system, the reason people generally do what judges tell them without the need for enforcement, rests on public respect for, and trust and confidence in, that system. The



effect of AI on this justice framework will be significant, as AI can be blended with existing adjudicatory or non-adjudicatory procedures, and questions have been raised about whether these procedures will affect the role of lawyers and judges, since any such innovation replaces some human decision-making and analysis. AI processes, if applied in legal frameworks, could challenge this mastery of legal decision-making. It seems well accepted that the impact outside the justice sector is likely to be substantial: various AI forecasts, alongside other advances, imply that present patterns of employment will no longer exist in twenty years, as many current tasks are taken over by AI-supported processes. There has so far been little discussion of more senior roles in the legal sector, and of whether these developments, including the creation of "Judge AI", will mean that judicial work changes, with some adjudicators replaced by newer technologies. Some computer models are rule-based, using causal logic and deductive reasoning, since they apply pre-established rules coded into the algorithm to the observable data. Other AI models have different characteristics: neural networks, in particular, typically have no pre-defined rules. Deductive, causal reasoning is there replaced by an inductive approach, because the AI program extracts the algorithm from the observable data. Rather than applying logic, the AI model computes probabilities, for example the likelihood of a given outcome on a given set of facts. Further, the legal prediction studies discussed above generally use binary classification as the output task.
While it is true that many legal questions of fact or law can be reduced to a 0/1 or yes/no binary task, the difficulty is that there will be a large number of such binary tasks in each case, and deciding each of them will be case-specific. For an AI model to extract the required patterns and rules from the data, having one clear output question facilitates the model-building process itself. In systems where the algorithm is coded by a human developer, errors will often lie in the design of the algorithm itself and can be corrected once identified (Smith, 2020). In learned systems, by contrast, errors typically result from the input data and are considerably harder to detect and fix. Concealing sensitive features in the input data, such as ethnic background or geographical origin, might be thought to help forestall such problems. Even where the sensitive features are hidden, however, algorithms may still implicitly reconstruct them from proxy variables.
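The proxy problem mentioned above can be illustrated in miniature. In the hedged sketch below, a toy one-rule classifier is trained on outcomes using only postcode, with the sensitive "group" field hidden from it, yet its predictions line up exactly with the hidden feature, because postcode acts as a proxy for it. The records are invented for illustration.

```python
# Illustrative records: the sensitive feature ("group") is hidden from
# the model, but postcode correlates with it perfectly.
records = [
    {"postcode": "A", "group": 1, "outcome": 1},
    {"postcode": "A", "group": 1, "outcome": 1},
    {"postcode": "A", "group": 1, "outcome": 0},
    {"postcode": "B", "group": 0, "outcome": 0},
    {"postcode": "B", "group": 0, "outcome": 0},
    {"postcode": "B", "group": 0, "outcome": 1},
]

def rule_from_data(records):
    """'Train' a one-rule classifier on outcome by postcode only;
    the sensitive 'group' field is never shown to it."""
    rates = {}
    for r in records:
        rates.setdefault(r["postcode"], []).append(r["outcome"])
    return {pc: sum(v) / len(v) >= 0.5 for pc, v in rates.items()}

rule = rule_from_data(records)
predictions = [rule[r["postcode"]] for r in records]
# Although "group" was masked, the predictions agree with it exactly:
# the classifier has implicitly reconstructed the hidden feature.
agreement = sum(int(p) == r["group"] for p, r in zip(predictions, records))
```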


5 Future of Justice: Real-time Courts & Online Dispute Resolution
Under various technological pressures, the role of a new-age judiciary needs to be rethought, and this is linked to the formation of a new court environment. Weighing the everyday issues and disputes we encounter, few individuals actually end up in court or before a tribunal. Research and legal statistics show that in the great majority of district-level legal disputes the initiator is a business or organisation rather than a person. Except in personal injury proceedings, most people experience court procedures as respondents rather than as claimants, and the access-to-justice case for online courts and ODR should be seen in that context. AI researchers have had a number of clear successes outside the legal field, and these suggest that predictive analysis can succeed even where there is wide variation and novelty. Not long ago, DeepMind researchers at Google successfully trained an AI program, AlphaGo, to play the complex game of Go at a higher level than the reigning European champion. The program was trained with general-purpose supervised and reinforcement learning methods, its neural networks learning directly from gameplay (Maddison, 2016). While law is more complex than any game, these successes suggest that a Judge AI could learn to apply the law by understanding legislation and case law, and that the application of those standards to factual circumstances is ultimately achievable. Given the current expansion of AI in fields outside the legal arena, it seems probable that sophisticated Judge AI could be developed within the next decade. As machine learning merges with advanced predictive analytics, the position of Judge AI will be stronger than it is at present.

6 Conclusions
We now stand on the brink of significant change. To put this in a wider setting, it can be viewed as part of a worldwide pattern that is especially prevalent in common-law jurisdictions. Although it is fashionable, in the discourse around innovation and the courts, to suggest that courts have not changed since the 19th century, courts and tribunals have in fact been evolving continuously. While some procedural issues remain surprisingly impervious to change, the process of improvement and reform has been fairly constant. The landscape of dispute resolution has likewise adjusted, with the advent of a plethora of private dispute-resolution mechanisms now available, including mediation, conciliation, arbitration, ombudsman schemes, and ODR (Online Dispute Resolution).



While we consider the ways in which the court system is said to be weakening, it is essential to remind ourselves what, despite capital constraints, it presently does well, and to ensure that its fundamental values and qualities are carried over, as far as possible, into the system to come. These include, among other things, transparency, trustworthiness, impartiality, fair process, and substantive justice administered by an incorruptible and experienced judiciary performing to the highest standards. In his 2019 book 'Online Courts and the Future of Justice', Prof. Richard Susskind suggests that we should transform our court system because doing so will improve access to justice, and that we should do so even if the transformation is far from perfect: modernising and reshaping our dispute-resolution procedures to give greater access to those who feel excluded from the open justice system, and greater convenience to those currently battling on without representation. With the internet used to create and communicate data, we now have massive, complex data sets that give insight into processes and behaviour. In the current phase of development, artificial intelligence, or "increasingly capable computers", analyses big data to predict, decide, and teach itself, so that it autonomously increases its own capability. It used to be thought that the potential of computers was constrained by the human inventiveness and minds of their programmers. That appears to be wrong: increasingly capable machines can not only "think" but can beat human specialists using "brute-force processing".

References



Claire Morel de Westgaver, Olivia Turner. 2020. Artificial Intelligence, A Driver For Efficiency In International Arbitration – How Predictive Coding Can Change Document Production. Kluwer Arbitration Blog. [Online] February 23, 2020.
José María de la Jara, Daniela Palma, Alejandra Infantes. 2017. Machine Arbitrator: Are We Ready? Kluwer Arbitration Blog. [Online] 2017.
Law Technology Today. 2019. Three Ways Law Firms can Use Artificial Intelligence. Law Technology Today. [Online] 2019.






Maddison, Christopher. 2016. Mastering the game of Go with deep neural networks and tree search. s.l. : Research Gate, 2016.
Markarian, Josh. 2019. AI & Machine Learning Within Document Review. Teris. [Online] February 24, 2019.
Smith, Andrew. 2020. The Biggest Software Failures in Recent Years. DZone. [Online] January 6, 2020.
Theodore W. Ruger, Andrew D. Martin & ors. 2004. The Supreme Court Forecasting Project: Legal and Political Science Approaches to Predicting Supreme Court Decisionmaking. s.l. : Columbia Law Review, 2004.
Thomson Reuters Practical Law. Document Retention Policy. Thomson Reuters Practical Law. [Online]
White & Case. 2018. 2018 International Arbitration Survey: The Evolution of International Arbitration. s.l. : Queen Mary University of London & School of International Arbitration, 2018.



19 Regulation of algorithmic trading, scams and flash crashes Sameeksha Shetty Junior Associate Editor, Indian Society of Artificial Intelligence and Law, India

Abstract. Algorithmic trading is rapidly gaining popularity all around the world, with the majority of trades in more developed markets now executed through complex algorithms. Varied economic implications follow the rise of algorithmic trading, and specifically high-frequency trading, and a new array of manipulative practices, scams, and market fluctuations arises with it. Because this fundamental shift in market structure affects the lives of citizens, it is imperative that we carefully evaluate the evolution of markets in the modern era (High-frequency trading and the flash crash: structural weaknesses in the securities markets and proposed regulatory responses, 2012). Through this paper, I seek to analyse algorithmic trading and its economic effects in order to arrive at the most effective regulatory methods and measures for the prevention of algorithmic trading scams and flash crashes. To do so, the history of the development of algorithmic trading in various markets is analysed, and the presence of algorithmic trading and its effects are assessed. The paper also examines the occurrence of adverse events, such as flash crashes, and the role of algorithmic trading in their manifestation. Regulators' responses to such events, and the effectiveness of those responses, are examined to arrive at optimal regulatory strategies and alternative suggestions concerning regulation. The paper also envisages the ways in which the policing and regulation of algorithmic trading may develop in the years to come, given current market trends. It explores the development and regulation of algorithmic trading specifically from an Indian perspective, dealing with the Indian regulatory regime and analysing it to gauge its effectiveness.
Algorithmic trading was permitted in India only in 2008, so the market is still developing and continues to grow at a rapid rate. The paper traces the development of regulation, as well as adverse events, over the last decade and studies their effects on the market. This makes it possible to determine the effectiveness of regulation, as well as of responses to market

adversities. A cross-jurisdictional comparative approach is adopted to analyse differences in regulatory strategies and market fluctuations across markets and so determine the effectiveness of Indian regulations. The possibility of overregulation, and its consequences, is also observed, and analysis of, and suggestions on, the Indian regulatory regime with respect to algorithmic trading are presented. Further, the paper observes the presence of algorithmic trading and its use in global markets, specifically foreign exchange markets. It discusses the existing regulations in such markets and demonstrates the need for a global regulatory authority, along with the possibility, and the challenges, of forming such an authority. Keywords: Algorithmic Trading, Disruptive Technology, Fintech.



Algorithmic trading or "algo-trading" is becoming increasingly popular in today's markets. This new form of trading is vastly different from traditional methods and has several economic implications, both positive and negative. It also leads to the creation of new forms of market manipulation. The regulation of such trading is thus a complex task, and regulatory strategies are still being developed. In India, algorithmic trading entered the markets fairly recently compared to more developed markets, and its development is examined here. Through this paper, I seek to analyse the economic effects of algorithmic trading and the regulatory strategies to be adopted in respect of it, specifically from an Indian perspective.

5. Algorithmic trading and Its History
Algorithmic trading is a relatively recent system of trading which uses advanced mathematical tools to facilitate decision-making in financial market transactions (2020). It entered stock markets in the 1980s (Oberoi, 2018) and has since gained popularity to become one of the primary tools used in trading. Algorithmic trading provides obvious benefits such as speed and accuracy and can be used to gain significant advantages in the market. According to some, algo-trading is a prerequisite for survival in the financial markets of the future (Aggarwal, 2019). Today, close to 50 per cent of exchange volumes in the futures and options segment, and over 30 per cent in the cash market, happen through algorithms (Oberoi, 2018). The complexity of algorithms has evolved over the years. High-Frequency Trading ("HFT") is one of the most popular forms of algorithmic trading practised today, which tends to lead to the two terms being used as synonyms; HFT is in fact a subset of algorithmic trading that involves the use of powerful computers to process market information (HFT and Market Quality, 2014). HFT has been defined as a sophisticated tool used to pursue different strategies, ranging from market making to arbitrage (IOSCO, 2011). As the use of HFT in trading increases, regulation needs to be directed not only at algorithmic trading in general but at HFT specifically. For the purposes of this paper, I will primarily consider HFT while analysing the effects of algorithmic trading, as it is the most prevalent form of algo-trading practised today. Since entering the market in the 1980s, algorithmic trading has grown at an increasingly fast pace on the back of the technological revolution, to become the most prevalent form of trading: in developed markets such as the US, 70-80% of trades are executed through algorithmic trading (Khandelwal, 2019). The surge in its popularity can be gauged from its penetration of various markets. A report by Thomson Reuters estimated that algorithmic trading systems were responsible for 75% of global trading volume and projected a compound annual growth rate of 10.3% between 2016 and 2020 (Kant, 2019). Further, the market is likely to grow from $11.1 billion in 2019 to $18.8 billion by 2024, expanding at a compound annual growth rate of 11.1% (Aggarwal, 2019). The speed at which algorithmic trading is growing suggests a likely possibility of it completely replacing traditional forms of trading in the near future. An important observation, however, concerns the users of algorithmic trading: although the majority of trades in the US market are done through the algorithms of HFTs, HFT firms represent only 2 per cent of the approximately 20,000 firms in operation (The regulation of high-frequency trading: A pragmatic view, 2015). A multitude of factors may contribute to this trend, including companies' technological infrastructure, estimated benefits, and regulatory compliance requirements.
To promote fairness in the market by enabling the use of algorithmic trading or HFT by smaller traders, it is necessary that regulatory authorities promote measures for testing and development as well as establish comprehensible and attainable compliance requirements. The securities markets in the Asia-Pacific region now account for more than one third of global market capitalization (Algorithmic and high frequency trading in Asia-Pacific, now and the future, 2019). The entry of algorithmic trading into Indian markets was delayed compared to more developed markets, but it has since picked up speed. SEBI permitted automated trading in India in 2008, but it gathered momentum only after co-location was introduced in 2010, after which it grew rapidly to currently constitute an estimated 50% of total trading volume (Khandelwal, 2019) (Aggarwal, 2019). The Indian market has crossed the halfway mark of US and European market levels within a decade, owing to lower costs of technology, access to computing power and the availability of skilled resources, which likely helped accelerate the transition (Oberoi, 2018). Because the use of algorithmic trading in Indian markets is a recent phenomenon, the regulations concerning such trading are still developing.


6. Economic Effects of Algorithmic Trading

It is inevitable that algorithmic trading has varied effects on the market, given its highly advanced capabilities compared to traditional human-based methods. It has both positive and negative implications, and effective regulation can operate to keep its negative effects at bay. In this section, I shall analyse in detail the various economic effects of algorithmic trading and HFT on the market. HFT has a number of significant advantages over traditional trading mechanisms, including speed, feasibility and increased liquidity in the market. Quicker reaction time is one of the primary benefits of algorithmic trading, enabling it to spot arbitrage opportunities and execute trades in split seconds to make a profit even before a human trader blinks (Kant, 2019) (Aggarwal, 2019). The increase in speed afforded by algorithmic trading and enhanced technology enables the execution of over a million trades in a day and permits complicated automated trading strategies, such as index arbitrage, at minimal marginal cost (High-frequency trading and the flash crash: structural weaknesses in the securities markets and proposed regulatory responses, 2012) (High speed trading begets high speed regulation: SEC response to flash crash, rash, 2010). The speed at which trades can be completed through algorithmic trading and HFT is one of the most evident benefits of algorithmic trading. However, the speed at which transactions occur makes the monitoring and regulation of these trades a challenging task. Algorithmic trading also eliminates human error, as computerized calculations provide accurate statistical and mathematical results. Studies have also linked algorithmic trading to increased liquidity in the market.
Algorithmic trading has, however, also been linked to a number of adverse occurrences, the most notable being flash crashes, which cause a rapid downturn in the market followed by a quick recovery. The biggest concern of market participants is that such crashes could lead to a recession in the near future (Kant, 2019). Market breakdowns are defined as days on which a stock plummets by more than 10% of its price and subsequently reverts to within 2.5% of that price by the end of the day (Algorithmic and high frequency trading in Asia-Pacific, now and the future, 2019). It can be observed that developers of algorithmic trading strategies have the power to substantially affect trends in the market. Most hedge fund managers today opine that algorithms have rigged the market (Kant, 2019). This can lead to distrust and fear concerning algorithmic trading, indicating a pessimistic or sceptical attitude that could have a negative impact on the market. Increased access to resources and dissemination of information concerning algo-trading should be undertaken to ensure its integration and acceptance in the market. Effective regulatory controls on such trading could also help promote confidence in this method of trading.
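The "market breakdown" definition quoted above (an intraday fall of more than 10% that reverts to within 2.5% of the price by the close) is mechanical enough to express in code. The sketch below is an illustrative reading of that definition, not a function from any real surveillance system.

```python
# Minimal sketch of the "market breakdown" (flash-crash-like day) definition:
# the stock falls more than 10% below its opening price at some point during
# the day, yet closes back within 2.5% of that opening price.

def is_breakdown_day(open_price: float, intraday_prices: list) -> bool:
    close_price = intraday_prices[-1]
    max_drawdown = min(intraday_prices) / open_price - 1  # most negative move
    reverted = abs(close_price / open_price - 1) <= 0.025
    return max_drawdown < -0.10 and reverted

# A 12% intraday plunge that recovers to 1% below the open qualifies:
print(is_breakdown_day(100.0, [99.0, 92.0, 88.0, 95.0, 99.0]))  # True
```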



However, bigger traders can influence market trends through reflexivity. Reflexivity is when the very act of observing or measuring something changes it: if an influential market participant says the market is bullish, it makes people buy, and that buying makes it more bullish. Technology amplifies reflexivity (Shenoy, 2020). With newer, more advanced technology, a new array of scams and bad market practices emerges. With the increase in algo-trading, an increase in high-frequency spoofing and scams can be observed. These practices, if not regulated or monitored properly, can result in large-scale economic consequences. For example, the 2010 flash crash on the NYSE was said to have been exacerbated by spoofing. Manipulative practices such as quote-stuffing and spoofing have emerged with the increase in algorithmic trading. In quote-stuffing, HFT traders send a vast number of orders that are subsequently cancelled to targeted exchanges in order to reduce the speed at which exchanges can inform other traders (Limit Up–Limit Down: an effective response to the “Flash Crash”?, 2016). In spoofing, order display is manipulated by submitting large orders near the current price with the intent to cancel them immediately (Limit Up–Limit Down: an effective response to the “Flash Crash”?, 2016). Owing to manipulative practices such as quote-stuffing and spoofing, the contribution of HFT to liquidity is questionable (HFT and Market Quality, 2014). The rise of spoofing and similar practices also increases the cost of regulation. It has been observed that manipulative HFT can cause more damage than manipulative non-HFT because of its superfast order placement and cancellation, and that it has the potential to create higher volatility (Limit Up–Limit Down: an effective response to the “Flash Crash”?, 2016). By its very nature, manipulative HFT can cause systemic risk (Limit Up–Limit Down: an effective response to the “Flash Crash”?, 2016).
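One simple surveillance heuristic suggested by the quote-stuffing pattern described above is to flag participants whose order flow is dominated by rapidly cancelled orders. The threshold and minimum order count below are illustrative assumptions, not figures from any regulator or exchange.

```python
# Illustrative heuristic: a participant sending a very large number of
# orders, almost all of which are cancelled, matches the quote-stuffing
# pattern described in the text. Thresholds are assumptions for the sketch.

def cancellation_ratio(orders_sent: int, orders_cancelled: int) -> float:
    return orders_cancelled / orders_sent if orders_sent else 0.0

def flag_possible_stuffing(orders_sent: int, orders_cancelled: int,
                           threshold: float = 0.95) -> bool:
    # Require meaningful volume before inferring anything from the ratio.
    return orders_sent > 1000 and \
        cancellation_ratio(orders_sent, orders_cancelled) >= threshold

print(flag_possible_stuffing(50_000, 49_500))  # True: 99% cancelled
print(flag_possible_stuffing(500, 480))        # False: too few orders
```

A real surveillance system would of course also weigh order timing, price levels and message rates; this sketch only captures the cancellation-dominated flow the paragraph describes.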
The speed of manipulative HFT, leading to greater instability in the market, will also complicate the task of regulators, as new mechanisms capable of reacting to such fast market changes will have to be established. Preventive, ex ante regulatory measures may therefore be more effective in regulating such practices. Some flash crashes have been analysed further to examine the role of algorithmic trading as their causal factor. In October 1987, a few years after the use of algorithmic trading commenced, stocks started to fall heavily. This decrease triggered portfolio insurance algorithms, which automatically sold shares in response, taking prices further down, until the market closed more than 22% lower in a single day (Shenoy, 2020). The 2010 flash crash in the US is one of the most famous flash crashes whose causes have been analysed to study the effects of HFT and algorithmic trading. It was caused by a multitude of factors. The flash crash occurred on May 6, 2010, when the Dow Jones Industrial Index slumped approximately one thousand points within minutes, losing almost 9% of its value. It wiped around $862 billion off the American stock market (Kant, 2019). However, the rebound from the crash also occurred within minutes (Limit Up–Limit Down: an effective response to the “Flash Crash”?, 2016). The trigger was a 4-billion-dollar sell order placed by Waddell & Reed, a US mutual fund (Kant, 2019). Subsequently, HFT traders repeatedly bought and sold e-mini contracts, resulting in a hot-potato effect, with two hundred contracts traded more than 27,000 times in just 14 seconds (Kant, 2019). A London-based futures trader was arrested following allegations of market manipulation such as spoofing, specifically high-speed dynamic spoofing, which contributed to the crash (Limit Up–Limit Down: an effective response to the “Flash Crash”?, 2016). While it has been recognized that HFT exacerbated market volatility, it did not trigger the Flash Crash (Kirilenko, et al., 2014). The Commodity Futures Trading Commission and the US Securities and Exchange Commission (SEC) identified the causes behind the flash crash in a joint report and concluded that the actions of high-frequency traders contributed to volatility during the crash. However, the market would not have recovered so quickly were it not for the actions of high-frequency traders (The regulation of high-frequency trading: A pragmatic view, 2015). It has been pointed out many times that HFT was not the cause of the 2010 flash crash. However, it is important to acknowledge that the crash would not have occurred but for the presence of algorithmic trading: HFT exacerbated the impact of the initial downfall. While it may not be the direct cause of these crashes, it has unarguably exposed the market to more complex manipulative practices and scams, and has also increased volatility, which contributes to such incidents. Thus, regulation catered to the possible effects of algorithmic trading is important. Although computerized trading mechanisms can be seen as a mere catalyst, the unique effects they produce, unlike traditional trading mechanisms, require unique regulation. As seen above, algorithmic trading can lead to wild fluctuations unless effectively monitored.
HFT may not have been the main cause of the crash, and it contributed largely to the immediate recovery, but flash crashes simply could not occur without HFT acting as a catalyst. The SEC developed rules in response to the crash under which exchanges were required to issue five-minute trading halts in a security if its price fluctuated by more than 10% in a five-minute period (High speed trading begets high speed regulation: SEC response to flash crash, rash, 2010). These rules, however, overlook the possibility of there being a good reason for the fluctuations; implementing them could prove intrusive and hamper trades. Rules should be sufficiently tested before they are permanently enacted, and authorities should be cautious not to employ a one-size-fits-all threshold, as it can prove ineffective for some securities and intrusive for others (High speed trading begets high speed regulation: SEC response to flash crash, rash, 2010). This market crash was made possible by growing interconnectivity in securities markets and the surge in high-frequency trading, which can destroy trillions of dollars of wealth before human traders can react (High-frequency trading and the flash crash: structural weaknesses in the securities markets and proposed regulatory responses, 2012). Such trading can be seen to render the market sensitive to changes. Since the May 2010 flash crash, hundreds of mini flash crashes and more serious incidents have occurred (Kant, 2019). The 2018 flash crash featured an abrupt plunge in S&P 500 E-mini contracts (Kant, 2019). This shows that the regulatory response to the 2010 flash crash failed to mitigate the risks posed by algorithmic trading; further action is required to monitor and prevent such incidents in the future. The Indian market has not experienced many flash crashes compared to developed markets (Oberoi, 2018). However, algorithmic trading has been present in the Indian market for only a little over a decade, and its further growth will determine whether the market is stable and whether regulations have effectively curbed flash crashes and similar occurrences. There have been some mini flash crashes in the Indian market, and SEBI has responded to these events to ensure a more stable market. In 2018, the Bombay Stock Exchange (“BSE”) Sensex tumbled about 280 points after investors were jolted by an over 1,000-point plunge in afternoon trade; the benchmark suddenly tanked 3.03 per cent before staging an equally sharp recovery, finally ending lower by 279.62 points (PTI, 2018). In a Diwali Muhurat trading session, prices in certain derivatives plunged by nearly 20% due to faulty algorithmic trading software at a broker, and all trades on that day were annulled (Shenoy, 2020). The occurrence of flash crashes and market fluctuations as a result of HFT and algo-trading shows that these new methods of trading have made the market highly volatile. These occurrences raise concerns about the effect of algorithmic and high-frequency trading on market stability. As mentioned earlier, there has also been a surge in manipulative practices with the growth of algo-trading.
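The SEC rule described earlier, a five-minute trading halt in a security whose price moves more than 10% within a five-minute window, can be sketched as a rolling-window check. The data shapes and function names below are illustrative assumptions, not the SEC's actual implementation.

```python
# Sketch of a rolling-window trading-halt check: halt a security if its
# price has moved more than `threshold` within any `window_s`-second window.
from collections import deque

def should_halt(ticks, window_s=300, threshold=0.10):
    """ticks: iterable of (timestamp_seconds, price). Returns True if any
    rolling window of window_s seconds sees a move beyond threshold."""
    window = deque()
    for t, price in ticks:
        window.append((t, price))
        # Drop ticks older than the window.
        while window and t - window[0][0] > window_s:
            window.popleft()
        reference = window[0][1]
        if reference and abs(price / reference - 1) > threshold:
            return True
    return False

# An 11% drop inside four minutes triggers the halt:
print(should_halt([(0, 100.0), (120, 96.0), (240, 89.0)]))  # True
```

As the text notes, such a one-size-fits-all threshold halts a security even when the move has a legitimate cause, which is precisely the criticism raised against the rule.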
The National Stock Exchange (“NSE”) co-location scam is one such example. The stock exchange provided data to brokers in a round-robin mechanism, sending it first to whichever broker connected to its trading system first, then to the second broker, and so on; this mechanism was manipulated to obtain illegal gains (Shenoy, 2020). After letters were received from a whistle-blower, SEBI formed a panel under its Technical Advisory Committee, which found that the architecture of the NSE was prone to manipulation and market abuse (FE Online, 2017). As can be observed, algorithmic trading, if not effectively tested and monitored, can have negative implications for markets and for companies undertaking trading activities through such mechanisms. Manipulative algorithms can cause considerable damage to the market, and regulating them is a challenging task owing to their speed. New methods of regulation, such as computerized regulation, should be explored to prevent such adversities. Regulatory strategies have to be developed to effectively counter such manipulative practices and to monitor the activities of algorithms in the market.

III- Regulation of Algorithmic Trading


High-frequency trading firms owe much of their success to the fact that they operate within a largely unregulated niche of the market (High-frequency trading and the flash crash: structural weaknesses in the securities markets and proposed regulatory responses, 2012). This indicates a pressing need for regulation of algorithmic trading, though it could also be inferred that overregulation could prove disastrous for the growth of algo-trading. Over the years, regulators around the world have imposed new regulations to control the economic effects of algorithmic trading. Effective regulation is key to maintaining market stability and keeping manipulative algorithms at bay. In this section, regulations imposed by Indian authorities shall be analysed to gauge their effectiveness and identify possible areas of improvement. In the US, the SEC found that poor risk controls in HFT, the lack of stringent processes adopted by firms for the development and deployment of code, and a number of out-of-control algorithms on the market were among the factors that contributed to the 2010 flash crash (The regulation of high-frequency trading: A pragmatic view, 2015). These factors should be considered while analysing regulations. In India, algo-trading was permitted in 2008. The two major stock exchanges, the NSE and the BSE, have outlined different prerequisites for obtaining approval for algorithmic trading. The NSE mandates that members submit an algo-undertaking and a Computer-to-Computer Link software (“CTCL”) undertaking, a network diagram and a board resolution duly signed on stamp paper and notarized, together with a vendor confirmation letter.
Further, a member has to apply for NEAT IDs in the respective segments in which it wants to trade and upload its location code details along with dealer details under the CTCL ID (2018). The steps to be completed for BSE approval for algorithmic trading are: obtaining IML IDs from the BSE for placing orders, submitting an intimation letter to the BSE in the case of approved algorithms, and uploading a 16-digit code along with dealer details (2018). SEBI has developed and amended the restrictions imposed on algo-trading on a regular basis since it was permitted in 2008. In 2012, a SEBI circular prescribed that all algorithmic orders be tagged with a unique identifier code provided by the stock exchange in order to establish audit trails (SEBI, 2012). SEBI also announced two regulatory measures in response to the 2012 flash crash: the first is a volume limit of Rs. 100 million on any order to be accepted, and the second is a dynamic price limit of 10 per cent on each stock (SEBI, 2012). The volume limit is innovative, since major stock markets do not enforce similar regulations. It guards against human error and also prevents sudden price changes due to excessively large orders (Volume Limit: An Effective Response to the India Flash Crash?, 2019). However, the volume limit regulation is based on a one-size-fits-all approach and may not be able to prevent an intraday flash crash (Volume Limit: An Effective Response to the India Flash Crash?, 2019).
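The two SEBI (2012) measures described above, an order-value cap of Rs. 100 million and a 10% dynamic price limit, amount to pre-trade validation checks. The sketch below is an illustrative reading of those two limits only; the function name, reference-price input and rejection logic are assumptions, not SEBI's specification.

```python
# Illustrative pre-trade checks modelled on the two SEBI (2012) measures:
# reject orders whose value exceeds Rs. 100 million, and orders priced
# more than 10% away from a reference price.

RS_VOLUME_LIMIT = 100_000_000   # Rs. 100 million per order
PRICE_BAND = 0.10               # 10% dynamic price limit

def validate_order(price: float, quantity: int, reference_price: float) -> bool:
    within_volume = price * quantity <= RS_VOLUME_LIMIT
    within_band = abs(price / reference_price - 1) <= PRICE_BAND
    return within_volume and within_band

print(validate_order(price=500.0, quantity=100_000, reference_price=505.0))  # True
print(validate_order(price=500.0, quantity=500_000, reference_price=505.0))  # False: Rs. 250 million
```

The one-size-fits-all criticism quoted above is visible even here: the same absolute rupee cap binds very differently on a liquid index heavyweight than on a thinly traded stock.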



Regulators have to be cautious in implementing measures, weighing their possible consequences, in order to obtain the best results. Indian exchanges impose penalties on firms with a high order-to-trade ratio (“OTR”) for orders priced beyond the specified trade price range (Khandelwal, 2019). It was observed that a blanket imposition of an OTR fee did not curb unproductive orders, which imposed a negative externality on the overall market, as market participants modified their behaviour to mitigate the effect of the fee. But when the exchange used the OTR fee to manage high order submission rates on limited infrastructure, the aggregate OTR level fell, liquidity improved and volatility declined (When do regulatory interventions work?, 2019). Thus, an approach specific to the goal of a regulation, rather than a one-size-fits-all approach, works more efficiently to produce the expected results and increased market stability. Most HFT firms rent space on server racks on the same network within stock exchange premises in order to be the fastest to respond; this is called co-location (Khandelwal, 2019). The advantage of co-location is reduced latency, i.e. the time a system takes to respond to any trigger. India has among the lowest co-location charges of peer exchanges across the globe (Khandelwal, 2019). Co-location has led to a more efficient market owing to the decrease in the bid-ask spread, as market makers can respond much faster to new updates and can afford to quote much tighter prices (Khandelwal, 2019). Relatively high-speed infrastructure, the availability of co-location services and low political and regulatory risks are positive factors for co-location trading in India (Algorithmic and high frequency trading in Asia-Pacific, now and the future, 2019).
In 2015, SEBI undertook measures to ensure fair and equitable access to co-location facilities, such as fair access to data and the publication of descriptions of co-location services on exchange websites (SEBI, 2015). It continued these efforts in 2016, when it advised stock exchanges to allow direct connectivity between co-location facilities and between the servers of a stock broker placed in different co-location facilities (SEBI, 2016). Positive measures adopted by stock exchanges are also important, as observed in this case. SEBI's promotion of arrangements such as co-location, which have positive effects on the market, reduces the deployment of rogue algorithms and leads to a more stable market. Further, such measures increase competition and help support smaller participants. Analysing the nuances of algorithmic trading trends in the market and directing regulatory and promotional measures in consonance with market trends can lead to more effective regulation and preferable market activity. In 2018, efforts were undertaken to monitor algorithms: SEBI adopted measures to increase surveillance and further regulate algo-trading by mandating that stock exchanges allot a unique identifier to each algorithm approved by them (SEBI, 2018). Stock exchanges were also advised to provide a simulated market environment for the testing of software, including algorithms (SEBI, 2018). To check price swings, SEBI has also said that a penalty would be levied on algorithmic orders placed more than 0.75 per cent away from the last traded price (Nirmal, 2018). In 2020, SEBI relaxed its rules by permitting stock exchanges to introduce additional slabs up to an OTR of 2000, from the existing OTR of 500, with deterrent incremental penalties (SEBI, 2020). It also prescribed cautious limitations: on the third instance of the OTR being 2000 or more within 30 days, the concerned member is not permitted to place any orders for the first 15 minutes of the next trading day as a cooling-off action (SEBI, 2020). These rules enable a more specific approach to be adopted by the stock exchanges in light of trading activities. Such regulation allows for adaptability to specific conditions and could lead to more effective results. As per the rules established by SEBI, a member who intends to start algo-trading must have a base minimum capital of Rs. 50 lakhs (2018). Establishing a minimum requirement has several effects: although it keeps the usage of algo-trading limited, ensuring easier regulation, it can reduce competition and disadvantage weaker participants in the market. A market-wide upward revision of the minimum trading unit does not affect high-frequency traders, as it mainly influences the size of the trades they participate in; it refrains other market participants from optimizing their trade sizes and forces them to trade at the minimum specified trade size (Does trade size restriction affect trading behavior? Evidence from Indian single stock futures market, 2020). Contrary to expectations, the upward revision of the minimum size of derivative contracts in the National Stock Exchange single stock futures market had a weak positive impact on liquidity and trade volume (Does trade size restriction affect trading behavior? Evidence from Indian single stock futures market, 2020).
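The OTR framework described above can be sketched in code. Only the slab boundary (OTR of 2000) and the cooling-off trigger (a third instance within 30 days, barring orders for the first 15 minutes of the next trading day) come from the text; the function names and everything else are illustrative assumptions.

```python
# Illustrative sketch of the order-to-trade ratio (OTR) checks described
# in the SEBI (2020) relaxation discussed above.

def order_to_trade_ratio(orders: int, trades: int) -> float:
    """Orders placed per trade executed; infinite if nothing traded."""
    return orders / trades if trades else float("inf")

def requires_cooling_off(instances_otr_2000_plus_in_30_days: int) -> bool:
    # Third instance of OTR >= 2000 within 30 days triggers a 15-minute
    # no-order cooling-off at the start of the next trading day.
    return instances_otr_2000_plus_in_30_days >= 3

print(order_to_trade_ratio(orders=100_000, trades=40))  # 2500.0, above the 2000 slab
print(requires_cooling_off(3))                          # True
```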
Lowering the capital requirement could provide wider access, ensuring fair competition in the market, and promote the use of tools such as algo-trading by more participants, allowing retail traders and smaller sectors to grow. Structural factors should also be considered when dealing with the regulation of algorithms. The structure of the market can have a significant impact on regulatory costs and efficiency: higher regulatory costs invariably lead to difficulties in regulation and can impair the efficient functioning of regulations, and market fragmentation can lead to higher regulatory costs. Efforts need to be made to maintain a less fragmented market structure to ensure that regulation works. The Securities and Exchange Board of India has implemented no major restrictions on algorithmic trading and HFT, and regulators are taking steps to increase the accessibility of algo-trading by offering free market data and shared co-location services (Algorithmic and high frequency trading in Asia-Pacific, now and the future, 2019). Such an approach is beneficial, as overregulation can hamper the growth and development of algo-trading, leaving domestic market participants at a disadvantage compared to those in more developed markets.



While regulation is necessary to maintain stability, ensure market surveillance and reduce manipulation, regulators should be careful not to excessively or unnecessarily restrict the activities of traders, as doing so could have negative consequences. Australia is a leading market in financial technology innovation (Algorithmic and high frequency trading in Asia-Pacific, now and the future, 2019). The ASX has implemented several technological upgrades to improve latency and attract algorithmic traders, and it is also one of the least expensive markets (Algorithmic and high frequency trading in Asia-Pacific, now and the future, 2019). The Australian regulatory authorities seem cautiously optimistic about algorithmic trading, which has perhaps rendered Australia a leading market. In the US, the Commodity Futures Trading Commission's effort to more closely monitor the use of algorithms for derivatives trades sparked warnings that tighter regulations could lead buy-side participants to shun algorithm use entirely in their electronic trades (Baert, 2016). The call for algorithmic code to be supplied to the CFTC also raised alarm, with fears of disclosure of proprietary information (Baert, 2016). Adverse reactions to increased regulatory measures can be observed in this case. Regulators have to walk a thin line in order to maintain the balance between effective regulation and over-regulation. While it is necessary to ensure the monitoring of market activities, the reactions of participants need to be gauged before enacting regulation: under-regulation can lead to increased manipulation, whereas overregulation can lead to decreased participation. Thus, various factors need to be considered while regulating this form of trading. Reforms to market structures and new methods of regulation more suitable to increased levels of algorithmic trading should also be considered. Regulators should also state their objectives while imposing measures.
Regulators will be able to deliver better outcomes if the problem and the desired outcomes are stated upfront as part of the objective, as this helps optimise the design of the market intervention and leads to outcomes that are readily measurable and visible to the public in whose interests the interventions are made (When do regulatory interventions work?, 2019). Order-to-trade ratios and volume limits can be used in some scenarios to limit the number of trades on the market, which controls the effect of these super-fast algorithms. However, with flash crashes occurring around the world, the need for computerized regulation is becoming increasingly evident. Computerized regulation should be considered in order to keep up with the speed of algorithmic trading and to monitor trades that occur within seconds. Regulatory mechanisms that develop at an equal or faster rate than trading mechanisms should be pursued.


7. Global Markets and Regulatory Authority

Algorithmic trading has surpassed domestic trade and has made its way into foreign markets. Today, it has become the primary form of trading in foreign exchange markets as well: it is estimated that algorithms currently account for about 80% of trading volume in foreign exchange markets (Shenoy, 2020). I have previously examined the economic effects of algorithmic trading and explicated the need for its regulation. In this section, I shall examine the existing regulation of algorithmic trading in foreign exchange markets and present my analysis of the same. The FX Global Code, released in early 2017, is a common set of principles applicable to all FX market participants, aiming to promote the integrity and effective functioning of the FX market (Rethinking spot FX regulation, 2019). It is built around six leading principles, namely ethics, governance, execution, compliance, risk management and information sharing, and also has several supporting principles (Rethinking spot FX regulation, 2019). The Global Foreign Exchange Committee reports that over 150 market participants made a statement of commitment to the code in less than one year, with over 80% of these statements made by private sector market participants (Rethinking spot FX regulation, 2019). Regulatory measures and the establishment of market principles such as these are important for ensuring the stability of the foreign exchange market and avoiding events such as flash crashes. However, until such a code is adopted by the majority of participants in these markets and enforcement mechanisms are set up, the risk of market adversities and manipulative practices remains significantly high. The foreign exchange market remains one of the most opaque markets, and the implementation of stricter disclosure rules, including measures such as time-stamping, is necessary to enhance transparency (Rethinking spot FX regulation, 2019).
Efforts to maintain records of transactions and timely reporting mechanisms for suspicious cases should be undertaken to deliver a fairer and more transparent FX market (Rethinking spot FX regulation, 2019). Indian authorities are promoting the use of algorithmic trading and similar tools in foreign exchange markets. In 2010, the NSE enabled the Financial Information eXchange (FIX) protocol on its trading platform to boost transaction speed for overseas investors using direct market access (Gupta, 2019). The growing foreign exchange market, with its increase in algorithmic trading, calls for effective regulation. Efforts should be undertaken to set up an international regulatory authority to monitor algo-trading in the forex market. The formation of such an authority is a complex task involving the cooperation of multiple nations. It is therefore imperative that discussions regarding the governance of algorithmic trading in foreign markets be commenced and advanced by nations. Unlike national regulation, which can be imposed instantaneously in response to market adversities, regulation of foreign markets requires the cooperation of countries and is a time-consuming procedure.

8. Conclusions

In conclusion, the prevalence of algorithmic trading in today's markets and its expanding reach demonstrate the need for its regulation. Such trading has various economic consequences, the most common being flash crashes and similar events, which need to be monitored and regulated effectively. Monitoring and regulating such trading is a complex task. Computerized regulation would arguably be the most effective method, given the speed at which transactions occur. Challenges remain in formulating such regulation, as it must be developed in ways that do not hamper the growth of algorithmic trading. Further, the need for regulation of algorithmic trading in global markets can also be observed. Although efforts can be made to develop the most effective regulatory strategies, regulating such a new and rapidly developing form of trading is a challenging task with a multitude of factors to be considered, and its success can only be judged once regulation is implemented.

References



Aggarwal, DK. 2019. Will the rapid rise in algo trading leave traditional traders behind? The Economic Times. [Online] 10 August 2019. [Cited: 25 July 2020.]
Zhou, Hao and Kalev, Petko S. 2019. Algorithmic and high frequency trading in Asia-Pacific, now and the future. Pacific-Basin Finance Journal, 2019, Vol. 53.
Baert, Rick. 2016. Effort to police algorithms raising alarms. Pensions & Investments. [Online] 08 August 2016. [Cited: 22 July 2020.]
2020. Definition of Algorithm Trading. The Economic Times. [Online] 03 August 2020. [Cited: 03 August 2020.]
Banerjee, Anirban and Banerjee, Ashok. 2020. Does trade size restriction affect trading behavior? Evidence from Indian single stock futures market. Journal of Futures Markets, 2020, Vol. 40, No. 3.
FE Online. 2017. Algorithmic trading case: Income Tax department searches premises of ex-NSE top officials, brokers. Financial Express. [Online] 17 November 2017. [Cited: 26 July 2020.]
Gupta, Anuptriya. 2020. Algorithmic Trading In India: History, Regulations, Platforms And Future. QuantInsti. [Online] 2020. [Accessed 3 August 2020.]


12. 13. 14. 15. 16. 17.


19. 20. 21. 22. 23.


213 HFT and Market Quality. Biais, Bruno and Foucault, Thierry. 2014. 5, s.l. : Bankers, Marets and Investors, 2014, Vol. 128. High speed trading begets high speed regulation: SEC response to flash crash, rash . Serritella, David M. 2010. s.l. : U. Ill. JL Tech. & Pol'y, 2010, Vol. 433. High-frequency trading and the flash crash: structural weaknesses in the securities markets and proposed regulatory responses. Poirer, Ian. 2012. s.l. : Hastings Bus. LJ , 2012, Vol. 8. IOSCO. 2011. Regulatory issues raised by the impact of technological changes on market integrity and efficiencyt. [Online] July 2011. [Cited: 25 July 2020.] Kant, Ravi. 2019. Why algorithmic trading is dangerous Asia Times; May 8 2019;. Asia Times. [Online] 8 May 2019. [Cited: 25 July 2020.] . Khandelwal, Nitish. 2019. 5 Little-known Facts About Algorithmic Trading. Business World. [Online] 13 April 2019. [Cited: 29 July 2020.] Khandelwal, Nitsh. 2019. 5 Little-known Facts About Algorithmic Trading. Business World. [Online] 13 April 2019. [Cited: 29 July 2020.] Kirilenko, A.A., et al. 2014. The flash crash: The impact of high frequency trading on an electronic market. SSRN. [Online] 2014. [Cited: 24 July 2020.] Limit Up–Limit Down: an effective response to the “Flash Crash”? Dalko, Viktoria. 2016. s.l. : Journal of Financial Regulation and Compliance, 2016. Nirmal, Rajalakshmi. 2018. All you wanted to know about Algo trading . Hindi Business Line. [Online] 02 April 2018. [Cited: 26 July 2020.] Oberoi, Rahul. 2018. Slow & steady, Algo-trading Takes Up Decent Share on Dalal Street. The Economic Times. [Online] 15 October 2018. [Cited: 25 July 2020.] PTI. 2018. Sensex ends in red after 1,000-pt flash crash. The Times of India . [Online] 21 September 2018. [Cited: 23 July 2020.] Rethinking spot FX regulation. Kang, Min-woo. 2019. 4, s.l. : Law and Financial Markets Review, 2019, Vol. 13. SEBI 2018. Rules and Regulations in India for Algorithmic Trading. [Online] 2018. SEBI. 2016. 
Broad Guidelines on Algorithmic Trading for National Commodity Derivatives Exchanges. Securities and Exchange Board of India. [Online] 27 September 2016. [Cited: 28 July 2020.] SEBI. 2015. Co-location / proximity hosting facility offered by stock exchanges. Securities and Exchange Board of India. [Online] 13 May 2015. [Cited: 28 July 2020.]; SEBI. SEBI. 2020. Guidelines for Order-to-trade ratio (OTR) for Algorithmic Trading. Securities and Exhange Board of India. [Online] 24 June 2020. [Cited: 25 July 2020.]

214 AI & Glocalization in Law, Volume 1 (2020) 25. SEBI. 2018. Measures to strengthen Algorithmic Trading and Co-location / Proximity Hosting framework. Securities and Exchange Board of India. [Online] 09 April 2018. [Cited: 25 July 2020.] 26. SEBI. 2012. SEBI circular. Securities and Exchange Board of India. [Online] 13 December 2012. [Cited: 25 July 2020.] 27. Shenoy, Deepak. 2020. Algos are Changing India's Stock Markets. Livemint. [Online] 13 January 2020. [Cited: 28 July 2020.] . 28. The regulation of high-frequency trading: A pragmatic view. Moosa, Imad. 2015. 1, s.l. : Journal of Banking Regulation , 2015, Vol. 16. 29. Volume Limit: An Effective Response to the India Flash Crash? Dalko, Viktoria, and Michael H. Wang. 2019. 2, s.l. : Journal of Financial Regulation , 2019, Vol. 5. 30. When do regulatory interventions work? Aggarwal, Nidhi, Panchapagesan, Venkatesh and Thomas, Susan. 2019. Mumbai : Indira Gandhi Institute of Development Research, 2019, Vol. 19.


20. The Age of AI and the New Global Order
Dyuti Pandya
Research Member, Indian Society of Artificial Intelligence and Law, India
Student, University of Mumbai, Mumbai, India

Abstract. Human history and politics have been shaped by human actions and by interactions within networks and organizations. The introduction of technology into society gave rise to artificial intelligence and its advancement, and the emergence of intelligent neural networks holds out the prospect of a fundamental change. This paper offers a general analysis for understanding AI and a new global order. It examines the benefits and drawbacks of implementing AI in today's international political affairs, and asks how such implementation can aid the decision-making processes through which a new order is established. The paper also studies the predictive use of AI in understanding potential outcomes in the international arena, throwing light on the economic, security, and disruptive dimensions of AI. Such a study is essential for gauging a nation's functioning and adaptability to the new technology. The paper concludes on an optimistic note about AI and the harnessing of its potential to shape political institutions in the search for a new order.
Keywords: Artificial Intelligence; Diplomacy; Societies; Political regimes; New Global Order; International Relations; Foreign Policies; Utopia



“Anything that could give rise to smarter-than-human intelligence—in the form of Artificial Intelligence, brain-computer interfaces, or neuroscience-based human intelligence enhancement—wins hands down beyond contest as doing the most to change the world. Nothing else is even in the same league (Saperstein, 2012).”


AI & Glocalization in Law, Volume 1 (2020)

History has witnessed the rise and fall of several empires, and civilizations have faded as times progressed. The emergence of new societies led to the establishment of what we call a world order, which took shape after societies organized themselves around the nation-state. To create a sense of linkage between nations, we rely on this abstract formation of a world order. It has been brought about by a diplomatic rationale, and the global systems under it have been created by the merging of economic, technological, and ideological forces. This rationale has given rise, over time, to new evolutions in the cultural and commercial entwining of societies.

The advancement of the global order has been disrupted as time has gone by, severing the relationship between the ethical and moral values that existed at its creation. Orders are usually disrupted when an upheaval occurs, coinciding with a new social transformation on a progressively wide but slower scale. The purpose of having a global order is to impose logic on the chaotic proceedings of the past. It was the information revolution that created the current global order, and the world is now being surged by a new wave: the wave of technology. This technological surge will shape the dialogue between nations and determine which nation, if any, emerges as a superpower, depending mainly on the technological advancement each nation undergoes.

Artificial intelligence has already been introduced into our daily lives. As time goes by, its advancement and the involvement of big data will create a new movement. The revolutionary upbringing we are witnessing originally paved its way from mere science fiction books. The question remains, however, whether the future will be one of utopia or dystopia, and that is something that needs discussion.
If there is one concrete thing that can be said, it is that AI will change the facets of societies and their functioning. The components of technology, including AI, big data, artificial neural nets, and machine learning, will play an important role in shaping the interaction between humanity and the emergence of new technocratic global societies.


Defining AI in the context of international affairs

I wanted to think of a new definition for AI, one that integrates the aspect of international affairs. In this context, I would define artificial intelligence as a systemic integration of different algorithms, going beyond mere criticality, to formulate new modes of anticipating and predicting possible future events, thereby giving rise to a new edict. Using this integration in the study of global affairs could either strengthen the relations between nations or end up pulling the strings even further apart.

This brings Isaac Asimov's laws of robotics into the frame. Having read science fiction all my life, I have frequently seen these laws taken as a framework for the basis of AI's existence. These guidelines are, after all, just principles for how a system should behave in order to maintain the human-machine relationship along the way; they define a sense of ethics when it comes to understanding the true nature of artificial intelligence. In today's times, the dialogues between nations have created more problems than concrete solutions, and reshaping those dialogues would mean developing a new mechanism of global existence. This is where AI will play a proficient role. In the process of creating an AI to resolve the above-mentioned scenario, abiding by Asimov's laws of robotics seems a plausible option: it would help the designers, creators, and programmers working in good will to avoid letting the AI slide into its disruptive nature. Let me define the laws nevertheless (M. Wells):

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
—Isaac Asimov, 1942

Understanding AI and Global Affairs

Integrating the use of AI into global affairs will collectively help renew international competition between nations, and an order between political systems might be established in the process. Alongside political uprisings, we are experiencing a digital uprising with the introduction of AI. Algorithmic diplomacy will enforce a new flow of dialogue between countries: algorithms will be tailored to analyze vast amounts of data and formulate reasonable strategies. This will be widely useful in establishing narratives between nations and avoiding the sorts of miscalculation that would otherwise deter relations. Another use of AI, in sensor-based technologies and machine learning systems within government enforcement, will likely help in curtailing regulation.
This move, as suggested by Tim O'Reilly, will help in framing and modifying the patterns of government functioning through fresh data on a regular basis (Reese). Artificial intelligence will help institute an interchange between governing the exploitation of scarce resources and preserving the free will of individuals to live in a technocratic society. Such a society has been, and is being, shaped by four waves (Lee, 2018), and these waves will act as contributory allies in the development of AI societies. The first two waves, internet AI and business AI, have already hit humanity and are something we perceive in our everyday lives; they have reshaped the digital and financial worlds with their gripping algorithms. The second wave, business AI, can serve as the vehicle for integrating AI into global affairs: processing thousands of weakly interlinked variables from the available data of history and current affairs will give outcomes beyond our own intellect, which will help in binding societies as a whole. The third and fourth waves will take time to weave completely into the fabric of our civilization. The perception wave of AI will help create a new vision of the global economy through a new geopolitical landscape. The question here is whether one country will seize global leadership or whether it will be a unified collective of AI superpowers. One needs to wait.


A Global Way of Looking: The Yin-Yang of Using AI

The Yin-Yang of artificial intelligence depends upon how it affects the development of major sectors. AI will be implemented in the area of economic disruption and opportunity, and the global market can be rearranged by introducing AI into it. Once we understand where domestic and national interests in global markets will lie when AI becomes the key player, AI development programs will become as important as the objectives of foreign policy leaders. Foreign ministries will likely need to retool how they observe the development of AI technologies in markets. The data that AI garners will be important in stabilizing regional inequality, migration, and trade issues; it will also help create dialogue between international players and will likely ensure the free flow of information for activities that viably increase the potential for economic creation and production.

The main research advancement will lie in how machine learning helps intelligent systems distinguish between algorithms and their variations. The extent of the data poured into such systems will play an important role in understanding the thought processes of individuals, and this approach will benefit the outcome in the long run. Through intellectual monitoring, future policies will likely be shaped, helping to further and renew relations. Artificial intelligence technologies will end up integrating political regimes by providing a framework and structuring a diversified contribution to a new global order.
This needs to be highlighted to deal with a possible AI arms race that might escalate if the array of maintaining an order goes into a different direction altogether. The major powers of the world will need to initiate common policies and work towards a collective interest but not a singular interest. An example of the policy will be limitations on offensive capabilities. The main objective is not letting the autonomous weapons fall in the hands of terrorists. The handling of autonomous weaponry in the wrong hands needs to be stipulated by a substantial public policy and diplomacy. This needs to be done to establish a code of moral conduct and at the same time influence stakeholders in different sectors. In the philosophical aspect of studying global affairs, AI will also influence the democracy and ethics viability of a nation. We know the job of foreign ministers in democracies


include reflecting values that will strive for open societies. To have a collective order, the ministries need to be promoting and strengthening the institutions that will reflect social inequality and representation around the globe. Then we can focus on creating a network between the human/civil rights system in the sense of governance, market, and security in the international regime. With the advent of artificial intelligence, it is a possibility that a sense of tension between security and freedom will likely arise within the societies. Ministries will need to embrace the AI-driven tools for projects that will aid development, and produce a potential to keep the possibility of widening gaps between different nations. One can quote the political and progressive agenda that encircled the development of the Internet. The challenge for the foreign policy will be to provide a viable stand when it comes to artificial intelligence and leverage policy engagement within different nations and different sectors. As we discussed above how one will be able to integrate AI into global affairs, we need to talk about the utopian and dystopian side of it too. There are three challenges that need to be addressed before at first. The policymakers, researchers, governments, and the citizens need to encounter these before looking at the nature of Artificial Intelligence (Wright, 2019). First challenge is technology is how is technology going to impact domestic political regimes? And how that impact is going to affect competition between the countries in establishing a world order? Will AI be able to thrive in open societies? Or AI will impose digital authoritarianism? We can take the example of China that is building a system within its authoritarianism regime that will stand at the crossroad with what America is building within its liberal democracy. How Artificial Intelligence will change the means of production in relation to what industrial revolution did? 
How will this impact the economic and social sectors? (Schwab, 2016) The third challenge is what happens when AI manages to surpass human intelligence; a possible answer is that the singularity of AI will accelerate, or decelerate, the way humanity perceives life and civilization (Kissinger, 2018). Addressing these questions also leads us to adapt to a shift in the way development, privacy, and governance were previously conceived. Government policies need to be reworked, country by country, when it comes to analyzing the trade-offs among data privacy, digital monopolies, online security, and algorithmic bias. If we want to use artificial intelligence, we need to use it in a manner that contributes to the development of a global order. One can begin by asking whether favoring privacy over technological progress is what is needed to build the society, or the reverse; the answer is that leveraging technology to build a new order will need new policies across borders.

To address the issues of establishing a global order in international relations, we need to raise one question: how will AI-related technology enable a state to maintain its domestic political regime? Will the country get rich and maintain control over its citizens? Looking through the vision of an optimist, AI-related technologies do promise to have a considerable effect on the way society is governed. What exactly do we know of the intelligence's capabilities? For now, one can predict that AI will be able to analyze the game of diplomacy by assessing a series of cases and data to calculate the relationships between political actors and give a plausible outcome. This will play an integral part in deciphering foreign policies and the geopolitical arena. The intelligence, as we know, will not be swayed by emotions like human beings and can fixate on the desired outcome. Alan Turing once predicted, "Machines will outstrip our feeble powers" (Philosophia Mathematica; Intelligent Machinery, A Heretical Theory, 1996), and this is what might just happen.

I do not want to focus wholly on optimistic imagery, but also to bring up AI's role in enabling surveillance and giving rise to a new, digital form of authoritarianism. As times have progressed, new forms of authoritarianism have been witnessed: the era of fascism in the 1920s, communist rule in China from 1950, the surge of the liberal democratic regime in America, Russia's hybrid regime of democracy and authoritarianism, and so on. All the political systems of the 21st century have a myriad of digital variants emerging within them (Keulenaar, et al., 2018). This helps in understanding the nature of the intelligence, because we have already lived through the changes brought by the industrial, information technology, and Internet revolutions, and over time we have become habituated to their nature. The technologies implemented for an enhanced state of order will help provide a tangible path to a lasting order in the age of technology.
The intelligence will enable political exchange informed by the vast amounts of data generated by industrial and commercial operations. It will help create a pathway for how policymakers view the globe and assess decisions about possible future events. A likely example is the modeling of complex concessions with the help of such a system: its predictions can be utilized to identify valuable tactics and positions that demonstrate successful relationships among nations, accumulated over a period of time from the knowledge the system acquires of previous foreign policy decisions globally. Using AI in restoring a global order will likely involve its usage across trade, defense, and development work. The intelligence will be able to allocate commercial resources, thereby facilitating trade networks within a nation as well as between nations; it will help enable regulation that keeps government power in check, and it will help in predicting and monitoring vulnerability to climate change. The development and advancement to come will represent a kind of revolution in technology. The intelligent system will become a necessary tool for decision-making and management, predicting from varied sources of information and synthesizing the data to obtain the desired outcome. AI in foreign policy is already involved in the making of policies, in initiating diplomatic stances, and in helping to establish bilateral and multilateral engagement (Miyake, 2019).


We know that artificial intelligence has managed to become an important tool in monitoring agreements and in overseeing military power, threats, and warfare. A Chinese AI assistant is already in everyday use in the decision-making process of the Ministry of Foreign Affairs; the nation is exploring technology trends and has collectively harnessed them for its advancement (He, 2017). The country has used the intelligence system as a simulation machine and prediction platform to examine foreign investment projects over the past few years. Further, China has revealed another project for foreign policy, known as the geopolitical environment simulation and prediction platform. This system works by feeding on huge amounts of data and provides foreign policy suggestions to Chinese diplomats (Eun Chang Choi, 2019). The machine is stated to use a network of artificial neurons and deep learning to assess risks and predict events, notably terrorist attacks and other kinds of political turmoil. The country has managed to construct a position for artificial intelligence systems in its society, and China's optimism in embracing AI as a powerful tool in the area of foreign affairs and diplomacy has alarmed the rest of the world. The other superpower, America, is developing AI technology to bring about policy changes that enhance transparency and promote awareness. Looking at the above examples and at the way the world stands today, we can say that the United States and China will likely lead the road when it comes to producing applications for AI (Lee, 2018). They will be largely responsible for contributing these innovations to the world economy, while other countries will continue to exert the valued influence needed for social evolution.
Artificial intelligence policies are being used to maintain leadership potential in this new field. It has been studied and reported that artificial intelligence may well help bridge communication gaps in areas of diplomacy and international relations, for instance by assisting in international humanitarian operations, overseeing elections, and strengthening the security of diplomatic missions. This convergence will help establish a kind of new order in terms of democracy, ethics, economic disruption, and security. The array of algorithms being used to predict the outcome of events in international affairs can bring about a major shift in the worlds of business and geopolitics. One case in foreign policy that I would like to cite is a team in Japan using algorithms to build a system that tracks rate changes and predicts what the central bank governor's next move will be, which will help analyze the majority of transactions that involve huge amounts of money overseas (Abedi, 2020). Equipped with systems that generate predictions, nation-states and non-state actors will be able to deal with rising tensions in a different way.

Focusing on these intelligent systems, we know that the algorithms an AI uses are designed by humans. As has been pointed out, the data used in designing an AI system is inputted and tagged by humans, and so has a high chance of being full of human bias and error (Sajin, 2018). Depending on the objective the AI system was designed for, this might shift into something unpredictable and disruptive altogether. Many nations, nonetheless, are developing AI systems that harness predictive capabilities to reduce human error to a certain extent. The main question here is who will bear responsibility for the geopolitical tension that might arise from a prediction made by the system. This brings me to 1983, when the world stood at the edge of nuclear war because of a false alarm produced by a Soviet computer system (Mathews, 2019). The computer screen showed several missiles being launched at the Soviet Union. Stanislav Petrov judged this to be a false alarm and disobeyed Soviet military orders and protocol; by doing so he averted what could have been the beginning of a Third World War. Even though times have progressed and systems have become more efficient, the possibility of such failure will always exist, even with AI. The responsibility lies in how well the AI algorithm is trained to undertake decisions in foreign affairs and diplomacy.

With the world amid regional conflicts and geopolitical tensions, AI policy will be harnessed to support decisions, and the whole arena of geopolitics may be transformed by the array of suggestions, simulations, and predictions made by artificial intelligence regarding foreign policies. By integrating artificial intelligence into nations' policies and affairs, one can conclude that AI will facilitate decisions free from human emotion. But before implementing AI in its totality, we need to ask whether the intelligence will propagate a utopian vision or devolve into a dystopian phenomenon, depending on the approach countries take. For now, let us focus on how AI will reshape global power.
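To make the idea of data-driven prediction in foreign affairs concrete, here is a deliberately toy sketch: a majority-vote baseline that "predicts" a policy decision by recalling whichever decision was most frequent under similar past conditions. The regimes, decisions, and data are all invented for illustration; real platforms like those described above are vastly more sophisticated, but they rest on the same principle of inferring likely outcomes from historical cases.

```python
from collections import Counter, defaultdict

# Hypothetical historical cases: (economic regime, policy decision) pairs.
history = [
    ("low", "hold"), ("low", "cut"), ("low", "hold"),
    ("high", "hike"), ("high", "hike"), ("high", "hold"),
]

# Learn, per regime, the most frequent past decision (a majority-vote baseline).
by_regime = defaultdict(Counter)
for regime, decision in history:
    by_regime[regime][decision] += 1

def predict(regime):
    """Return the most frequent decision seen under this regime,
    defaulting to 'hold' for unseen conditions."""
    counts = by_regime[regime]
    return counts.most_common(1)[0][0] if counts else "hold"

print(predict("high"))  # "hike": the decision most often taken in a 'high' regime
```

The baseline also illustrates the bias problem raised above: the model can only echo the human-labeled history it was fed, so skewed or mislabeled data skews the prediction.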
There are two fundamental ways in which AI can be used to reshape power and order (Shoker, 2019). The first is that artificial intelligence will be able to redistribute physical infrastructure. Infrastructure building will be beneficial in reshaping economic policies: tech companies will steer operations and continue to promote innovation, thereby creating a dialogue between nations around innovation, and that innovation will flourish more in liberal democracies, thus shaping the order to an extent. The second is that AI will be able to help redistribute power itself. This becomes relevant when there are a large number of global players, as it would be an efficient way of distributing power between nations on a global scale. If no single dominant player emerges, then several global players may rise and create a network of AI systems to maintain relations between nations. Further, AI will also hold an answer to social challenges, as long as the algorithms are built to ensure social good, transparency, accountability, and fairness.



How should governments/foreign ministries address the change?

The change needs to be addressed with the lessons learned in former decades. The best example is the community's response to change when the Internet became the new revolutionary medium, which transformed the conduct of international relations and foreign affairs. Diplomatic institutions will need to revise their methods and test approaches to problem-solving that make critical use of AI in their daily work; we need to integrate technical knowledge into the everyday, conservative units of institutions. There needs to be a substantial arrangement and implementation of a new framework for institutions to undertake. Governments will need to adopt new and effective strategies for handling the era of artificial intelligence, with changes ranging from economies to security to democracy promotion; we will soon need to prioritize AI and develop new competencies.

The second change needs to be addressed by allocating effective responses among stakeholders and ministries, forming a network of private companies, civil society organizations, research institutions, media, and various government agencies. This is essential because acquired knowledge is much needed when we are dealing with a technology that has not yet been taken to its full potential, and establishing a sense of new order requires effective collaboration from all sectors within multiple nations (Schwab, 2015). One example I need to highlight is that AI policy issues related to security have a parallel in the way arms control was handled in the Cold War and post-Cold War periods. Another example is the promotion of internet freedom, which would institute collaboration between the tech industries and human rights organizations. The dialogue between foreign policy and AI can thus be adapted to the method of problem-solving (Cummings, et al., 2018).
This generally includes avoiding the bureaucratic tendency to gravitate towards methods and procedures that would not necessarily bring out a solution. The way AI is changing requires a structured cycle of review and revision, and when dealing with complex matters like foreign affairs and the establishment of a global order, we need a new approach. This can be done by proposing the following (Flemming, 2020):
- Acquiring a bundle of resources that exemplify knowledge of AI and foreign affairs;
- Defining the problems of disruptive technology and addressing them accordingly;
- Evolving policies and programs on a trial-run basis, by preparing algorithms that address a specific issue and deliver an outcome (this can also include pilot execution schemes).

The AI market lives on problem-solving processes, and the software industry is pioneering a new area of knowledge acquisition. How the procurement of the assembled data will impact global studies and uplift a new order is something we need to be focusing on. We need to focus on human resources on viable commercial bases: human resources will act as the joints in advancing technology in diplomatic practice. We need to create an environment in which we gather analysts whose expertise lies in foreign policy and integrate that expertise into AI affairs. The change needs to be instrumented at a global level through the network of institutions, and we need a pragmatic approach to AI and foreign policy, one that moves away from the constraints of political regimes, budgetary allocations, bureaucracies, and a conservative pool of human resources. A radical change brought by AI will prove novel and innovative in policy planning, public diplomacy, embassy reporting, and the implementation of projects to strengthen relations. A toolbox is needed to effectuate a pathway for a new order (Scott, et al., 2018):
- Policymaking: To maintain a streamlined operation, foreign ministries will need to evaluate the issues surrounding the intersection of artificial intelligence and international relations. This will help in developing governmental policy positions and will also aid on issues related to security and ethical dimensions. Policymaking needs to assess the threats and opportunities that will set institutional change in motion.
- Public diplomacy: This means communication between governments, the public, civil society, and the media to take in the views of society and advance its values and interests. Making people aware of the implications of AI for international relations should be prioritized, and the evaluation of the basis of an order should rest on policy choices and democratic norms.
Bilateral and multilateral engagement: a dialogue needs to be started among allies to collectively strategize on, and assess, new perspectives on artificial intelligence, and to test foreign-policy responses regularly, much as the international community did with the issue of cyber security. Action in international and treaty organizations: the whole assessment rests on coordinating AI policy in both formal and informal multilateral groups. The threats and consequences that may arise from the advent of artificial intelligence need to be discussed, and foreign ministries can strengthen relations by convening influential stakeholders from the regions to confront the challenges and opportunities of AI. A foundation is needed on which a global civil society can engage with the implications of artificial intelligence for information gathering and analysis. National interests should be pursued in alliance with the objective of establishing, or contributing to, a global society and order.




We need to develop the possibilities, and understand the perils, of how artificial intelligence will shift the global order. One can agree that artificial intelligence will transform international relations and foreign policy in the near future. David Gosset, director of the Academia Sinica Europaea at CEIBS and founder of the Euro-China Forum, says: “AI, more than any other technology, will impact the future of mankind, it has to be wisely approached on a quest for human dignity and not blindly worshiped as the new master of diminished humanity, it has to be a catalyst for more global solidarity and not a tyrannical matrix of new political or geopolitical divisions” (Katte, 2018). Artificial intelligence will end up creating a mechanism for decision-making. The main objective is to ensure that innovation keeps progressing while contributing to the development of a new order. Optimistic perseverance is needed to strike an equilibrium between this disruptive technology and the way foreign relations are conducted. The following are ways AI can be incorporated in this field (Walch, 2020): Artificial intelligence should benefit people and the planet, by driving inclusive growth, sustainable development and well-being. AI systems should be designed to follow ethical and moral codes of conduct, which means respecting the rule of law, human rights, democratic values and diversity, with appropriate safeguards, including human intervention where necessary, to sustain a functioning society on moral grounds. Alongside developing and realizing AI's full potential, we must also ensure full transparency and responsible disclosure around AI systems, so that people can understand their outcomes and potential as well as the challenges that remain in reaching that level.
It must be highlighted that AI systems have to function in a robust, secure and safe way throughout their systemic and algorithmic lives. Potential risks arising in their design and operation must be assessed and managed continuously. The people and organizations responsible for deploying and operating artificial-intelligence systems must ensure their proper functioning in accordance with the above principles. Funding for development and deployment should be allocated by technology corporations, governments and foundations with humanitarian goals in mind. To invigorate AI, the humanitarian factor must be brought in: datasets and training algorithms should be built to help solve emergency crises and offer viable solutions. This will also benefit governmental organizations, thereby adding to the overall economy of a nation, but the objective should remain non-profit: to help and aid those in crisis.



All nations should share in the expertise of developing artificial intelligence, to the benefit of the whole population. AI will soon have more programmers and policymakers able to act on structural inequalities. AI systems need to be developed and positioned to assess societal risk and the problem of inherent bias. To contribute to a global society, nations must invest in education, the workforce and the economy. Policymakers will play an important role in the development of artificial intelligence in the international-relations sector. Lastly, because AI technology has broad applicability, clear codes of practice are needed so that the benefits of AI can be shared widely while its risks are managed nonetheless. Regulating AI will be a different mechanism altogether; policymakers, foreign-policy analysts, technologists and governments will need to regulate it fundamentally, drawing lessons from models adopted elsewhere, to further the procurement of a newly established order. Most necessary, when establishing an upheaval of this kind, is a public debate over its utopian and dystopian sides. This paper has offered an overview of the role AI will play in establishing a global order, and of what foreign-policy analysts need to take up when creating a dialogue between nations. If there is one last thing I would add, it is that we as a human civilization are leaving behind a system of centralized decision-making and entering an era of intelligence beyond human intellect. Before setting a new order, we will need new tools to bridge systems, algorithms and neural networks with a concerted form of decision-making. In time, a regime needs to be established and built on the capabilities of artificial intelligence, big data and neural networks.
The coming years will show what potential AI brings, and what regime, superpower or newly established order comes through.


References
1. Abedi, Sajad. 2020. Diplomacy in the Era of Artificial Intelligence. Diplomatist. [Online] March 2020.
2. Cummings, Mary L. 'Missy', et al. 2018. Artificial Intelligence and International Affairs: Disruption Anticipated. s.l.: Chatham House, 2018.
3. Eun Chang Choi. 2019. Will algorithms make safe decisions in foreign affairs? Diplo. [Online] 2019.
4. Flemming, Sean. 2020. World order is going to be rocked by AI - this is how. World Economic Forum. [Online] 2020.
5. He, Yujia. 2017. How China Is Preparing for an AI-Powered Future. s.l.: Wilson Briefs, 2017.
6. Katte, Abhijeet. 2018. Artificial Intelligence Is a Key to Future International Relations Dynamics. Analytics Magazine. [Online] August 2018.
7. Keulenaar, Emillie V. de and Melissen, Jan. 2018. Critical Digital Diplomacy and How Theory Can Inform Practice. 2018.
8. Kissinger, Henry. 2018. How the Enlightenment Ends. The Atlantic. [Online] June 2018.
9. Lee, Kai-Fu. 2018. AI Superpowers. s.l.: Houghton Mifflin Harcourt, 2018.
10. Lee, Kai-Fu. 2018. Why China Can Do AI More Quickly and Effectively Than the US. WIRED. [Online] October 2018.
11. M. Wells, Philip. The Importance of Isaac Asimov's Three Laws of Robotics. [Online]
12. Mathews, Dylan. 2019. 36 years ago today, one man saved us from world-ending nuclear war. Vox. [Online] September 2019.
13. Miyake, Kuni. 2019. How will AI change international politics? The Japan Times. [Online] January 2019.
14. Turing, Alan. 1996. Intelligent Machinery, A Heretical Theory. Philosophia Mathematica, Vol. 4, No. 3, 1996.
15. Reese, Byron. Episode 51: A Conversation with Tim O'Reilly. GIGAOM Voices in AI. [Online]
16. Sajin, Stas. 2018. Preventing Machine Learning Bias. Towards Data Science. [Online] November 2018.
17. Saperstein, Gregory. 2012. 5 Minutes With a Visionary: Eliezer Yudkowsky. CNBC. [Online] August 9, 2012.
18. Schwab, Klaus. 2015. The Fourth Industrial Revolution. Foreign Affairs. [Online] December 2015.
19. Schwab, Klaus. 2016. The Fourth Industrial Revolution: what it means, how to respond. World Economic Forum. [Online] 2016.
20. Scott, Ben, Heumann, Stefan and Lorenz, Philippe. 2018. Artificial Intelligence and Foreign Policy. 2018.
21. Shoker, Sarah. 2019. How artificial intelligence is reshaping global power and Canadian foreign policy. [Online] May 2019.
22. Walch, Kathleen. 2020. How AI Principles Help Shape AI Globally. Forbes. [Online] January 2020.
23. Wright, Nicholas D. 2019. Artificial Intelligence: China, Russia and the Global Order. s.l.: Air University Press, 2019.


21 Artificial Intelligence and Law: Competition Law and Algorithmic Pricing
Rishabh Arora & Vasundhara Mahajan
Students, Amity University Kolkata, India

Abstract. Artificial Intelligence (AI) is poised to transform the world. Intelligent machines now enable high-level cognitive processes such as reasoning, perception, learning, analytical problem-solving and decision-making, integrated with advanced data-collection technology. AI presents opportunities to complement and augment human intelligence and to enhance people's living and working standards. Algorithms are increasingly adopted by enterprises as an intrinsic part of their business models, given the availability of huge volumes of data and the quantum leap in AI-driven automation. Over time, competition has shifted to the digital environment, raising the question whether conventional competition-policy mechanisms, produced in an earlier era, remain pertinent in addressing algorithmic, competition-distorting practices. In particular, the collusive implementation of algorithms and algorithmic tacit collusion pose enforcement challenges for officials and administrators, because existing ex post measures cannot adequately tackle such competition-distorting algorithmic behaviour. The paper discusses the challenges faced by antitrust law in India, offers a critical analysis of the artificial-intelligence outlook for 2025, and considers the impact of COVID-19 on the artificial-intelligence market. Keywords: Artificial Intelligence, Algorithms, Competition Law, Machine Learning





An algorithm is a series of instructions fed to a computer to carry out a task. Pricing algorithms make it possible to continuously modify and refine individual prices on the fly, discovering trends in a wide quantity and diversity of data and pushing prices toward their maximum. As organizations obtain more customer information and systems provide further resources for product selection and recommendation, advertising becomes increasingly complex, personalized and customized. While the idea of pricing algorithms is not new, it has become popular in recent years as commerce has gone global. Enterprises, internet stores, shipping providers, the pharmaceutical industry and others all use algorithms to set rates depending on demand and supply. Amazon is a shopping giant whose technology may enable certain stores, or destroy them. Moreover, the European Commission's draft study on the workings of e-commerce noted that about 500 third-party vendors on Amazon use pricing mechanisms to price goods on the basis of various criteria. Amazon gives itself an advantage by not charging the delivery price on its own goods; other retailers have to compensate Amazon to receive the same treatment. From a competition standpoint, what counts is how such algorithms are currently used: their use is not per se illegal, but unequal use of them may be. The consequences of algorithmic pricing for competition are twofold. Not only does it impinge on the privacy of an individual, by changing rates depending on the data gathered about that person from various online outlets; it also allows businesses to tacitly engage in online collusion. Anti-competitive agreements are forbidden under Section 3 of the Competition Act, 2002.
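The continuous repricing loop described above can be sketched in a few lines. The function name, thresholds and demand figures below are invented for illustration; real systems ingest far richer signals.

```python
# Minimal dynamic-pricing sketch: nudge the price up when observed demand
# is above baseline, down when it is slack, within a floor/ceiling band.
# All parameters are illustrative, not drawn from any real retailer.

def reprice(current_price: float, demand_index: float,
            floor: float = 50.0, ceiling: float = 200.0,
            step: float = 0.05) -> float:
    """demand_index > 1.0 means demand above baseline; < 1.0 means below."""
    if demand_index > 1.0:
        new_price = current_price * (1 + step)   # demand is strong: nudge up
    else:
        new_price = current_price * (1 - step)   # demand is weak: nudge down
    return max(floor, min(ceiling, new_price))   # clamp to the allowed band

price = 100.0
for demand in [1.2, 1.3, 0.8, 1.1]:              # a toy stream of demand readings
    price = reprice(price, demand)
print(round(price, 2))                           # final adjusted price
```

Run continuously against live demand data, a rule of this shape is what lets a seller "refine individual prices" far faster than any human repricer could.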
That section will not, however, capture collusion effected through pricing algorithms, because there is rarely an explicit agreement between firms, only a subtle one. Even in its online form, collusion by algorithmic pricing need not be explicit, and can be achieved if the system is designed for it. As an OECD paper notes, algorithms can be used to predict competitive risks quite easily, particularly the entry of a new competitor or the growing influence of an established player, which in effect lets an organization take pre-emptive strategic decisions. Some 'tacit conspiracies' are not even visible to the human eye. Suppose, for instance, there are two websites that promote online ticket booking. When rates on one of them rise in response to heavier traffic on that platform, the other will automatically sense the shift and raise or lower its own prices according to its business strategy. Furthermore, if a single customer had been favouring a specific airline, the other airlines' software might detect this and lower their rates to win that client. Such outcomes generally turn on market activity and the level of contact between firms. The difference between explicit and tacit collusion illustrates how, in a market with few sellers and straightforward supra-competitive pricing policies, parallel conduct can be the logical outcome of fair, unilateral business behaviour. It is because such implicit collaboration, or conscious parallelism, lies beyond the


scope of antitrust laws on agreements between competitors. From a policy viewpoint, though, even a tacitly collusive outcome may be undesirable, because it gives companies the power to depress output or raise prices to the disadvantage of customers just as an overt arrangement does. This question needs to be tackled: addressing algorithmic collusion may require a different interpretation of what an antitrust 'agreement' is, since establishing an agreement between rivals is a precondition for enforcing the law against collusive outcomes. Competition-enforcement agencies are studying the impact of artificial intelligence on markets and examining AI-related activities more carefully. In earlier times, antitrust authorities looked for evidence of monopoly and of competitors in the product market. Nowadays, AI expedites price-matching software and assists price conspiracies by way of price monitoring. While firms may intelligently adjust their prices to their competitors', they may not exchange details of upcoming pricing intentions either explicitly or implicitly, for example through price signalling. This creates a modern compliance challenge for firms that monitor algorithms, use price matching, or execute smart contracts on blockchains, especially in markets with only a few large competitors. In other instances, AI can ease the manipulation and abuse of market power, whether through unfairness and bias or through the exclusion of challengers.
Such abuse becomes possible through a merger or a full collaboration agreement resulting in the amalgamation of a large and distinctive set of 'Big Data', or where a dominant company holding such a set of 'Big Data' uses that dominance to discriminate against its competitors or consumers. At the same time, enforcement agencies acknowledge that AI can also intensify competition, by accelerating targeted marketing and speeding competitive reactions to price variations, which may ultimately deliver more rivalry and more cost-effective, better services for consumers.
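The price-monitoring dynamic sketched above, where reactive algorithms ratchet prices upward with no communication between firms, can be simulated with a toy model. Both firms' rules, the starting prices and the ceiling are hypothetical.

```python
# Toy simulation of algorithmic tacit collusion ("follow the leader").
# Firm B's algorithm simply matches firm A's last observed price; firm A's
# algorithm probes a small increase whenever it sees B matching it.
# No message ever passes between the firms, yet prices ratchet upward
# until they hit an (invented) monopoly-level ceiling.

CEILING = 150.0

def a_rule(own: float, rival: float) -> float:
    if rival >= own:                      # rival matched me: probe 5% higher
        return min(CEILING, own * 1.05)
    return rival                          # rival is cheaper: fall back and match

def b_rule(own: float, rival: float) -> float:
    return rival                          # a pure price-matching algorithm

a, b = 100.0, 100.0
for _ in range(60):
    a = a_rule(a, b)                      # A moves first...
    b = b_rule(b, a)                      # ...B's crawler re-prices moments later
print(a, b)                               # both end pinned at the ceiling
```

Neither rule encodes an agreement; each firm acts unilaterally on public price data. That is precisely why conduct of this shape is so hard to fit inside a statutory definition of "agreement".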

2 Obstacles in Antitrust Law and Potential Propositions: The Indian Outlook
In India, competition is regulated by the Competition Act, 2002, passed by the Indian Parliament. With its passage, it replaced the earlier Monopolies and Restrictive Trade Practices Act, 1969, which had suffered from numerous frailties in dealing with monopoly concerns. Indian antitrust law is mostly framed along the lines of the European antitrust system and has often imported concepts from US competition law. Section 3 of the Act, which is similar to Article 101 of the Treaty on the Functioning of the European Union (TFEU), enshrines the clause banning collusion and provides the applicable clause restricting horizontal price-fixing arrangements.



Different Classes of Algorithmic Collusion under the Current Legal System. The current problems can be placed in two categories. In the first, the pricing algorithm promotes or strengthens collusive activity that already falls within the existing ambit of competition law; in the second, the algorithm results in a comparatively new form of collusion that is not covered by the existing legal structure. In the second category, the algorithm may enable implicit cooperation without any contact between the competitors being required. Monitoring algorithms and the hub-and-spoke model fall into the first category, while signalling algorithms and algorithms based on machine learning fall into the second. Monitoring algorithms and the hub-and-spoke model. In the case of monitoring algorithms, the function of the software is confined to the effective implementation of the underlying competition-distorting agreement. Since the underlying arrangement actually exists and is prohibited by antitrust law, the existing legal system is adequate to deal with such conduct; the only novel aspect is the use of a computer algorithm to facilitate the execution of the unlawful agreement. Similarly, in the hub-and-spoke model, the algorithm is just a means of implementing an established consensus. In the Uber scenario, for example, all drivers permit Uber to decide the price of their services. While there is no arrangement between the drivers themselves to set converging rates, they essentially permit a third party (Uber's algorithm) to set rates on their behalf. The drivers thus accept that they will not demand a price higher than the algorithm-determined one.
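The hub-and-spoke structure of the Uber example can be sketched as follows. The surge formula, driver names and figures are invented for illustration and do not reflect Uber's actual algorithm.

```python
# Hub-and-spoke sketch: one platform algorithm (the hub) computes a single
# fare that every driver (a spoke) charges. Drivers never agree with each
# other; each agrees only with the platform, yet prices are uniform.
# The surge formula and all numbers are invented for illustration.

def hub_fare(base: float, demand: int, supply: int) -> float:
    surge = max(1.0, demand / max(supply, 1))   # crude surge multiplier
    return round(base * surge, 2)

drivers = ["driver_1", "driver_2", "driver_3"]
fare = hub_fare(base=80.0, demand=120, supply=40)   # one centrally set fare
quotes = {d: fare for d in drivers}                 # every spoke charges it
print(quotes)   # identical quotes, with no driver-to-driver agreement
```

Because every 'spoke' delegates pricing to the same 'hub', uniform prices emerge without any horizontal agreement, which is exactly the pattern a Section 3 analysis has to grapple with.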
There is also a mutual understanding among all drivers that every driver will charge the same price for the same service, which would effectively fall within the meaning of Section 3. Hence, in such a situation, the present system of competition law is well suited to deal with such a hub-and-spoke framework. The Competition Commission of India grappled with these concerns and held that the covert coordination on Uber's hub-and-spoke platform was not unlawful: Indian law as it currently stands requires clear consent among the drivers to fix converging rates. We strongly submit, however, that the Commission erred in treating a direct agreement among drivers as a precondition for holding the hub-and-spoke platform unlawful. One reason is that all drivers know that, with Uber, all operators accept the same terms and conditions. Consequently, they indirectly delegate the pricing decision to a gateway and agree to offer their transport services at the rate set by Uber's algorithm. That is a 'concerted practice' which comes within the reach of the 'agreement' concept. There are several other arguments to


claim that India's Competition Commission erred in the preceding order, even though arguments for Uber also remain. Signalling and machine-learning algorithms. Both forms of algorithm, the signalling algorithm and the self-learning algorithm, pose problems for antitrust law. The Indian Competition Act, like the American and European antitrust regimes, does not prohibit parallel conduct so long as it stems from autonomous action without prior contact between the parties. Where supra-competitive pricing arises from independent business choices taken by market actors (such as the decision to raise rates upon noticing that a market leader has raised prices), their actions do not fall within the limits of antitrust law. In such circumstances, as Whish and Bailey write, “such parallel behaviour does not, in itself, amount to a concerted practice under Article 101(1).” In the case of a signalling algorithm that leads to collusion, however, there is little likelihood of establishing blame, because there is no communication. With self-learning algorithms, their very nature makes it nearly impossible to determine whether an algorithm is colluding with a competitor's algorithm. Because humans cannot reverse-engineer the processes that took place inside the 'black box', it is difficult to determine how the algorithm arrived at the final rate for a specific commodity, and whether it coordinated or interfered with another algorithm so as to produce supra-competitive prices. In fact, machine-learning algorithms may find that colluding with another algorithm is the expected way to achieve the aim of maximizing yield. The vendor using the algorithm may neither be conscious of such (unlawful) conduct nor wish to engage in collusive behaviour.
Therefore, it may not be feasible to overcome the difficulty of identifying algorithmic manipulation that triggers supra-competitive prices. The promise of the algorithm-driven economy. In most respects, the digital economy has overhauled its physical predecessor, and its productive capacity is projected to expand by leaps and bounds. Take electronic shopping: it encourages full accountability, ensures symmetrical information flows, and is convenient. It lowers entry barriers and lets firms grow without obstacles, which reduces the accumulation of power in one player's pocket and makes monopoly an unusual occurrence, bringing the digital economy closer to a competitive market. However, the exponential development of online channels is driven primarily by the emergence of big data and rational, self-trained algorithms. Prof. Ariel Ezrachi asserts that the magnitude of data; the speed at which it is accumulated, used and circulated; the variety of data



combined; and the value of the information together characterize big data. Moreover, he notes that the use and worth of big data has escalated with the rise of big analytics: the capacity to develop machine-learning mechanisms that can ingest and interpret extensive volumes of data. Amazon, for instance, a remote-shopping retail website, uses computer algorithms to modify prices automatically rather than manually. Such algorithms mine private and commercial information to put the finest available deals on the shelves. This can fuel a 'data advantage' race among companies seeking greater business income. Given that online retailers increasingly depend on algorithmic pricing and artificial intelligence, their competitors will presumably be compelled to build 'smart' pricing algorithms of their own to maintain competitive pressure. The potential use of advanced pricing algorithms and AI to engage in collusion, or to contribute to conscious parallelism, with ultimate effects on competition in the online marketplace, poses a policy issue. It is amply illustrated by the Google case, on which the Competition Commission of India ('Commission') lately adjudicated.


3 India's Google Case Concerning Search Bias

Previously, the authority levied a fine on Google for abuse of its dominant position in general web search and advertising services. The Commission's order characterizes Google's operations as a 'platform' linking web users with online sellers and advertisers. Despite the two-sided nature of a digital platform, the Commission dismissed the argument that Google's services are rendered 'free' to internet users: it held that web users pay for the search results pages by supplying their attention, or 'eyeballs'. The Commission observed that Google engaged in 'search preferencing' by displaying different templates of results, i.e. the unfair favouring of its own pages over those of its rivals. Given that Google's search results are ranked by algorithms developed by the company itself, the mechanism is liable to be skewed in favour of its own websites. Google has the ability to interfere with its self-training algorithms and, to its benefit, skew the ranking seen on the SERP (Search Engine Results Page). This creates two alleged breaches of trust. First, Google promotes its own goods and services when ranking results on the SERP; as the dominant party, Google may rank the SERP as it judges rational, but it should not discriminate against its competitors. As the Commission also suggested, Google could have disclosed details of the algorithmic criteria in a timely manner without jeopardizing its competitive position, thereby ensuring transparency. Second, Google has access to a vast range of information records ('Big Data') based on users' search histories, subsequently transformed into data-driven


analysis and adaptive pricing. To the detriment of other players, Google could use its big-data strength to maximize income in adjacent businesses. Echoing the European Commission's view that dominant undertakings have a special responsibility not to obstruct legitimate, undistorted rivalry in the industry, the Indian regulator has expressly announced Google's 'special obligation'.

4 Collusion Supported by Machine Algorithms
The essence of cartel conduct has evolved: machines today aid conspiracy, and pricing algorithms extend the field of antitrust concern. The following are circumstances in which machines can promote developed price-collusion methodologies.
1. The messenger scenario. Machine-learning algorithms are used here to facilitate the transmission of information that runs the cartel. For example, representatives of competing corporations may settle rates, divide territories or bids, or limit output; the arrangement is then implemented and monitored by the computer systems. The systems are, in essence, pure mediators of the unlawful human conduct agreed upon.
2. The algorithm-based hub-and-spoke framework. Computerized systems serve as a central hub in the conventional hub-and-spoke framework, synchronizing the rates and other activities of the rivals. The 'hub' is the major player (or a specific participant) who oversees the conduct of all the other contenders, the 'spokes', whether collectively or separately. Prof. Ezrachi argues that, to constitute a cohesive hub-and-spoke plot rather than numerous unconnected conspiracies, there must be a 'rim' of competitors who are aware of the scheme and have reason to conclude that their own advantage depends on the effectiveness of the whole undertaking. In an algorithm-fuelled hub-and-spoke template, software algorithms perform the 'hub' role in promoting cooperation among competitors. In current market conditions, algorithmic marketing has enabled participants to react fast. Rivals usually do not deal directly with one another in digital commerce; they all use the portfolio offered by an upstream supplier. Consequently, many contenders operating on the identical system use a single algorithm, and their prices inevitably converge.
3. Algorithm-intensified conscious parallelism, or tacit collusion.



Pricing models used by businesses react explicitly to market trends and, in doing so, can become synchronized and predictive. There is no actual deal between the firms: each acts unilaterally through its own pricing algorithm, and these algorithms reach an analogous mutual understanding without being expressly bargained. Each participant, nevertheless, is well aware that the others use such pricing algorithms, and so they drift into tacit collusion or conscious parallelism. In these circumstances explicit proof is hard to gather, and liability might have to rest on the company's anti-competitive intent. Given the dynamic structure of the algorithms used, and the difficulty of identifying the exact offender, these cases are hard to establish. Artificial intelligence. The competition created by artificial intelligence (AI) may be misleading. As Maurice E. Stucke explains in Virtual Competition, the enhanced capacity of computer systems to process data at real-time speed could give a god-like view of the market, which may amplify tacit collusion. With better observation, AI is also positioned to build far more sophisticated algorithms that present a picture of simulated competition through a virtual-eye lens.
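The 'black box' concern running through this section can be made concrete with a toy: a pricing optimizer rewarded only for profit can settle on the supra-competitive price without ever being instructed to collude. The payoff table and the rival's mirroring rule below are invented.

```python
# Toy of algorithmic "learning to collude": the seller's software scores
# each candidate price by simulating many rounds against a rival whose
# algorithm matches the seller's price after one round of lag. The profit
# table is invented: undercutting wins one round, but mutual high prices
# pay best in the long run, so a pure profit search picks the high price.

PROFIT = {("low", "low"): 2, ("low", "high"): 4,
          ("high", "low"): 1, ("high", "high"): 3}

def long_run_profit(my_price: str, rounds: int = 100) -> float:
    rival, total = "low", 0
    for _ in range(rounds):
        total += PROFIT[(my_price, rival)]
        rival = my_price          # rival's matching algorithm copies me next round
    return total / rounds

best = max(["low", "high"], key=long_run_profit)
print(best)   # "high": the supra-competitive price emerges from profit search alone
```

Nobody told the optimizer to coordinate; the collusive price simply scores best once the rival's reaction is part of the simulated environment, which is why intent is so hard to locate in such cases.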

5 Key Considerations for AI Policymaking in India Two major AI strategies have been published in India, and this section responds to them. Beyond the two development plans, this policy brief focuses on a range of factors which must come together if the State is to effectively implement AI across divergent markets, cultures and innovations. Ensure Adequate Government Funding and Investment in R&D. A study of the national AI strategies of different nations shows substantial financial contributions by their governments to research and development around AI. Many of the policy documents speak of the need to safeguard national goals in the competition to produce AI. Achieving this requires a comprehensive plan for AI research and development, the recognition of nodal agencies to facilitate such projects, and the formation of institutional capacity for avant-garde research. Many countries, such as Japan, the United Kingdom and China, have created partnerships between government and industry to ensure substantial growth in AI research and development. The EU (European Union) has expressed the need to use existing PPPs (public-private partnerships), with a special focus on big data and robotics, to boost investment by over 1.5 (one and a half)


times. NITI Aayog adopted this move in its strategy: the paper catalogues the factors enabling large-scale adoption of AI and maps the ministries and government agencies that could support such development. Four committees were also set up by the Ministry of Electronics and IT in February 2018 to develop a masterplan for the national programme for AI. The committees are currently examining AI in the context of citizen-centric services; data platforms; expertise and skills; R&D; and political, regulatory and information-security perspectives. Democratize AI Technologies and Data. Reliable, clean and properly administered data is essential for training algorithms. Massive volumes of data do not, on their own, produce improved outcomes: the accuracy and curation of data should be a precondition before its quantity. Frameworks for creating and accessing further data need not be tied to concepts of centralized data storage. In general, the government and the private sector hold large quantities of data and technology. Ryan Calo has termed this a data-parity problem: only a few well-established pioneers in the sector have the resources to collect the data needed to create datasets. Access to any such data raises its own issues of privacy, ownership, protection, completeness and accuracy. A variety of methods and techniques can be implemented to allow access to such data. Open Government Data. A vigorous open-data programme is one way in which access may be provided. Open data is especially essential for small start-ups as they create prototypes. While India is a data-dense country with a National Data Sharing and Accessibility Policy in place, it still lacks a comprehensive and detailed open-data collection across fields and sectors.
Our work has shown that this is another barrier to innovation in the Indian scenario, as start-ups mostly turn to open databases in the US and Europe to develop models. This is troublesome because the populations reflected in those datasets are substantially different, resulting in solutions that are proficient for a particular population and must be retrained on Indian data. While AI is a largely agnostic technology, demographically specific training data matters wherever data analysis is population-sensitive; this applies in particular to categories such as education, health and financial details. The government will play a major role in providing access to such databases to support the operation and efficiency of AI technologies. The Government of India has moved towards open databases through the Open Government Data (OGD) Portal, which offers access to a variety of data collected by different ministries. Telangana has implemented its own Open Data Policy, which stands out for its openness and the accuracy of the data gathered, and which aims to promote AI-based innovation. State and central governments should vigorously follow and enforce the National Data Sharing and Accessibility Policy in order to promote and foster innovation. AI Marketplaces. NITI Aayog's national roadmap for AI recommends the development of a national AI marketplace consisting of a data marketplace, a data annotation marketplace and a marketplace for deployable models and solutions. In particular, it envisages a data market built on blockchain technology with the following characteristics: access restrictions, traceability, compliance with international and local regulators, and a robust price-discovery mechanism for data. Questions that still need to be addressed concern pricing and guaranteeing fair access. It will be interesting to see whether the government allows private-sector firms to supply data; many new data markets are driven by the private sector. National Infrastructure to Support Domestic Development. A comprehensive national AI development effort involves the creation of adequate indigenous network capability for data processing and storage. Although this does not automatically imply the mandatory localization of data as provided for in the draft privacy bill, capacity should be built to store datasets created by Indian nodal agencies. AI Data Storage. Storage capacity needs to grow as the amount of data processed within India increases. This involves maintaining sufficient storage space, IOPS (input/output operations per second) and the ability to process large volumes of data. AI Networking Infrastructure. Organizations need to enhance their networks in order to optimize and upgrade their efficiency at scale.
Scalability should be treated as a high priority, requiring high bandwidth, low latency, and infrastructure that provides adequate last-mile connectivity.



Future Artificial Intelligence (AI) Market Outlook

The global artificial intelligence market is projected to reach $169,411.8 million by 2025, up from $4,065.0 million in 2016, registering a CAGR of 55.6 per cent from 2018 to 2025. Artificial intelligence has become one of the fastest-growing technologies of recent years. AI exhibits features associated with human intelligence, such as language comprehension, reasoning, thinking and problem solving. Manufacturers in the market face formidable intellectual challenges in developing and refining this technology. AI is at the centre of the next generation of mobile innovations, and companies such as Microsoft, Google and IBM, along with other leading players, have successfully adopted AI as a key part of their technology. The study focuses on growth opportunities, limitations and trends in the artificial intelligence industry. The growing number of new start-ups and technical developments has led to increased investment in AI technology. In addition, the growing need to analyse and interpret vast quantities of data is driving demand for solutions from the artificial intelligence industry. The emergence of more powerful cloud-computing infrastructure and the advancement of AI techniques also strongly influence the field's growth potential. Nevertheless, the shortage of qualified and skilled workers will slow the industry's growth. The study presents a Porter's five forces analysis of the AI industry, observing the effect of factors such as the bargaining power of suppliers, competitive rivalry among contenders, the threat of new entrants, the threat of substitute products and the bargaining power of consumers on artificial intelligence productivity. The study also sets out the AI sector forecast for the period 2017 to 2022, as shown below in figure 1.

Figure 1
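As a quick sanity check on the market figures quoted above, the compound annual growth rate (CAGR) implied by the 2016 and 2025 values can be recomputed. Note that the report's 55.6 per cent figure is calculated from a 2018 baseline that is not stated in the text, so the full-span 2016-2025 rate below comes out somewhat lower. A minimal sketch:

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate: (end/start)^(1/years) - 1."""
    return (end_value / start_value) ** (1 / years) - 1

# Market sizes quoted in the text, in USD millions.
size_2016 = 4_065.0
size_2025 = 169_411.8

# Growth rate implied over the full 2016-2025 span (9 years): roughly 51%.
implied = cagr(size_2016, size_2025, 2025 - 2016)
print(f"Implied 2016-2025 CAGR: {implied:.1%}")
```

The gap between the two rates is expected: a CAGR quoted over a shorter, later window (2018-2025) need not equal the rate over the whole decade.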




Competition Analysis

This report provides a competitive analysis and profiles of key market leaders in artificial intelligence, such as MicroStrategy Inc., Apple Inc., Alphabet (Google Inc.), Baidu, IPsoft, IBM, Microsoft Corporation, NVIDIA, Verint Systems Inc. (Next IT Corp.) and Qlik Technologies Inc. The pivotal strategies adopted by key players between 2015 and 2018 were acquisitions, collaborations and product launches. Covid-19 Impact on the Artificial Intelligence Market. • Numerous organizations, from agriculture to healthcare, have deployed artificial intelligence (AI) technology to develop measures to combat the Covid-19 pandemic. From accelerating research and treatment procedures to increasing customer satisfaction, AI has aided healthcare and government institutions in a variety of ways. • AI-enabled programs have been put in place to track Covid-19 symptoms, and chatbots are being used to address public questions. • AI systems present a thorough real-time evaluation of agricultural activity to help farmers and vendors manage demand and supply and undertake inventory planning. AI has proven an effective tool in the midst of the pandemic, thus driving market growth.

Top Impacting Factors. • Increasing innovation in AI technology and a rising need to analyse and interpret vast volumes of data are anticipated to drive the growth of the AI industry. Nevertheless, the shortage of qualified and skilled workers is anticipated to hinder its development. Increasing Investment in AI Technologies. AI technologies that process collected data efficiently and predict behaviour through algorithms help improve efficiency; for example, Netflix recommends movies based on past viewing history. In the current market, AI has reinvented business management by integrating purchase-behaviour advertising, management tools, trend forecasting and other services. These are key drivers of progress in AI technologies and the machine-learning market. In addition, several small start-ups and tech companies have been investing in open-source AI systems to improve profitability in established value chains, and the expanding availability of low-cost AI technologies is also expected to contribute to the sector's development. Growing Need for Analysing and Interpreting Large Amounts of Data. AI has a wide range of applications, spanning finance, advertising and media, retail, automotive and transport, healthcare, agriculture, education, oil and gas, law, and other sectors. Technologies such as self-driving vehicles, space research and reliable weather forecasting have powered the AI market across the globe. AI is also anticipated to drive health advances through its capacity to examine large quantities of genomic data and enable more effective diagnosis and prevention of health conditions. Lack of Trained and Experienced Staff. Building AI requires solving complex algorithmic problems, and managing AI and autonomous machines is often difficult. It demands outstanding software-engineering skills and extensive expertise in distributed and concurrent information flows and in regression testing of protocols. Many places, especially emerging economies, lack people with these skills, and this shortage of skilled labour is therefore a significant constraint on the AI market.


Concluding Thoughts

Technological developments in the pricing field have outpaced regulation. There has been a noticeable rise in the share of vendors implementing algorithmic pricing tools for price determination over the last decade or so. This, combined with other characteristics of the online market, such as smart bots and high transparency, has in various cases led to collusive outcomes. These sophisticated tools track rivals' prices and respond immediately to any change in the market by processing large quantities of data within a very short period of time. Advanced machine-learning systems improve with observation and are capable of colluding without any human knowledge or intervention. Outsourcing consumer pricing to algorithms has created substantial efficiencies on both the demand side and the supply side, so a full ban on algorithmic pricing software is not a socially efficient solution. The need of the hour is effective legislation equipped to deal with tacit collusion in which the human role is limited to the rollout of algorithms, without active engagement in any price-fixing agreement with rivals. Monitoring algorithms and hub-and-spoke models fall within the ambit of the existing structure of competition law, as the role of the algorithm in these cases is limited to the implementation of an implied agreement. Nevertheless, problems arise in the case of signalling and self-learning algorithms, where there is no human contact prior to the 'concerted operation' of the algorithms. Because there is no 'agreement' reached by human suppliers, the implicit coordination generated by deploying such machine learning falls beyond the scope of competition law. The central question is therefore how to interpret 'agreement' and 'concerted action' to decide whether implicit coordination by algorithms falls within the ambit of competition law. This research recommends that human 'communication' be abolished as a prerequisite for conduct to amount to an 'agreement'. The affixing of liability is another matter that needs attention: this research proposes that the party or parties deploying the algorithm (with some caveats) be held responsible. This appears to be the most workable solution and, in turn, also addresses the question of 'robot individualization'. Detecting implicit collusion is a further obstacle; infrastructure improvements and the training of specialists in this area by the competition authority are urgently required.
Possible rules include the compulsory disclosure of software code, the adoption of a theory of collective dominance, and mandatory evaluation of algorithms prior to their actual deployment. A further dilemma arises when an AI predicts a company's future growth: if the general public invests on the strength of that prediction, the company grows exponentially in the present rather than the future, making the prediction self-fulfilling. Investment driven by AI forecasts can thus erode competition and create a vicious circle in which present growth accelerates at an alarming rate simply because people invest on the basis of the forecast.
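To make the tacit-collusion mechanism concrete, consider a minimal, hypothetical simulation (not drawn from any cited case) of two pricing bots that each follow a simple, unilateral "never be the cheapest" rule: match the highest price observed in the market. No agreement is ever communicated, yet prices ratchet up to the highest initial offer and stay there:

```python
def simulate(rounds: int, p1: float, p2: float) -> tuple[float, float]:
    """Each round, both bots observe current prices and match the market's
    highest price. This unilateral rule sustains supra-competitive prices
    without any communication or explicit agreement between the sellers."""
    for _ in range(rounds):
        top = max(p1, p2)
        p1, p2 = top, top
    return p1, p2

# One seller starts high, one low; both end locked at the high price.
print(simulate(100, 30.0, 50.0))  # -> (50.0, 50.0)
```

Real algorithmic pricing is of course far more sophisticated (often reinforcement learning over demand signals), but even this toy rule illustrates why the 'agreement' requirement is hard to apply: each bot acts entirely on its own.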


References
1. Nicholas, How Meyer v. Kalanick Could Determine How Uber and the Sharing Economy Fit into Antitrust Law (September 27, 2017), 6 Mich. Bus. & Entrepreneurial L. Rev. 2 (2018, forthcoming).
2. Richard Whish & David Bailey, Competition Law, 8th ed. (Oxford University Press, 2015), p. 603.
3. Ariel Ezrachi & Maurice E. Stucke, Virtual Competition: The Promise and Perils of the Algorithm-Driven Economy (Harvard University Press, 2016), p. 21.
4. Ryan Calo, Artificial Intelligence Policy: A Primer and Roadmap, 51 U.C. Davis L. Rev. 398-435 (2017).
5. -for-AI-Discussion-Paper.pdf



22 Steering Towards a Digitalized Education System Kapil Naresh, Ezhil Kaviya, Aarathi Manoj, Madhumidha, Ooviya Sekaran Students, VIT School of Law, Chennai, India

Abstract. This decade has marked a period of unforgettable events, to say the least. With the ongoing fight against the pandemic, where social distancing and wearing masks have become the norm, creators and inventors are racing to develop technology that adapts to these changes, paving the way for a whole new lifestyle. Among the many sectors affected, education is surely one. It is unfortunate that education policymakers have taken minimal steps to improve teaching methods, focusing instead on drilling in the existing mechanical approach. Research shows that children taught with the support of ICT (Information and Communication Technology) evince a more positive attitude towards school. Only recently have schools and universities provided the impetus for using technology to facilitate learning. But the question remains: for an institution preparing a child to face his or her future, will this minimal incorporation of technology be enough? The answer is no. In a time when the growth of technology has reached far beyond our imagination, it is high time that educational institutions, which hold the key to the overall development of a child, train students to adapt and learn with the assistance of technology. And what better way to foster this than to incorporate the concept underpinning many recent inventions: artificial intelligence. This paper explores how AI can elevate the current approach, what improvements have been made so far, and the benefits that legal education can gain from such a contemporary shift. Keywords: Education, Technology, Artificial Intelligence (AI), Distance learning, digitalization.



Paradigm shift in the education system

Following these unfortunate events, students have been subject to immense stress and pressure, as there is no clarity about the future. With no effective alternative developed, the Government of Kenya declared the school year lost. This will set back students' education and create an imbalance that is not easy to mend; moreover, bringing children back to school could undermine the immense measures taken to curb the disease. Hence it is high time we rethink the structure of our current education system and develop a more feasible and effective course of action. The only way to strike a balance is to devote our time to effecting change that offers not only a solution to this predicament but also a lifestyle for the future. Evolution of online education. Before the advancements in technology, long-distance courses were offered by a few universities based in England and involved posting copies of the materials to students; as one can imagine, it was a tedious, snail-paced process. Because the system was new, many sought alternatives to accelerate its growth and minimize its drawbacks, and technology soon provided the answer. In its initial phases online education did not thrive as expected, owing to limited globalization and, most importantly, the fact that technology was not as universal as it is now. Many preferred the traditional face-to-face system, which pushed institutions to develop hybrid or blended online education systems combining the best of both worlds: real-time classroom sessions supplemented with chat rooms, discussion boards and similar tools to keep classes lively and appealing (Online Education: Worldwide Status, Challenges, Trends, and Implications, 2018).
Although not as widely prevalent as the traditional education set-up, a proportion of individuals have been using online pedagogy platforms as their primary source of education. With this digital revolution, a student in India or any other country can access high-standard courses offered by prestigious institutions from the comfort of home. Concept of distance learning and online education. Distance learning, also known as distance education, is a form of education that permits students to study remotely from a place of convenience, without being physically present in a classroom. Instead of attending classes and lectures in a classroom, students study from home with the assistance of the internet, through online and offline modules, virtual classes, digital assignments and the like. Merriam-Webster defines distance learning as "a method of study where teachers and students do not meet in a classroom but use the Internet, e-mail, mail, etc. for having classes" (University of the People, 2020). Students who live far from well-established institutes or universities benefit from this kind of educational system, as do those who seek more flexibility than the traditional classroom provides. The barriers of time and location are absent, allowing students from anywhere in the world to learn at institutions or schools of their choice that cater to their academic demands. Even government institutions have been encouraging online forms of education, which is accelerating growth, but the model still has not reached its full potential. How it works. Students learn on their own from study materials provided by the institution, which takes some motivation, determination and discipline on the student's part. They use the materials and support provided to interact with faculty and to give and receive feedback. This creates a flexible schedule that lets students learn at their own pace, which the traditional class system does not provide. Students are then evaluated on work submitted through various study tools or channels. Types. Video conferencing - This may be an individual meeting or a virtual class session in which many students participate and interact with the faculty. It can develop a student's interaction and teamwork skills, and because students can attend from anywhere, they need not miss sessions. Synchronous learning - A type of learning in which all students gather at the same time, and sometimes the same place, to study together while the teacher is elsewhere. It is usually done through video conferencing or teleconferencing, which connect teacher and students on a digital platform. Synchronous learning is less flexible and involves more interaction.
It can be a challenge for students in different geographical locations because of time-zone differences, and students with network issues also face difficulties. Asynchronous learning - Asynchronous learning allows students to study and work at their own pace. It is more flexible and less restrained in format. Instead of a live online session where all students and faculty are present, this system sets tasks or assignments with deadlines, giving students more time to concentrate and complete the work in their own way. It is ideal for students in different geographical locations, as they can log in and study whenever they want, with no fixed time or compulsory class. Open schedule online courses - This type gives students a great amount of freedom. It is a form of asynchronous learning with fewer deadlines and no limit on attendance. Students are provided eBooks through email and a few assignments, and complete the course at their own pace. It is ideal for students who love to work independently and for learners who have other commitments and need a flexible schedule. Fixed-time online courses - A synchronous format in which students log in to a particular learning website at specified times for live class sessions. Hybrid learning - A combination of synchronous and asynchronous learning in which students attend class together at a specific time and are given deadlines to complete tasks at their own pace; some students are physically present while others learn remotely. Computer-based distance education - Students meet in a classroom or computer lab at a fixed time set by the institution, a common institutional practice. Advantages. Flexibility - Students can work independently and at their own convenience, choosing when, where, what and how they will learn. Students who prefer more live interaction and a fixed schedule can choose video conferencing, while those who want flexibility can opt for asynchronous distance education. Lower cost - Costs for travel, fuel, parking, books and accommodation are saved. Most self-paced online courses let students graduate in less time than traditional education, which means lower educational costs, and many online degrees charge lower tuition fees. No geographical boundaries - Distance education gives every student access to courses in different countries, removing the geographical barrier to the quest for knowledge. Development of skills - Time management is very important in any kind of online education.
Submitting digital assignments, tracking deadlines and attending fixed-time class sessions teach the learner time management, an essential skill for maintaining work-life balance. Distance education also improves a student's interactive skills, teaches teamwork and helps develop confidence. Adapting to people's needs - This form of pedagogy is a gift to students who must juggle work and studies simultaneously, and it also helps people with disabilities access education in comfort. As the concept gained traction, many researchers, scholars, teachers, students and others took significant interest in exploring the possibilities this form of education can offer. Disadvantages. Quality issues - Although distance education is very helpful, some have questioned the quality and validity of this type of education. The biggest reason for this preconception is the existence of online "diploma mills" that hand out fake degrees; the only way to beat the bias is to earn your online degree from a properly accredited institution (Oanca, 2020). Losing motivation - Students need focus, motivation and determination to complete a distance education course. A flexible schedule can make students lazy; they may procrastinate, and failing to complete assignments or missing an exam can mean spending more money or retaking courses. Connection issues - Distance education is only possible with a reliable internet connection, electricity and certain other tools. Students in areas with frequent power cuts or network issues may miss class sessions or submit projects and assignments late. Feedback - In a traditional classroom, teachers can check assignments and give feedback immediately, whereas submitted or emailed assignments must be checked by the professor, so feedback may not arrive as quickly as in a traditional classroom.

2 Countries in which they are prevalent At present, certain countries have adopted policies that encourage the use of technology in the field of education. Among them are the following. USA. America has seen drastic growth in online education, with over 100 full-fledged online education platforms and countless other online courses that students can access. A 2011 study by the Sloan Consortium showed that over 6 million students had taken at least one online course. Because many students favour online education, universities have developed comprehensive online master's degree programmes to meet demand and stay relevant in the field. This immense growth has influenced education institutions worldwide to emulate the model followed in the US (2020). South Korea. South Korea is another great example of the triumphant impact of online education. High-speed internet connectivity paired with a growing tech industry has enabled many South Korean colleges to offer fully online education that attracts students not only domestically but also from other countries, by expanding instruction into English and other widely spoken languages through 'smart learning'. They plan to create a synergy between online education and offline activities for the overall development and understanding of the child. India.


A study conducted by KPMG India and Google in 2017 suggests that online education could reach about 9.6 million users by 2021, a figure that could double in due course given current demand. This is because a significant portion of the population now has access to the internet, and since online education offers a cheaper alternative, many actively engage in this form of education. Understanding the potential of this market, the government has also introduced schemes and programmes to facilitate the process, among them NandGhars, SWAYAM, India Skills Online and many more, which have made a huge impact. Technology and legal education. Legal aspirants spend a tenure of five or three years at an institution, and the effectiveness of that model deserves introspection, as the course is largely theoretical, unlike the practical experimentation found in engineering and other fields of education. Law does have a moot court component, in which physical moot halls let students rehearse their learning with peers as a mock-up before stepping into the court arena. The length of the course is tied to a structure that does not strictly require a brick-and-mortar form of general education. Institutions had slowly started digitizing, but the current phase has pushed them to go fully virtual. The virtual mode is effective when students want to revisit sessions as many times as they wish for more clarity on a subject; however, adaptation and acceptance are not yet widely seen among faculty and students. Likewise, the legal industry has gone virtual, with the judiciary adjudicating cases online.
Law firms and lawyers now work from home with technology that enables the day-to-day activity they would otherwise do in a physical office. In the Western world we are seeing the rise of a technology that will compete with humans: artificial intelligence, currently a research tool more productive than humans (Goldring, 1995). Hence the legal industry will change, from education through to the judicial system. As virtual education platforms gained importance, many law universities updated themselves for this new phase; for instance, Loyola University Chicago and the University of Southern California School of Law are among the universities that offer an entirely online LLM degree (Rienstra-Kiracofe, 2019). Since legal education is often expensive, this kind of alternative could bring revolutionary change to the current system. Many law schools have also adopted online legal database platforms, reducing the burden on students of riffling through a plethora of books to attain clarity on a subject. The amalgamation of technology with law has eased the process of understanding the subject while delivering top-quality content; unfortunately, the degree to which it is prevalent is not yet significant. One fine example of the success of virtual legal education is 'The Jurist' website (Zahralddin-Aravena, 2001). The website showcased an animated version of Professor August of Washington State University. Though released on a trial basis, this virtual professor managed to break the boundaries of traditional legal education. It was developed to assist students who used Prof. August's casebooks as course material. The enhanced feature of the animated version was its ability to respond to students' questions and elevate their understanding of the subject. The success of this trial was reflected in the results of the students who adopted this form of education over the others, which demonstrates the impact of technology in the current scenario and the possibilities one can explore by applying it to the field of law. Possible Improvements AI Brings to Legal Education and the Effectiveness of an AI Tutor. For a considerable time, the idea of computerized reasoning, or artificial intelligence, has required imaginative interpretations of scientific theory to capture human cognition. As AI becomes more proactive and keeps evolving, it is being adopted in every imaginable sphere of life. It can be difficult to keep up with technological advancement, and the law itself must learn to communicate with and adjust to developments such as lawyer bots, smart contracts and robot policing; the list is never-ending. With the rise of artificial intelligence and other bot-related technologies, law schools see a great opportunity for students who are fluent in technology and in how it can be put into practice to better serve clients. There is little analogous work in the law school curriculum that anticipates learning with the use of artificial intelligence. In any case, law schools feel the need to satisfy the demands of a rapidly evolving profession and business market. AI and the Possibilities It Holds for Legal Education.
A surfeit of news reports and statistics show how Artificial Intelligence is being coordinated into an ever-increasing number of features around the globe every year. Statistical surveying organization Statista ventures that incomes from AI will develop from $1.62B in 2019 to over $31B by 2025 (Insights). Daniel B. Rodriguez, Former Dean of North-western School of Law, has stated that law students should be well connected with technology, especially Artificial Intelligence which will build a more constructive and fruitful path, which will bridge the advocate and the consumers seeking legal advice or counselling. The magnitude of bringing equity and justice becomes more accessible with the help of technology. Computerization along with AI has both microscopic and macroscopic levels of influence on the legal profession. Law Schools are starting to inculcate imaginative arrangements to their educational programme so that the students would be able to tackle such situations when technology escalate further ever than before (Zimmerman, 2018)


Any attorney who has ever done research using Lexis or Westlaw has used legal automation. With technology at our fingertips and Artificial Intelligence, however, legal research is taken a step further. For instance, ROSS Intelligence uses the power of IBM’s Watson supercomputer to discover comparable cases, and it can even respond to queries in plain English. The power of AI-enabled research is striking compared with conventional research techniques: a bankruptcy lawyer had spent hours working through the issues in a case when ROSS AI discovered the answer in less than a minute (A Primer on Using Artificial Intelligence in the Legal Profession, 2018). When a person seeking legal counsel is under a time constraint, the burden of consulting a lawyer is more pressing than ever. For lawyers on the billable hour, time is literally money, and the rise of alternative fee arrangements makes a lawyer’s time even more valuable. One major aspect that consumes a great deal of time is legal research and the background checking of a client’s case. Converting notes and other raw information into something potent and useful can be hectic work, and this is one of the areas where AI can be of help, by storing e-content or providing an analysis of the facts supplied, making things more accessible and within reach. Another aspect of AI that never fails to fascinate us is how precise the work of an artificial intelligence can be. When lawyers or legal researchers use an AI tutor or programme, the AI knows what kind of information it should gather; it discards the unnecessary documents and produces content and context data-mined from thousands of documents in very little time, whereas it is impossible for a human lawyer to skim through that many documents.
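ROSS’s internals are proprietary, so the following is only a minimal sketch of the general technique such tools rest on: ranking stored cases by their textual similarity to a plain-English query. All case names, snippets and function names here are hypothetical, purely for illustration.

```python
import math
from collections import Counter

# Toy corpus standing in for a case-law database (hypothetical cases).
CASES = {
    "Re Smith (bankruptcy)": "debtor filed for bankruptcy and the creditors disputed priority",
    "Acme v. Beta (contract)": "breach of contract claim over late delivery of goods",
    "State v. Doe (evidence)": "criminal appeal on the admissibility of forensic evidence",
}

def tokenize(text):
    return text.lower().split()

def build_idf(docs):
    """Smoothed inverse document frequency over the corpus."""
    n = len(docs)
    df = Counter(term for text in docs.values() for term in set(tokenize(text)))
    return {t: math.log((1 + n) / (1 + df[t])) + 1 for t in df}

def vectorize(text, idf):
    """TF-IDF vector; terms unseen in the corpus get weight zero."""
    tf = Counter(tokenize(text))
    total = sum(tf.values())
    return {t: (c / total) * idf.get(t, 0.0) for t, c in tf.items()}

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def most_similar(query, docs):
    """Rank stored cases by textual similarity to a plain-English query."""
    idf = build_idf(docs)
    q = vectorize(query, idf)
    return max(docs, key=lambda name: cosine(q, vectorize(docs[name], idf)))

print(most_similar("my client is a debtor in a bankruptcy case", CASES))
# prints: Re Smith (bankruptcy)
```

Production systems add far more (learned embeddings, legal-specific language models, citation networks), but the core retrieval step, matching a natural-language question against a large document collection, follows this shape.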
Precedents are always important when citing a famous or landmark case or when establishing similar circumstances. In such cases Artificial Intelligence can recommend a case guide loaded with data-driven quotes for different parts of the cited case, and the parameters could be the decision proclivity of a judge, the strategies of a particular law office, or the prior legal preferences of an organization. The internet, along with e-documents, is like a gold mine, and lawyers can often find more material to underpin their case with the help of AI, since it can filter the adequate information from millions of data points within minutes. As technology grows day by day and proves to be a vast platform, the pressure to meet the stipulations of the legal field becomes more pronounced. Seen from this perspective, AI will significantly change the way the legal industry itself operates. AI as a Personalized Tutor. One of the meritorious features of Artificial Intelligence is that it can be personalized and customized. One individual’s pace and method of learning might differ from another’s. As AI is a technical form of intelligence, it can carry out its functions using both verbal and non-verbal communication. Over time, the AI might collect and read through thousands of texts and documents, gaining the ability to answer the user’s questions. Another


AI & Glocalization in Law, Volume 1 (2020)

point that has to be highlighted regarding the use of AI as a personalized tutor is that it can be used around the clock. Every person has a different pattern and time of studying: some might be comfortable learning at night, others in the morning. At times when we cannot approach a professional for help, AI comes in handy. An AI tutor also provides interactive sessions that stimulate the user’s interest. Individuals who are uncomfortable with voice interactions can learn through chatbots instead. While learning with an AI, the user can also learn much beyond the scope of what he is looking for, and the exposure he gains is impressive. How effective the AI will be is therefore subjective, depending on whether the user wants to explore further. How the AI can be customized, and how it guides the user by providing the right knowledge, is debatable. Every AI relies on a large set of calculations that determine how to act in a specific condition. Machine Learning processes also have a great deal to contribute, because through this field of innovation the virtual tutor is trained to gather data and learn how to teach better. This kind of advancement has supported the success of AI in education, giving students current and innovative tools. Examples of AI in the Current Education System. When a lawyer handles a case, he has to go through the facts, understand what it is based on, run a thorough background check, and be able to analyse the legal issue or situation. This due diligence is required for insightfully counselling clients on what their options are and what steps they should take. Illustrating the significance of due diligence, former lawyer Noah Waisberg founded the company Kira Systems.
One of the reasons he founded Kira Systems was that he felt many lawyers made errors in the due diligence process (AI in Law and Legal Practice - A Comprehensive View of 35 Current Applications., 2020). Kira Systems states that its product is capable of performing a more precise due diligence contract review by searching, highlighting, and extracting relevant content for analysis, which is valuable for novice lawyers and law students. Colleagues who need to perform multiple reviews of the content can search for the extracted data with links to the original source using the product. The company asserts that its system can complete the task up to 40 percent faster when used for the first time, and up to 90 percent faster for those with more experience. Moving on to a much more famous and widely used AI in the legal field, ROSS Intelligence uses natural language to search for and provide legal information, from citations to full legal briefs. ROSS further improves the quality of its search engine by implementing a term-fixing feature (ROSS Intelligence, 2020). This technology allows AI systems to take ordinary language and spoken words and transform them into concepts, entities, and structured data. The scope of AI is limitless, and we can achieve more when it is used in the right way.
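Kira’s contract-review models are trained on lawyer-annotated documents and are proprietary. Purely to illustrate the search, highlight, extract and link-to-source workflow described above, a heavily simplified rule-based sketch (with a hypothetical toy contract and hand-picked trigger patterns) might look like this:

```python
import re

# A toy contract (hypothetical text). Real systems use trained ML models;
# this rule-based sketch only illustrates the extract-and-link idea.
CONTRACT = """1. Term. This agreement commences on 1 January 2021.
2. Indemnity. The Supplier shall indemnify the Buyer against all losses.
3. Termination. Either party may terminate on 30 days written notice.
4. Notices. Notices must be sent to the registered office."""

# Clause types of interest mapped to trigger patterns (assumed for the demo).
PATTERNS = {
    "indemnity": re.compile(r"\bindemnif\w*", re.IGNORECASE),
    "termination": re.compile(r"\bterminat\w*", re.IGNORECASE),
}

def extract_clauses(text, patterns):
    """Return matched clause types with their source line numbers, so a
    reviewer can jump back to the original text (the 'link to source')."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for label, pat in patterns.items():
            if pat.search(line):
                hits.append({"type": label, "line": lineno, "text": line.strip()})
    return hits

for hit in extract_clauses(CONTRACT, PATTERNS):
    print(hit["type"], "->", "line", hit["line"])
```

The learned models replace the hand-written patterns, but the output shape, a clause label plus a pointer back into the source document, is what enables the faster, linked review the text describes.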




Technology, from its inception, has been developed and enhanced to simplify human life. Artificial intelligence in particular is deeply incorporated into and utilized every day in all the modern activities of an individual. In the educational sector, traditional teaching mechanisms have now witnessed a wide shift toward fully virtual learning on online platforms. With advancements in Artificial Intelligence, it can be used as a teaching instrument in areas where teachers’ reach to students is limited by an impediment of any sort. By augmenting AI with machine learning and virtual reality, it can potentially evolve into an effective tutor, providing users a personalized interactive mechanism. In a theory-based field such as law, a personalized AI tutor, available anytime and everywhere, can be a potential game changer. Access to law will become easier in terms of knowing the law, and justice can also be leveraged through it, which may amount to the next judicial revolution. A tutor powered by Artificial Intelligence would be the next milestone in online education, as online teaching will soon become the norm and the National Education Policy 2020 is expected to change tack, making education more work-centric and oriented toward practical learning for next-generation requirements.

References
1. 8 countries leading the way in online education. 2012. ICEF Monitor, Market intelligence for international student recruitment [online]. [Cited: August 28, 2020.]
2. Donahue, L. 2018. A Primer on Using Artificial Intelligence in the Legal Profession. Harvard Journal of Law & Technology [online].
3. Faggella, Daniel. 2020. AI in Law and Legal Practice - A Comprehensive View of 35 Current Applications. Emerj [online].
4. Goldring, J. 1995. Coping with the virtual campus: some hints and opportunities for legal education. Legal Education Review, 6(1), 91-116.
5. Insights. [n.d.] AI and the Future of Legal Work: A Good Thing for Law Firms | Insights [online].
6. Automation Applications for Law Firms and Legal Depts [online]. [Cited: August 22, 2020.]
7. Oanca, A. 2020. 5 Pros and Cons for Studying a Distance Learning Degree [online]. [Cited: August 28, 2020.]
8. Palvia, S., Aeron, P., Gupta, P., Mahapatra, D., Parida, R., Rosner, R. and Sindhi, S. 2018. Online Education: Worldwide Status, Challenges, Trends, and Implications. Journal of Global Information Technology Management, 21(4), pp. 233-241.
9. Rienstra-Kiracofe, C. 2019. Legal Education in the Digital Age: Online Degree Programs. Journal of Law, Business & Ethics, 25, 25-30.
10. ROSS Intelligence. [n.d.] What is AI - ROSS Intelligence [online].
11. University of the People. 2020. What Is Distance Learning: The Benefits Of Studying Remotely [online]. [Cited: August 22, 2020.]
12. Zahralddin-Aravena, R. X. 2001. International legal education. International Lawyer.
13. Zimmerman, E. 2018. Why More Law Schools Are Prioritizing Technology Integration. EdTech: Technology Solutions That Drive Education [online].


Transcripts of the Conference



23 Transcript of the Research Panels of the Conference
Akash Manwani, Chief Innovation Officer, ISAIL
Anjana Dhital & Vanya Dhawan, Associate Editors (Legit), Internationalism
Ponguzhali, Associate Editor (Legit), Internationalism

Research Panel 1 - Decrypting AI regularization and its policy dynamics

Panel discussion with Prof. Christoph Lütge, Luna de Lange, Sébastien Lafrance, and Bogdan Grigorescu: Decrypting AI regularization and its policy dynamics. Mr Sushant Samudrala was the Moderator. [You can watch the Panel broadcast at]

Owing to the development of Artificial Intelligence as a constant life companion of people as well as of most professions around the globe, it is highly important to come up with some sort of legitimate structure and approach concerning AI. This panel discussion therefore seeks to understand AI regularization and the need for dynamic policies to address it, based on the viewpoints of experts in the sector. The theme of the discussion. The world is at a crucial point in the development of AI and AI ethics. A large number of substantial developments in AI have been observed in the last few years, and there remain many unknown unknowns in AI systems. There are issues of the right to privacy, security, intellectual property law, consumer trust, and various other fundamental human rights associated with AI. Thus, for the genuine, golden progression of AI, the technology has to be made secure through the initiation of new regulations and ethical guidelines.


What are the ethical risks of AI, specifically from a business perspective, and how can they be managed? There are several ethical risks associated with AI, some of which have recently turned up in the press as well. There are risks of damage or even fatalities. Other risks exist too: in terms of discrimination, AI systems might systematically discriminate against certain parts of the population for different reasons; in terms of security, there is hacking into systems, and risks of increased invasions of privacy as well as surveillance-related risks. Regarding risks from a business perspective specifically, if an AI kills people, that damages the reputation of the company that manufactured it; likewise, if an algorithm for determining creditworthiness turns out to be systematically discriminatory, that is a risk to the company’s reputation. As for regulation, even in the EU there is no effective AI regulation yet. There is the GDPR, but no AI regulation, and deliberation on the issue is ongoing in the EU administration. In its absence, it is a huge responsibility of companies to come up with their own rules by which these systems are governed, to gain the necessary consumer trust and to make AI trustworthy. There has also been a substantial increase in legal fines in many other areas for discrimination and other wrongdoing; for example, Volkswagen had to pay almost 30 billion euros over the Dieselgate scandal. That is why, even in the absence of regulation, companies act on their own to come up with ethical guidelines, tools, and other aids, because it will help them, their products, and their sales. Privacy is one of the key human rights impacted by AI. Many other human rights are touched upon, such as freedom of expression, but one of the key concerns is the invasion of privacy through the technological ways of dealing with things that artificial intelligence offers and makes available.
The lack of regulation and the inequality between countries are further issues. As mentioned earlier, the European Union has the GDPR, while the United States does not have regulations specifically dedicated to artificial intelligence. Likewise, the Chinese system is not regulated per se. However, it is reported that by 2025 China will have seen the initial establishment of artificial intelligence laws and regulations, ethical norms, and policy systems. For now, what China has are a few documents that address these issues, but they are very vague and do not directly address human rights. China is not alone here; there are other countries too. The United States has invested heavily in artificial intelligence, but there is still a big gap in terms of regulation. Artificial intelligence does not care about borders, the nation-state system, or, to use political science terms, the Westphalian system. Should we, and can we, have international regulations at some point? Well, ‘there’s many a slip twixt cup and lip’, as Shakespeare would say, and it applies to the field of artificial intelligence as well.



On top of it all, there is the problem of domestic regulation and laws as opposed to international laws and regulations. So, where we are right now is still evolving; the question should be looked at as soon as possible, and many more efforts should be made by all countries and institutions. The EU, in this regard, has taken more steps than China or the US. Highlights on legal tech AI products and the kinds of risks associated with them? By definition, legal technology is the use of technology and software for the provision and support of modern legal services in the legal industry. These are disruptive technologies that have changed, and continue to change, the traditionalist approach to conducting legal practice, which until now has remained very conservative, with lawyers being dubbed paper pushers. Legal technology can support traditional practice services like practice management, assist remote working (whether someone is travelling or whether some kind of strict restriction is in place, as with the present COVID-19 situation), and help with the electronic discovery of documents, billing, accounting, and the scheduling of appointments and consultations. At the same time, legal technology may come to perform the traditional roles that paralegals or researchers conduct within a law firm, which is the scary truth, and that is why we need to advance ourselves and become relevant to the age we presently find ourselves in. Further, the legal technology being employed includes lawyer-matching services on various platforms, where consortiums of legal practitioners come together and one can go on the internet and choose a relevant practitioner for assistance, and it is eliminating services that were previously performed by lawyers, like document verification at the property transfer registry.
The American Bar Association has leaped forward and promulgated a rule for all members of the legal profession in its Model Rules of Professional Conduct, whereby lawyers are mandated to keep abreast of technology and to understand both the benefits and the risks of such technologies. Coming to biases in legal technology, it was discovered in Florida that a technology had found persons of a certain race and colour guilty owing to the biases built into that system. When investigators had to find out why this had occurred, they did not even have to go into the algorithms themselves, because the biases had been learned from judgments handed down by previous judges in that jurisdiction. That raised the question of whether those biases are correct to apply. There will be biases integrated into such technology and its algorithms; however, they must be correct and fair biases.


We have also discussed the risks of exposure of personal data, reputational and cyber-related risks, AI terrorism, and the identification of certain correlations, for example where legal technology marks a low-income earner as automatically being at greater risk of, and bearing a greater propensity to commit, some other type of crime. Legal technology, from an Indian perspective, has been implemented by the court, with announcements made on Indian Constitution Day, 26 November 2019, of the launch of a machine translation tool so that judgments and orders handed down by the court can be automatically translated into nine vernacular languages. This is one of the positives that we can see from the introduction of legal technology into the field of law. There is a fundamental lack of morals, and thus we need a critical method that targets principles such as fairness, accountability, sustainability, and transparency. The application of these principles may vary, but they ought to be the pillars of the framework for an AI. An AI must be designed to facilitate end-to-end answerability and auditability. We need people spread out across the entire design and implementation chain who can be held accountable, along with activity-monitoring protocols to review and observe the whole system. There are two big challenges to be addressed in the public sector, namely the accountability gap and the complexity of the production process of AI systems. The accountability gap occurs because automated actions cannot justify themselves: the hardware and software composing them are not responsible in the same morally relevant way as people. This is why we need human agents who can be held accountable for the actions that affect the lives of others. It would help to have clear, imputable sources of human answerability attached to decisions based on, or assisted by, output given by AI systems.
However, establishing human accountability is not easy, owing to the complexity of the production process, which leads us to our second challenge. The development process of AI systems involves multiple agents, several algorithms, and many moving parts, such as delivery leads, department heads, technology experts, data procurement and data preparation people, policy and domain experts, implementers, and change managers, along with a few others. This makes the landscape very complex on both the human side and the technical side and renders it difficult to determine responsibility. The output given by the algorithm, and the process behind it, should be explained by the right human authorities in a language everyone can understand. The AI system must be accessible for oversight, and proper records of the data-monitoring process should be made transparent to the right authorities. With the legal framework on one side of the spectrum and innovation on the other, intellectual property rights would be one of the key areas to be assessed, and the norms of intellectual property must be defined. In a firm statement by the United States Patent and Trademark Office (USPTO), an AI cannot be an inventor in itself. Artificial General Intelligence has not been achieved, even though every intelligent technology is a high-end machine in the learning process.



The root cause of the issue is that people do not own their data. Data about people, such as their device type, location, and so on, can be collected without their even being aware of it, and they have no control or say over where the collected data goes. Questionnaire.

1. What do you think are the policy restraints in AI regularization, and how do you think we should tackle them? The main problem is that there are no borders with AI, resulting in uneven regulation across regions. For example, the USA and China are not very regulated, for various reasons, whereas the European Union (EU) is working on updating the General Data Protection Regulation (GDPR). Ideally, to tackle this challenge there should be internationally binding legal regulations that would help provide basic data protection. This is important, as security issues such as these put democracies at stake. Data protection and privacy are seriously challenged; in pure capitalist economic systems, access to private data can be waived off. The heart of the solution lies in securing basic human rights. However, as it is not yet possible to achieve this at a global level, the problem can at least be tackled at a domestic or regional level.

2. In the absence of a legal regulatory framework, how effective do you think ethical regulations are, and what kind of ethical regulatory framework should be adopted so that we can start trusting technologies like autonomous vehicles or AI-based education and healthcare? The present normative culture and resources are important in dealing with the issues of AI ethics, but they are not the entire solution. In the last two years we have come across many high-level abstract frameworks across the world, but it is time for a different approach. We have to work together with developers and startups to reach a concrete level. Companies are starting to recognize the ethical aspects of AI, and we need an interdisciplinary approach in which people from the technical field as well as people from the social sciences and law work together effectively.

3. What do you think would be an appropriate legal framework for AI, especially keeping in view the innovation aspect? Transparency is a very important consideration, followed by fairness and accountability. These should be integrated into the design of the system itself and not implemented later as an afterthought. Future-proofing such technology is also important, and we need strong data protection laws, regulations on technology, and cyber resilience. Machine learning technologies and predictive algorithms depend largely on the quality and type of data, and thus ensuring quality data becomes a distinctive responsibility in itself. One of the principal things to keep in mind


is to not go so far as to let AI gain consciousness, so that it does not impede human ethics.

4. Do you think it is still premature for AI regulations, and are we approaching these regulations in the wrong way? The point is that we now have systems that penetrate almost every aspect of an individual’s life, and people are not even aware of it. Regulation of AI is on the back foot right now, partly because the pace of change in automation is faster than ever in the history of humankind. The law will take time to evolve, owing to the many moving parts involved, but regulation is not premature, because we desperately need these laws, and the sooner, the better. For example, fake news can have a huge impact on lives, and human rights are affected as well. So it is imperative to have regulations now, but they should not be so bureaucratic as to stifle innovation; at the same time, they should be thorough and enforceable enough to prevent, reduce and mitigate the majority of issues brought about by the AI domain. Conclusion. Artificial Intelligence has made geography history. AI regulations are therefore of immense importance, and a legal framework would go a long way in addressing the issues and challenges posed. We cannot tackle 21st-century problems with a 20th-century mindset. Because of the advancement of AI and its increased use in various sectors around the world, there is a clear need to develop sets of regulations and ethical norms to govern its operation, as AI touches on human rights, intellectual property law, consumer trust, and security issues. Not just domestic regulations but international measures are equally necessary here.

Research Panel 2 - Algorithmic Diplomacy, Geopolitics and International Law: A New Era

• Abhivardhan - Moderator [Founder, Chairperson & Managing Trustee of the Indian Society of Artificial Intelligence & Law]
• Akash Manwani – Co-moderator [Chief Innovation Officer, Indian Society of Artificial Intelligence & Law]
• Nicol Turner Lee - Expert Speaker [Director - Centre for Technology Innovation, Brookings Institute]
• Bogdan Grigorescu - Expert Speaker [AI - Platform Manager]
• Emmanuel R. Goffi - Expert Speaker [Director - Ethics & Artificial Intelligence Observatory, Institut Sapiens]
• Roger Spitz - Expert Speaker [Founder and CEO of Techistential, Chairman of Disruptive Futures Institute]


• Eugenio Vargas Garcia - Expert Speaker [Diplomat, Brazil Foreign Ministry & Senior Advisor, United Nations General Assembly]
[You can watch the Panel broadcast at]

Akash Manwani – Co-moderator We are beginning with this programme and session. First of all, good morning, good afternoon and good evening to all the panellists and our international viewers. I am so glad that we are able to make this work from the comfort of our homes. The topic is pretty interesting today: “Algorithmic Diplomacy, Geopolitics and International Law: A New Era”. This topic calls for discussion on many aspects of the myths and facts of artificial intelligence, algorithmic diplomacy, information warfare, algorithmic decision-making and so on. To do justice to this topic, we have an amazing set of panellists with us: diplomats, founders, experts and researchers. I must say that I had the opportunity to invite each and every one of the panellists, and I can confirm without hesitation that they are some of the nicest people. The questions they have put forth have helped us channelize the nature of this discussion, and I think it will be really great. I have my Indian Evidence Act exam tomorrow, but I told myself there is no way I am going to miss this panel discussion. I think we can now go forward; Abhivardhan, if you can just introduce our panellists, we can begin with the discussion. · Abhivardhan - Moderator Thank you so much, Akash. I would like to introduce our esteemed panellists today. I agree with Akash that it is a very diverse panel; it is one of the most premier panels of the Indian Conference on AI and Law. On this panel we have Mr. Roger Spitz, the founder of Techistential, an AI investor and a very interesting luminary. Next we have Mr. Emmanuel Goffi, who holds a PhD and is the Director of the Ethics & Artificial Intelligence Observatory at Institut Sapiens. From the Brookings Institute we have Nicol Turner Lee, who is a Senior Fellow and the Director of the Centre for Technology Innovation at Brookings.
We have Ambassador Eugenio Garcia, who is a Diplomat at the Brazil Foreign Ministry and Senior Advisor at the United Nations General Assembly. We have Mr. Bogdan, who is an AI platform manager. Alongside me is my co-moderator Akash Manwani, a Research Analyst at Internationalism; as for my introduction, I am the Chair of the Indian Society of Artificial Intelligence & Law (ISAIL) and the President of the Indian Conference on AI & Law, 2020. So let us begin with this session; we shall ask the questions and, at the same time, deal with the concerns of the audience as well. I am sure it will be an amazing session, Akash, so let us begin and take the remarks forward. The questions will be very basic, but we will try to reach a very interesting understanding. Most of the time we will ensure that all the questions are dealt with, but if, for example, that does not happen, then you (the audience) can mail us your questions.


Firstly, I have a question for Mr. Roger Spitz. Sir, when we discuss the idea of algorithmic diplomacy, it is actually a very nascent idea because it has not developed technically. Algorithmic politics, algorithmic trading and related technologies have made developments, but a lot of work still needs to be done. So the question I have is this: a major part of algorithmic diplomacy would depend on algorithmic decision-making, and it is not long from now that AI will play an active part in boardrooms and company management. So what do we learn from algorithmic developments in strategic decision-making, and how do we remain relevant in the future? · Roger Spitz - Expert Speaker Thanks, Abhivardhan. Of course, there is a debate over the semantics of AI, how pervasive AI is today, and its ability to make decisions. I try to look at it simply in terms of a value chain of decision-making. In the past, machines were typically used for optimization: to automate processes, accelerate repetitive tasks and so on. But if you look at the value chain, which moves towards augmentation and also towards tackling human-mandated domains such as creativity, even drafting legal contracts or negotiations, you can see that little by little AI is moving up that value chain. I think that is undeniable, whatever the debate about the nature of AI and whether it is AGI or what have you. There is no doubt that, in terms of the value chain, AI is having an increasing role in decision-making, and in my opinion that journey will continue irrespective of its ultimate outcome; it is a journey that reaches increasingly into what have been human-mandated domains of decision-making.
Therefore I think what humans need to do to stay relevant is really understand where humans bring value, and I am very much in agreement (it is difficult not to be) with Stephen Hawking, who considered the 21st century to be the century of complexity. One of the features of complexity is unknown unknowns: there is no direct causality, or at least none that is evident, and there is not necessarily a range of right answers. So data and AI are less valuable in a complex domain than in a complicated domain, where at least there are known unknowns, a range of right answers, and more transparency and understanding of causality. Within that framework, I would say it is essential for humans to be more agile in the sense of understanding entire systems, understanding that cause and effect are not that predictable, and understanding the interdependencies of the moving parts; then following emergence in terms of their own behaviour, to know where to amplify or dampen in order to move in the right direction, experimenting, tinkering, and using innovation so that instructive patterns can emerge. I think that emergence, in particular the agility to emerge in complex systems, is one area where humans have an edge, but to do so we need to acknowledge where our value is and that today we are not necessarily good at being agile, at complex decision-making, or at emergence. Therefore, I think there is a need for humanity



to understand, in terms of the education system, leadership teams, and the way companies and organizations operate, what it means to be anticipatory, to be agile and to be anti-fragile, and what it means to develop and upgrade those capabilities when you are competing against "AI" that is continuing its journey into decision-making. · Akash Manwani – Co-moderator Thank you so much sir. I completely agree that the AAA formula, anticipatory, agile, anti-fragile, is the way forward, and that we as humans need to find our value. Getting into the crux of the debate: there is no multilateral wing at the international level when it comes to algorithmic diplomacy or AI, and I think Mr. Eugenio Garcia would be the best person to answer this question. Sir, my question to you is: at the outset, in your opinion, why is there no independent multilateral body on Artificial Intelligence or algorithms under the aegis of the United Nations? And if there were a future International Artificial Intelligence Organization (IAIO), what would it look like? What could or should be its objectives, goals, scope and path? Over to you sir, thank you. · Eugenio Vargas Garcia - Expert Speaker Thank you for having me. Yes, AI is a general-purpose technology with multiple capabilities and a likely long-term impact across the board. Against that background, I believe AI can pose risks that need to be prevented or mitigated, so setting standards, norms and benchmarks to mitigate these risks is one possible response to the negative side of the technology. It is not just a question of doing good, or promoting responsible behaviour among States, or protecting the weak from the dominance of the powerful. At the end of the day, it is a matter of negotiating disagreements, navigating uncertainties and finding common ground to build rules accepted by all, and minimal standards to safely develop the full potential of AI. So yes, there is no multilateral body on AI right now, for a number of reasons.
Technology has been evolving too fast; we do not know exactly what should be regulated, and national legislation is still being discussed in many countries, so how do you engage in international negotiations on AI if your country has no policy or strategy on AI itself? Overall, the debate is beginning to take shape, but there is too much work to do, so it is probably too early for an international organization devoted exclusively to AI, and if you try to be too ambitious from the very beginning you can hit a wall pretty soon. Of course, we have a very challenging international situation today, and people are sceptical of any substantive agreement among the great tech powers. We face scenarios of growing competition, interstate rivalry, AI nationalism, decoupling and so on. This absence of forms of governance, together with mistrust towards the multilateral system, can make it much harder for the international community to cope with today's challenges and reach a workable international consensus. So this idea seems to me a rather remote possibility in the short term, but it is important to continue exploring alternatives and to take concrete steps in AI governance in a robust and comprehensive manner.


So you ask: what if sometime in the future there is an international AI organization, and what should its role be? Well, ideally it could be an international regulatory agency to promote cooperation, provide oversight, set standards, and guide and implement AI policies globally, so as to promote safe, beneficial and friendly AI, both in the short term and in the long term. Again, how do you get there? How do you strike a balance between soft law, for example principles, recommendations, flexible cooperation and agreed arrangements, and binding commitments, norms that are supposed to be mandatory for states and other stakeholders? You can say that signing an international treaty is too difficult, that agile governance is much easier, so let us just work with existing soft law and that should be enough. But many people respond that if you only have soft law, there is no guarantee that these principles or recommendations will actually be implemented at the national, regional and international level. Just to conclude this point: I think one of the key challenges is to keep the major players engaged, so that AI governance can be instrumental in providing stability and safeguarding the system in everyone's interest. Before moving to an international organization, maybe we could think of first establishing an intergovernmental panel on AI. As you know, the UN has an intergovernmental panel on climate change, the famous IPCC, which plays an extremely important role in giving scientific advice on climate change.
What if we decided to create a similar body on AI, to provide policy makers with technical assessments outlining the opportunities, the implications and the potential risks of AI? That is an idea we could perhaps explore and discuss further. As technologies become more and more sophisticated and difficult for the public to understand, such an intergovernmental panel would play a key role, for example in giving sound technical advice; and for international negotiations to be launched, we need to separate myth, hype and misinformation. Thank you. · Abhivardhan - Moderator Thank you so much sir. These are really valuable insights into the practical realities at the international level. In that regard, I would like to ask Nicol a special question regarding the vices of algorithmic autonomy, because we know that the very nature of Artificial Intelligence in practice is that it is very disruptive, and the vices of deception obviously exist. We have come across several vices of an algorithm, and I can name quite a few of them. But algorithmic autonomy is, I think, a very important aspect that countries, the EU especially and the Americans, are proposing for themselves: is it possible that algorithmic accountability can have a positive impact in the realm of international relations? · Nicol Turner Lee - Expert Speaker Good morning everyone, and I bring greetings from the Brookings Institution, where I am a Senior Fellow in Governance Studies as well as the Director of the Center for Technology Innovation. I think that is a really interesting question, whether algorithmic accountability in the public sector and in international relations can have a positive effect. I would suggest, particularly given the work that we do at Brookings, that technology in general can have an either-or effect, and in many respects that is why we are having this conversation today. Technology has the ability to enhance, to become much more efficient in its practice, and to create the kinds of relationships that span borders; but at the same time, technology can be harmful too. It can have vices: it can liberate while at the same time oppressing, or create systemic biases that keep people contained in certain realities, whether it is the use of technology in elections to expand freedom of speech, or the use of technology in elections to stifle people's ability to vote, or to create misinformation or disinformation that essentially restricts people's ability to exercise a democratic right. With that being said, I want to start with some of the positive uses and then end my comments on where we have to really rein in these vices. Since we are talking about international considerations, first and foremost we have seen technology become very useful in solving problems around climate change, national security, educational issues and concerns, commerce and the portability of data. The new innovations we are seeing in AI, which, as someone mentioned, is a general-purpose technology, have allowed us to do things that we as humans could not do at such an accelerated pace, or without the precision of decision-making that a computer system brings, and those technologies will continue to advance what I believe are public-good applications that lend themselves to even greater causes like poverty, food insecurity, health, housing and so on.
Those are areas where I personally believe in, and we at Brookings have been studying, the magnificent fortitude of the technology. But at the same time, those technologies can often engender the same types of biases and discrimination, taking the question of solving climate change and turning it toward surveillance of communities and land for the purposes of systemic discrimination or inequality. A great example that I will share for the United States is the use of algorithms in criminal justice applications. We know that judges may have high workloads when it comes to high-risk decision-making, but the use of risk assessments, without accountability for those algorithms, can actually err on the side of further criminalizing or over-criminalizing certain populations, because the data itself starts out flawed. When I talk about algorithmic accountability, I always suggest to people that it is not the computer system that in many respects creates the biases; it is all of us on this call, who bring a set of norms, values and assumptions about the world that feed into those systems. This also has international implications: how we look at the design of AI comes with our lived experiences, based on where we are located in the world, where we are located in the context of the problem we are trying to solve, who we are, and how we want to exert power and influence in those models. These are real issues, and I know we're going


to talk about this coming up, but the reason we call these vices is that the very things we design to help us accelerate the speed of thought have the ability to be weaponized against certain populations, and the ability to box populations in. Going back to that criminal justice algorithm we talk about in the United States: it is great that judges have the precision of a computer model to assess the threat to society posed by individual defendants, but it is not so great when the majority of those defendants are black and brown people, who tend to be over-represented in the data sets the model is drawing from. So we have to constantly ask ourselves: can we use a technology for public good when there are these inherent vices, these inherent insufficiencies, these inherent biases that in many respects may void out the good? I always tell engineers that they void out, or silence, the good simply because we have not dealt holistically with how to address the inherent biases that come from the societies in which we live. I will end there, but I am looking forward to the conversation, because if we do not couch these regulatory frameworks for how we proceed with these technologies in civil and human rights, I fear, my friends, that we will have very different sets of norms and values around algorithms, around machine learning algorithms, and around AI systems in general. We will be much more separated and polarized in our applications of what is the right context of use, which in the end can be weaponized against certain populations. So, very happy to be here, and thank you. · Akash Manwani - Co-moderator Thank you so much ma'am. Those are really powerful views and observations. We appreciate it, and we would like to continue this conversation; I guess that is the whole object of this discussion.
Yes, there are vices to technology, but again there is a positive side too: as you said, technology has had a great impact on commerce, education, climate change and so on, and maybe for all of us in this pandemic, the only silver lining is the rapid adoption of digital tools. Moving forward on the same lines, considering that the adoption of digital tools has literally skyrocketed, I would like to ask a question of Mr. Emmanuel Goffi. Sir, in light of this, what is responsible AI, and what role would it play in developing trust in algorithms for sensitive matters like geopolitics and international relations? Could we ever trust algorithms with these things? · Emmanuel Goffi - Expert Speaker That is a really tough question for a philosopher; obviously there is no easy answer to this kind of question. The first thing I want to stress is the use of those words you mentioned: responsible AI, trustworthiness. The first thing I want to say, especially when it comes to international relations, is that trustworthiness does not mean the same thing for you as it means for me. Maybe there are some interconnections, but in some countries trustworthiness does not have the same sense as it does here in Europe or even in North America. The same goes for responsibility: we all



know lots of countries that, from our side, would be labelled as irresponsible. So responsibility and trustworthiness are really complex. The second point I want to stress is that you cannot apply those words to a technology. Especially here in Europe, I have heard a lot about developing this trust. My concern is that you do not trust AI; you trust the people who are developing AI, and you trust the people who are using it. It is just a tool, unless at some point AI becomes fully autonomous and is able to make decisions by itself, and so far that is not the case. There is always someone behind those algorithms: a programmer, a user, a lot of people who can be held responsible for what is going on with the algorithm. So I think it is really important to separate those two words, trustworthiness and AI; pairing them does not make sense in my opinion. I have a strong stance that trustworthiness, and words like responsibility, explainability and transparency, are often mere marketing tools. We are just trying to make people feel better, feel more comfortable with AI. But I would ask all of you watching this today: is trust something that is really necessary? Democracies, for example here in France, and I lived in Canada for a while, are not based on trust; they are based on distrust. There is plenty of evidence that the political system we have here in France is based on the fact that we cannot fully trust each other, so we have to put safeguards in place to make sure that everyone will be able to benefit from all that is available in society. So trust is not something I take for granted.
I would say that yes, I really have to be vigilant about what I am doing, about what kinds of tools people are developing, and about the people who are using those tools; but the tool itself does not matter, technically speaking, so far. Once again, we are not yet in a situation where we have autonomous AI able to make its own decisions based on its learning. At the international level, this is a tough issue, because when you look at the way people understand those words, you will see that there are lots of differences. I want to move to another point, directly related to something I am working on at the Observatory on Ethics of Artificial Intelligence here in Paris, where I work with the Identity School for Technology, Business and Society. We are really interested in a bias that has never been tackled at all. We hear a lot about gender biases, scholarly biases, or social-status biases, but there is a bias at the international level, which is the fact that most of the ethical debate is influenced by Western thought: not by India or Africa, not by China, and not by South America either. This is a big, strong bias, and we are all using those words because they were developed by Western countries. Look at the European Union: we have this kind of wording in our rules, and you find exactly the same wording regulating AI in North America. That is odd, because we do not share the same culture. You will find exactly the same words in Chinese documents; there is one that was issued quite


recently, which is about AI and children, and if you look at those codes you will find it really striking that the words, the sentences, almost everything is the same, with some differences. I cannot understand that, because we do not share much culture with China; I feel there is a kind of strange use of those words, and they do not resonate the same way as they do here in Europe. So there is a strong bias at the international level that we really have to address quite quickly, and we are working on that, once again, at the Observatory. We will launch a big study to see how this debate can be renewed using different wisdoms, different cultures, different perspectives, different religions from around the world, to see what those different traditions can bring to the discussion. So far, there is a big bias in how we tackle ethics. I would say this bias is even greater because, even if you look at the way we deal with ethics here in Europe, in the Western world I mean, it is only through the three main theories: virtue ethics, consequentialism and deontology. There are lots of other ways to see it; let us just mention, for example, the ethics of care of Carol Gilligan, or the ethics of responsibility. Lots of people have written on these, and we do not even look at them. We do not even study them: most people who study Aristotle will also study Jeremy Bentham or Kant, but they will not look at philosophers from other countries and cultures. So this is really something we have to work on: stepping back from all those words we attach to AI that actually do not make sense at all. They do not resonate in our minds the way they resonate in the minds of people in other countries. · Abhivardhan - Moderator Thank you so much sir. There surely needs to be an equal diversification of ideas around the globe.
Now, I would like to ask Mr. Bogdan, who also attended the first panel on AI policy considerations, and that panel was amazing too. I have a very interesting question in mind, which involves myth-busting in AI ethics: we have been talking about algorithmic decision-making, philosophy, and understanding critical realities that differ from country to country. The question I have for you, sir, is this: in the interest of myth-busting, how probable is it that algorithmic decision-making and autonomy will match human intelligence? Does AI really need to be super-intelligent, or just intelligent enough to outsmart us, in order to replace us? · Bogdan Grigorescu - Expert Speaker The short answer is no, it does not; it just needs to be good enough to be better than us. When I say better, I do not necessarily mean more efficient; it is just that we do not evolve and keep pace in terms of morality, in terms of how we do things, in terms of how we treat each other and so on. Now, quite a number of researchers and scientists believe that Artificial General Intelligence will never be achieved; actually, nobody knows. But AI does not have to be truly autonomous to create disasters. The onus is on us, the people, to ensure technology is neutral, if anything can be neutral it is technology, and at the end of the day it boils down



to the question of what kind of society we want, and how we use concepts such as Artificial Intelligence and the technologies related to it, like machine learning, computer vision and robotics, to make the planet and society better. That is really where the onus is on us, the people, and not on a concept. I think Mr. Goffi explained it very well when he said these terms do not resonate in the same way across the world, and we should take that into consideration pretty much equally, not just looking at things from one viewpoint or another, because that is actually dangerous with this kind of concept. Mrs. Lee talked about human rights; that is real, we already have human rights issues. I talked at the last panel about the case in the United States where somebody could not challenge a judicial decision that was made based on the output of a machine learning system, because there are no laws around it, and because the judge deemed that intellectual property rights and commercial secrets take precedence over the right of the accused to challenge a judicial decision. I am not sure exactly what kind of case it was, but it involved a criminal record; it cannot get more serious than that, in my opinion. But again, it is about us, the people, not about AI or the technology. When it comes to AI taking over the world and enslaving people: people enslave each other, whether or not they use a concept to do it. Technology is not going to enslave people; it is what we make of it that enslaves people. That is really my take when it comes to superintelligence and so on. As I said, nobody can agree whether it is even really possible; there are bright minds out there who argue that it is never going to be possible. I am a little bit wary of that; I prefer to be vigilant, as Mr. Goffi said, and always ask what happens if it does, because if it happens, it is not going to arrive all at once.
It is going to be a relay race, one step at a time, that seamlessly gets us there, and before anyone realizes it, it is there and we are in deep trouble. Then we might start asking ourselves: how is that possible? That is all I had to say: vigilance, and let us ask ourselves what kind of world we want to live in. That is where the onus is on us, not on the tech. · Akash Manwani - Co-moderator Thank you so much sir. It is in fact true that nobody knows what the future will bring, so it is better to be vigilant and not get complacent. We need to take care of ethics and of the people who actually make these artificial intelligence tools, computers and algorithms, as Mr. Goffi said. Moving further, and bringing the discussion back to geopolitics and international law, I would like to ask Mr. Eugenio Garcia: we have seen applications like TikTok and Instagram having a tremendous impact on the diplomatic narrative. You being a diplomat, I want to ask how algorithms are impacting the diplomatic narrative and influencing the world view, and whether social media algorithms could actually make the whole population of a nation believe in something. · Eugenio Vargas Garcia - Expert Speaker Well, thank you for your question. Maybe not the whole population, but you can certainly use social media algorithms to make millions of people believe in something, and this ability to manipulate public opinion for political purposes is not really new. Think of Nazi Germany in the 1930s and how massive propaganda convinced millions of Germans that Nazism was perfectly fine and should be supported. The problem now is the power of AI as a manipulation multiplier: a political regime, for example, can use data collection and AI tools to identify you, know where you are, track your movements, control the information you receive and so on. This is not only true for authoritarian states; even in democracies, as you know, the situation is challenging. We face a democratization of disinformation, and the very notion of surveillance, where personal data can be used to sell products, generate profits, and make you more willing to go in a given direction depending on what is at stake. Mr. Emmanuel talked earlier about bias, and there is no great debate about how social media, internet bots, fake news and the apps on your smartphone can be utilized to reinforce beliefs and strengthen in-group thinking, which can of course increase polarization in politics. We can call it bubble fights: you live in a bubble of information, and when someone poses a different opinion, nobody listens to each other. But having said that, let me go back to the first point of your question, on the impact of algorithms on diplomacy. Diplomacy happens in a political space where foreign policy is made, so having public opinion on your side may be a strategic resource, in particular if you are trying to sell the public a specific narrative, to justify a policy, or to push your ideas and make people support them.
So it is no wonder governments everywhere are now much more active on social media, on Twitter and other platforms; it is a battle for hearts and minds, to win ground and shape people's thinking. All these AI tools that can be used to steer public opinion may end up having an impact on diplomacy as well, and in the digital age this transformation has been under way for some time. Two points here to conclude. First, there is a tension between traditional, old-style diplomats and the changes brought about by technological innovation, with all this real-time information spreading around the globe. Some people say that the United Nations, for example, needs to adapt to the new digital environment, and that is true: if the United Nations is too slow or too bureaucratic, it will not be able to deliver in the coming century. So I agree that we need a more inclusive multilateralism that embraces more stakeholders in all these debates. Then came COVID-19. The good news is that the pandemic accelerated the adoption of several applications, virtual Zoom meetings, working from home, more connectivity, and diplomats had to take crash courses to catch up and use these new technologies more efficiently, so there are more opportunities to connect online. But the bad news, and there are always two sides of the coin, is that these virtual meetings do not allow real face-to-face interaction, so some sensitive negotiations became more difficult, and the informal dialogue outside these virtual meetings is not the same anymore. So diplomacy lost something in the process, and I mean here diplomacy as a way to reach agreements when people disagree: it is easy when everybody is on the same page, but when countries have very different views on the topic being negotiated, that is the



challenge. So, to conclude, the problem with AI and diplomacy is that foreign services are slow to adopt these new technologies, especially in their daily work; in general, this is just my perception as a practitioner. There is a long way to go until AI tools are really incorporated by the diplomatic community. You can say that there is great potential, for instance in human-machine hybrid decision-making, but this is far from the reality on the ground for most foreign ministries. I will stop here, thank you. · Abhivardhan - Moderator Thank you so much; I think that was a great insight on the same issue. Now I have a question for Nicol Turner Lee regarding algorithmic biases and how they have impact at the national level. It is understandable that when it comes to multilateralism and AI, diplomacy is still not there, because grounds for consensus are not easy to find, and I think they might not be for some time. So my simple question is: how do global algorithms really impact global challenges, and should nations come together and form a unified regulatory infrastructure? There are some initiatives happening, obviously: some bodies, some working groups, some projects, certain collaborations with organizations and NGOs; something is happening. In that case, how can a unified regulatory infrastructure be made? And adding to this, considering the current administration in the United States and whoever might come in the future, Trump or Biden, what role could the United States play in that case?
· Nicol Turner Lee - Expert Speaker Thank you, yes, I think that is a very interesting question, and as you all are watching, in the United States we are leading up to a national election that may actually have some impact on the future of AI policy. I think there are a couple of things at hand here. Even before we speak about the United States and our election, there has been a constant struggle in this race to AI. I do a lot of work at Brookings on Chinese technology and the extent to which the Chinese are really trying to become the forerunner in AI research, development and deployment. The Chinese government has been very clear that it wants to be the first to deploy, and in partnership with countries across the world, particularly on the continent of Africa, we have seen it invest in the types of resources necessary to expedite the development of these products, and we have seen those products deployed. It is something we are seeing on the infrastructure side and equally on the AI side. The Chinese government has been very clear about the economic inputs and outputs resulting from this infrastructure. But at the same time, we are seeing, across the world, very disparate attempts to come up with standards and working groups, and for those of us who have been around for a while, that does not necessarily suggest we are going to have operationalized principles and action statements that guide the design and development of AI. I think it goes back to what Emmanuel said: depending on who you are and what country you are in, you have a different perspective on what AI means. When I travel around the world and talk about civil rights in the American context, that is


very different from human and civil rights in Italy, or in other places like Chile, where people are starting to grapple with the diffusion of technologies and the impacts they may have. I am writing a book about the digital divide and the extent to which we are not even accounting for people in the design and development of these systems, because they are not connected; so we have this challenge of boxing in, where we develop systems that may overlay the experience of people without necessarily knowing the impact on human subjects. Globally, when I think about where we have some blind spots, one is the disparate investments in AI R&D being made across the world, and the fact that those investments come in a variety of ways: some around lab development, modelling and academic research, others around the ways in which policy structures and national security structures will rely upon AI to lead the national race. I was on a panel with Wired magazine where an "AI cold war" was mentioned, which really scared me, because again I think it has a lot to do with this disparate investment, as well as the positioning of AI within different countries as to its worth. As was mentioned earlier, AI is a general-purpose technology, and it can be used for good or for bad. With that being said, we are going into a national election here, and many of you may have woken up, as I did, to the news that the coronavirus has now reached our administration, so we wish nothing but blessings on our First Lady and our President to get through this, because it has a huge effect on diplomacy here in the United States, and I think it sends a message around the world about the severity of this pandemic. With that being said, who actually takes the lead next year will be very interesting.
At Brookings I have written about this: what would an AI policy look like under Biden and Harris? It may look very different. One thing we do know is that it would be a different approach to AI diplomacy than we see today. The United States has actually done a very good job, from the administration, in terms of coming up with standards around AI and around quantum computing, but the extent to which we have been fully cooperative (I know we have signed on to some of the pacts with the OECD and the United Nations) may actually switch or flip a little bit with a Biden-Harris administration, though it all depends on who they put into office. I think at the end of the day we need global cooperation, and I will end here, because AI systems have this beautiful potential to solve human problems, with the help of humans, of course, in the design of those models. But in my work, and I continue to stress this with my global partners, AI can be weaponized and can force national security vulnerabilities, where systems, including financial systems and global climate systems, cannot just be the weapons of mass destruction we have seen in the past; they can be newly remodeled weapons of mass destruction that infiltrate data systems we all think have some level of data security. AI systems and vulnerabilities globally go hand in hand with the disparate development of privacy structures that determine the portability and the types of data that are permissible across the world, and we don't even have that,
and so I think "where do we go from here?" is the question. I have said this to our friends at Chinese think tanks as well, who want to sit at the table so that we develop ethical AI, legal AI, and AI that does not implode human democracy, in the sense that we cannot pull it back. That's something we're seeing in the United States with simple applications of AI that find themselves in housing applications, in credit applications, in employment and criminal justice, where the very technology that all of us dreamt would one day become an equalizer has the potential to actually implode the facets of any type of human diplomacy and to drive us further apart. So again, as many of you have said, there are great uses, but I err on the side of the club that says that without it being carefully tracked, we're opening up cans of worms on questions that had basically been settled, and we're also permitting it to do things that I think many of us in this audience do not want to see. I know our friends in India are grappling with the same types of challenges; in fact, we've just brought on to the Centre for Technology Innovation at Brookings one of our friends who has worked in India's regulatory space to become one of our scholars, so I am looking forward to that type of collaboration as well.
· Akash Manwani – Co-moderator
Thank you so much, ma'am. Well, I am sure these are interesting times for the United States, and irrespective of political views, obviously everyone will wish well for Mr. President and the First Lady. This only tells us that COVID-19 doesn't see stature, position, caste, creed or colour. So everyone, please wear your masks when you go out. Anyway, moving forward, a very quick question to Mr. Goffi: why is Artificial Intelligence ethics important, and how do we effectively enforce it?
· Emmanuel Goffi – Expert Speaker
Two great questions. I would say that AI ethics is important because you don't have any kind of legal framework that is currently working, so it's better than nothing. I would obviously prefer to have positive law, with positive documents and international laws that would be really constraining, but that's not the case. We know that this technology is evolving at a fast pace, it's really tough to frame this innovation, and it's really difficult to find compromises on this. Obviously, you know that behind this technology there are strong interests, diplomatic and financial interests, that are at stake, so it's really hard to find the right kind of legal framework. Ethics is here just to compensate for the fact that the law is definitely not efficient enough so far. On the other hand, I would say that we should not hide behind what I call cosmetics, which is just making things look better than they really are. Obviously, you can use this kind of cosmetics, using these beautiful words, those beautiful values, putting them forward and saying: you can trust us because we are democratic countries, because we are inclusive, because we value diversity and all those things. The point here is that those words must be seen with a lot of caution, obviously. So ethics is important, but I would say also that real ethics is important, and
this is where, as a philosopher, I have a big issue. Lots of people are entering the debate because ethics is kind of a buzzword right now. When I say "ethics" you all understand what I mean, but you don't know what it is exactly, because ethics is something really complex. If I went deep into this complexity, you would be lost; but anyone can just go on a TV show and say, "Look, we are doing ethical things," and that will resonate in the minds of people in a way that makes them say, "I can trust him." But it is mainly cosmetics when it's not done by real philosophers, people who have this background in philosophy. When you hear a political statement, when you hear a private company's statement, most of the time it's not really ethics, it's cosmetics, once again because it cannot be operationalized. When you ask, "how can you apply that?", you cannot, for many reasons. First of all, those kinds of rules most of the time do not apply in some countries. Look at the United Nations: I have taken part in this kind of debate regarding autonomous weapons, a discussion with people from all around the table, from different backgrounds and different cultures: engineers, philosophers, historians, political leaders and so on. We discussed this big issue from the ethical and legal standpoints, and we were all nodding at each other, saying, oh yeah, that's really interesting,
I would agree with you, and so on. And then, when we actually leave the room, we feel like: okay, I will do whatever I want. Sometimes you feel that the rules that have been set out are not really matching your expectations, your culture, your ideas, your perspective, your stances. So the first problem is that, with ethics, there is no way to be sure the rules will be applied. The second thing with ethics is that, conversely to law, there is no constraint behind it; there is no sanction. There can be symbolic sanctions, in the sense that your image can be tarnished because you did not really respect your promises in terms of ethical statements, but that's about it: you cannot be sued for not being ethical. So ethics is nice, it's good, it's interesting, it's important, because there is nothing else available, but we must not fall into the trap of cosmetics, hiding the reality and making it look good. No, life is not that good, and obviously, when you're developing this kind of tool at the international level, there are stakes that are much higher than just ethical considerations. I would say that if you look at private companies, including the big players in China, or even at countries like Canada, the U.S., France or Germany, you will see that on the priority list, ethics is really low. It's there because they know it's important: if they want to sell the product, they have to reassure people and make sure that people will be comfortable buying or using their product. But it doesn't mean that behind that there is a strong commitment or a strong conviction about the importance of ethics. It's just there to sell the product, and that's really something I want to warn everyone about. Be careful about people who talk about this without having any idea of what exactly ethics is.
Ethics is highly complex.
· Abhivardhan - Moderator
Thank you so much, that was a really interesting take. So I have a question for Mr. Bogdan; let's take a different angle on it. We had an amazing discussion on the AI sector from you today as well as yesterday, but now, about 5G's role in it: what role would the Splinternet and 5G play in AI development?
· Bogdan Grigorescu - Expert Speaker
You keep asking me these questions, and I only have a few minutes to answer when it really takes about two to four hours to answer thoroughly, so I'll try to summarize. To put it shortly, 5G is a huge enabler for any type of technology. Why is that? Because it's about the fuel, and the fuel for any network, for anything digital and anything AI, is data, and 5G is a huge pipeline for huge volumes of data in a very short time. Let's forget about the theoretical figures; in real life we're looking at about 20 times higher, not just speed but volume of data per second, compared to 4G. It's also higher than any Wi-Fi out there, by about a factor of five. Then there is latency, which means not how fast the data travels but how reliably you receive it: do you have any lag? With 5G, in the real world, latency is two to three times better than 4G, and 4G is already pretty low. So you have 20 times more data per second and two to three times less lag. That bodes well for the Internet of Things, where you typically need really low latency and very low lag but also very high volumes of data, and this is not just autonomous devices like self-driving cars, but also assistive systems, for example in care homes, in hospitals, at home. Given that 5G is such a huge enabler, it also helps with frequencies: being a radio spectrum technology, there is a limited number of frequencies available for communication with 5G.
The split between commercial and government entities becomes much more possible and easy to do. You don't want your security services to talk on the same channels a retailer does; they really have to be split, and with 5G it is much easier, because the way it works is fundamentally different. Machine learning models can be used to find the frequencies available and to decide what to move to which frequency, dynamically, and this is done seamlessly: you, the user, are not going to feel anything, but it does mitigate the capacity issue, the bottleneck that we all see right now with Wi-Fi, for example. You get congestion on Wi-Fi and all of a sudden it becomes slow; it may even time out and lose connection for a little while, and you'd say it doesn't work, but in fact it's congestion of the radio frequency spectrum that Wi-Fi is using. 5G actually resolves that, at least for a while; we know that the more capacity we provide, the more it's used, but it will resolve this capacity issue for quite a while. Then, when it comes to AI working with 5G, machine learning typically, but generally speaking AI, can be used to address a number of pressing issues in mobile services, for example resolving outages, planning network capacity and general network planning. When it comes to expansion, we saw that, seven or eight years down the line from the launch of 4G,
densification and capacity work was being carried out quite significantly, and the same will happen with 5G; but using machine learning models, that can be done much more efficiently and better. The same goes for the performance of the network, the same goes for the internet, for addressing customer service problems, and definitely for security. With 5G and the rest of IoT, the number of connected devices grows exponentially, it's actually mind-boggling, so obviously the security risks are rising a lot, and machine learning can be used to mitigate these security risks quite well. Machine learning, and other forms of AI, can be used for near real-time automatic decision-making; again, technical decision-making, I am talking about here, not anything relating to society as such, purely technical decision-making, like prediction: what kind of resources do I have to bump up, and where, and when? Similar to this are supply chain problems: what do I have to supply, what kind of products, where, and when? So it can be used there as well. There are a few challenges here. Data quality: if data is of poor quality, everything falls flat and is not going to work. Also, a lot of data sources will be available, and hence the data becomes a bit difficult to manage. Storage of data: it tends to be stored in too many places, and that again gives birth to silos, and silos, we know, are the death of any automation and therefore the death of AI systems too. They're not going to work well at all if we don't manage that carefully. So, in one line: 5G is a huge enabler for any AI system.
· Akash Manwani – Co-moderator
Thank you so much, sir. Well, yes, 5G is definitely going to be very interesting: the latency, the speed, massive IoT. I won't be surprised if in the next five years I am sitting in an augmented reality conference room. Let's hope for the best, anyway.
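The rough ratios mentioned in the discussion (about 20 times the per-second data volume of 4G, and two to three times lower latency) can be turned into a quick back-of-envelope calculation. The baseline 4G throughput and latency values below are illustrative assumptions, not measurements; only the multipliers come from the talk:

```python
# Back-of-envelope comparison of 4G vs 5G transfer times.
# Baseline 4G figures are assumed for illustration; the multipliers
# (20x throughput, ~2.5x lower latency) follow the rough real-world
# ratios mentioned in the panel.

def transfer_time_s(payload_mb: float, throughput_mbps: float, latency_ms: float) -> float:
    """One round-trip of latency plus serialization time for the payload."""
    return latency_ms / 1000 + (payload_mb * 8) / throughput_mbps

LTE_4G = {"throughput_mbps": 30.0, "latency_ms": 50.0}            # assumed 4G baseline
NR_5G = {"throughput_mbps": 30.0 * 20, "latency_ms": 50.0 / 2.5}  # ratios from the talk

payload = 100.0  # MB, e.g. a sensor batch or a video chunk
t4 = transfer_time_s(payload, **LTE_4G)
t5 = transfer_time_s(payload, **NR_5G)
print(f"4G: {t4:.2f} s, 5G: {t5:.2f} s, speedup: {t4 / t5:.1f}x")
```

For bulk transfers the speedup tracks the throughput ratio; for tiny payloads (the IoT case the speaker emphasizes) the latency term dominates instead, which is why low lag matters as much as raw speed.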
I am aware that we are way past our time, but we have reserved this last, conclusive question for Mr. Roger Spitz. So, sir, what is the new era in political technology, information warfare and algorithmic diplomacy, and what shifts and impacts relating to AI could we expect in geopolitics?
· Roger Spitz – Expert Speaker
It's a huge topic, but I'll just give a few quick thoughts. The first area which I find quite interesting in relation to the question is how much information is today driven by commercial entities. When you look at some of the technologies we're talking about (AI, machine learning, autonomy, cyber, quantum), most of these technologies are driven, at least in the West, by commercial companies as opposed to institutional or governmental entities. That has a very important aspect, and data obviously feeds those technologies. The second thing which I find quite interesting and noteworthy is that unlimited information is transforming society. I think all of us on the panel have discussed this aspect, but this is huge: technology is really blurring the lines between what information comes from consumers or producers, amateurs or professionals, laymen or experts, government institutions or companies, and that unlimited information
has an impact which is not fully understood yet. We're only starting to understand its implications, and in particular how it can be used as "political technology" by certain countries who may wish to do so. Another element I would add is that some of the trade restrictions around sensitive technologies which we've put in place have accelerated the position of China in AI, for instance, and China obviously has a very strong strategy where the lines may not be as clear between the commercial, institutional, governmental and military, to say the least. So when you look at, for instance, breakthroughs in AI and machine learning, a lot of it is coming through chips, through machine learning and through the hardware side, image and language recognition, etc. Where there are fewer safeguards, as in China, when you look at technologies like AI, biometrics and surveillance across every single industry, and at the scale of that, you're reaching certain breakthroughs and a scale which will have geopolitical ramifications in terms of the positions of different countries. I would add that information is security, and cyber security: what do the quantum race and the quantum breakthroughs mean for cyber security? Countries like Israel, the US, France, and indeed most countries in the world, are asking that very same question about cyber security and those breakthroughs. So when you look at recent news this week, it was interesting to see that the US and the UK are joining forces in an AI partnership, basically to combine their research; I think that is one of the reasons behind this US-UK alliance. So that's basically the geopolitical level, and a lot of it is driven by AI, that is, by information and data.
The final point I would make is that I think there's a risk in what might seem like randomness about polarized society, polarized views on COVID and climate change and social topics, vaccines, voting; we talked about TikTok and social media. I guess the question is: in whose interest is it to have some of these countries destabilized and weakened by this polarization, and what does that mean? I personally worry, and I know Emmanuel, Eugenio and Nicol, all of us, have talked about ethics and these considerations; I wonder about some of the values we think we have and are trying to preserve in the West around ethics and privacy in particular. You know, take the UK, which just put through legislation for the retention of fingerprints and DNA profiles in the interest of national security, and we're all giving ourselves comfort; I think Emmanuel used this wonderful term, "cosmetics," around ethics. In reality, we're not necessarily leveraging the benefits of what AI and technology can do with this information, and populations and society are polarized. So when you look at the whole ad hoc testing, uncoordinated contact tracing and random quarantine processes, which aren't necessarily effective, one of the reasons might not be a limitation of technology or AI; it might actually be something intentional, in terms of destabilization, political technology and information, making it more difficult for governments and society to be aligned and to move towards approaches that might be more effective. That last point is a little bit sensitive, and I don't
know whether I did a good job in trying to explain it, but I personally think that it's directly driven by information and potentially by political technology.
· Abhivardhan - Moderator
Thank you so much, sir; I think that was an amazing concluding remark. We are at the end of the panel discussion, so as the key moderator for this panel I would like to conclude in very simple words. Number one, we have discussed the real plurilateral issues with regard to AI ethics and diplomacy; I think that's a very nice threshold. Number two, we have come to a real understanding of how multilateralism obviously cannot outpace the growth of MNCs, the growth of companies, and the other issues related to the consequences of how AI is utilized; and it's not just companies, I believe, but also governments and other entities, small and big, in whatsoever manner. The third aspect, which is very interesting, is that multilateralism with regard to issues related to AI can be fulfilled; however, the fulfilment is not easy, and it requires better plurilateral efforts. Obviously, that's the simple grammar of it: you come up with some basic principles, then you come up with solutions, and when consensus is formed it takes time. That's how the information age works. The last thing, the concluding remark from my side as moderator, is that if that is the case, then I think the best solution might be (I am not saying it's the final solution, I am just sharing an opinion) that companies and governments collaborating for better principles and better solutions, as Mr. Goffi and Nicol rightly pointed out, and also our other panelists, can come up with transitional principles, so that you are transitioning with, rather than trying to outpace, the development of AI itself, because that's also an issue with the GDPR and other regulations which are predominantly very popular and also very important.
Thank you so much to all the panelists for this session; thank you to the audience, who have been patient; and thank you to my co-moderator, Mr. Akash. It has really been a great moment, and I really thank all the panelists for their views.

Research Panel 3 - Algorithmic Trading & Monetization: Policy Constraints for Disruptive Technologies

Panel discussion with Akshata Namjoshi, Dr. Raul Villamarin Rodrigues, Ratul Roshan, and Pooja Terwad. Ms Arletta Gorecka was the Moderator. [You can watch the Panel broadcast at]
Algorithmic trading is concerned with the use of algorithms incorporating certain rules and processes in order to employ strategies for trading. It gained momentum during the 1970s. This panel discussion therefore highlights the monetization related to algorithmic trading, alongside policy constraints for disruptive technologies.
What are the red flags associated with the open-sourcing of algorithms, and how would it affect AI stacks?
Algorithms and the related IP are put in the public domain so that all participants within the AI ecosystem are able to benefit from them. From a plain reading of the meaning of open-sourcing an algorithm, that is what we are looking at, and that is a red flag on all counts, because the organic incentive to innovate and create a competitive edge for yourself, as well as the question of user privacy, are all questions that come up when you are asked to open-source your algorithms. Multiple organizations and ministries have been trying to spearhead the conversation on AI, and the DoT is the latest entrant here. The tussle is usually between the Ministry of Electronics and Information Technology and the central government think tank, but neither of these two entities has made either of these two recommendations mandatory in any of their earlier papers; they made them voluntary. So why is the DoT coming up with such a stringent form of regulation, which no other ministry or organization has spoken about earlier? There are further concerns as well. They speak of a central AI regulator which is a horizontal regulator, meaning that this particular regulator will be responsible for overseeing AI deployment across all sectors. Multiple problems persist here again. AI essentially is a sector-driven initiative; the requirements, the motivations and the impediments for the deployment of AI solutions will vary across each sector.
Therefore, to have one regulator overseeing AI deployment across all of these sectors is a one-size-fits-all solution, which does not work at all. In India, AI is a fairly nascent sector and there are not many regulations, so to come up with a regulator which will arguably exercise hard control and oversight will be a deterrent to innovation in some form or manner as well. There is also the recommendation to create a centrally controlled database of AI data, administered by a central controller. There is not much clarity; there is quite a lot of ambiguity there. But from a plain reading, anyone who wants to participate in the AI stack will have to submit their data to the stack, which dovetails into the earlier conversation around competitiveness, IPR misuse and user privacy. It also means that you lose some amount of control over how this data is used by other parties, because the controller is the central government controller; once you let the data go from your tunnel and it is in their pipes, they determine how it will be used. As for how this will interact with personal data protection and non-personal data protection, we have frameworks for regulating personal data and non-personal data coming up in India.
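The concern about surrendering raw data to a central controller is one reason techniques such as federated learning, which the discussion turns to shortly, are attractive: participants share only model parameters, never raw records. A minimal sketch of federated averaging, using a deliberately toy one-parameter model and hypothetical client data, might look like this:

```python
# Minimal sketch of federated averaging (FedAvg): each participant trains
# on its own data and shares only model parameters, never raw records.
# The "model" is a single weight w for y ~ w * x, purely for illustration.

def local_update(w: float, data: list[tuple[float, float]], lr: float = 0.01, epochs: int = 20) -> float:
    """One client's local gradient-descent pass on its private (x, y) pairs."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_round(w_global: float, clients: list[list[tuple[float, float]]]) -> float:
    """Average the locally trained weights; raw data never leaves a client."""
    local_weights = [local_update(w_global, data) for data in clients]
    return sum(local_weights) / len(local_weights)

# Three hypothetical clients whose private data roughly follow y = 3x.
clients = [[(1.0, 3.0), (2.0, 6.0)], [(1.0, 3.1), (3.0, 9.0)], [(2.0, 5.9)]]
w = 0.0
for _ in range(50):
    w = federated_round(w, clients)
print(f"global weight after 50 rounds: {w:.2f}")  # converges near 3.0
```

Real deployments add secure aggregation, weighting by client dataset size, and differential-privacy noise, but the basic design choice is the one relevant here: the central party sees only aggregated parameters, not anyone's underlying data.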


The question of having an AI stack depends on whether you are trying to set up prescriptive, hard standards as to how AI should develop in a country. From conversations with industry bodies and multinational firms, it is quite clear that almost any company would push back against prescriptive regulations, as they want to be able to provide services on their own terms, or else users will not rely on them. Monolithic standards may create a lose-lose situation for the people involved, and it is a problem AI stacks may have to envision going forward. There is also the problem of long-term data storage, as there may be some points of data that may never have to be accessed and can be stored as cold data; but when we look at this through the lens of the storage limitation under the Personal Data Protection (PDP) framework, or other such stipulations from sector regulators, it begs the question of how to enforce those stipulations in other instruments and how to ensure data minimization. We could look at federated learning; there is no need to go to regressive alternatives when there are better alternatives in the market.
Will the open-sourcing of algorithms enable the creator of algorithms to benefit from the exploitation of the algorithm, while also ensuring that it is more easily accessible as compared to licensing of the algorithm?
Depending on the products you are coming up with, you may want to voluntarily open-source them and gain some commercial benefit through it, or you may want to go for other solutions that would not require you to open-source the algorithm. The problem lies with the government telling people what they are supposed to do with their Intellectual Property Rights (IPR); a person is well within their rights if they want to bring some technology into the ecosystem on their own terms. Moreover, participation in an AI stack is, in effect, mandatory rather than voluntary.
For example, if an AI stack is deployed in India, then at some point you will have to participate in it, and that breeds a lot of problems, because mandatory prescriptive standards are among the biggest deterrents that multinational companies face in any given country, on account of a myriad of reasons.
Would such an approach increase the regulatory burden and stunt innovation, given that the open-source mandate removes any sort of incentive?
The regulatory burden will definitely be increased, because the Indian government, or any government in the world at this point, does not have the technical and financial capacity required to execute a complex program consisting of so many moving parts, and it becomes difficult to determine where to draw the line. The apprehension regarding innovation is correct, because if someone wants to open-source their algorithm to make money out of it, they are free to do so; but with the government issuing mandates, it takes away the incentive to innovate, and it also flies in the face of IPR protections under Indian law and some of our international obligations. This is quite similar to the conversation surrounding the personal and non-personal data protection frameworks. It is part of a larger umbrella conversation about the government trying to make sure that small domestic
start-ups are able to benefit from the IPR and the technology that larger companies are bringing into India. So, the problem is a common strand running across multiple regulatory documents released by the government in the recent past.
In which Indian sector should we expect substantial movement on the artificial intelligence front going forward?
We undertook an internal research project where we tried to assess how many sectors, ministries and organizations are thinking about deploying AI in any shape, way or form within their sectors, and we were able to come up with 34 names within just a span of a few hours. But there is definitely one sector the government is trying to promote, and that is agriculture. Agri-tech is a sector where the use of AI has been referenced time and again. The reason is that in 2017, the government came out with a 14-volume report on how to double farmers' incomes by 2022, and in multiple volumes there were references to emerging technologies. As a snowball effect of that, all other ministries, as well as government think tanks, have concentrated on agri-tech as the field where they want to see the first and the most development on AI. So, this is one of the only sectors where the government has given independent, distinct thought; to this effect, they have even undertaken a lot of infrastructural projects and invested substantially in them.
How are algorithms affecting the FinTech industry, and what are some of the governance issues around them?
The experience in the Middle East has been quite different from the Indian AI scenario, because there is very little policy discussion around how AI and Machine Learning (ML) are to be treated; they have been treated on par with any other emerging, innovative technology.
So, there is a Ministry of AI and the Dubai Future Foundation, under which anything that does not fit into the existing regulatory landscape can find a place to be regulated. It is not exactly a sandbox but more of a closed environment, such that the technology itself can thrive and offer solutions to the UAE and the region. With that background, there are different sectors within FinTech where there has been a surge in the use of AI-based products and algorithms: majorly compliance, fraud detection, anti-money laundering, financial forensics, investment advice, loan lending, robo-advisory, digital investment and wealth management. When it comes to the actual problems, or the approach itself, the problems vary depending on which of these sectors a solution falls under. To draw a broad distinction, from a transactional perspective and a consumer protection perspective, what really changes the dynamics of the conversation around regulation is what a regulator should do, how one should oversee such transactions, what the level of reliance on an AI-based solution is, and what the level of interaction of a consumer with the AI solution is; these are what have changed the dynamics completely when it comes to FinTech solutions.


For instance, when it comes to pure AI compliance solutions, the approach has been pretty straightforward and very basic; the only problem is that it has limited itself to ethics and codes of conduct for AI, given the lack of policy conversation around it. When it comes to money laundering and fraud, given the dynamics of the region, money laundering is a major concern, one which is heavily cracked down upon by the regulators. Especially after the FATF guidance, in the last two to three years most regulators in the region have taken major cognizance of this; they have been very positive and have been relying on solutions which can help with forensics in financial technology. When it comes to loans and lending, the revision or change in thought process is seen not in the sector of AI but in the actual treatment of FinTech solutions themselves. A lot of clients have come up with FinTech solutions which are a mix of many aspects: for example, they use algorithms, open APIs, blockchain, or perhaps a whole myriad of the emerging technologies in that space. In the Middle East, AI has taken a bit of a back seat in regulatory oversight, because the risk that regulators have seen lies more in the outcome of such FinTech solutions: the interaction of the consumer with the FinTech solution itself has been the major concern of the regulator. Hence, the area where AI gets regulated the most, where the governance of AI comes in the most, is the degree of reliance on the AI solution vis-a-vis the output and the outcome that you are giving to your client if you are a FinTech solution. That has been the major area of conversation for regulators in this region. Regulators in the Middle East have not basketed technologies so much as they have basketed the outputs of those technologies.
So, they have tried to categorize the outcome of the technology, whether it is an open banking solution, a micro-payment solution, a robo-advisory or an investment-advisory solution, and then woven regulations around those outcomes in a reverse fashion. This is not exactly the approach taken by other regulators in the world, but the Middle East and UAE experience shows a major surge in the use of these technologies. The one major concern seen specifically in the AI sector is robo-advisory, fundamentally because it is at loggerheads with the concept of investment advice: most Fintech solutions use AI and ML on the backend, sitting on massive data sets they rely on to produce something that is effectively a recommendation, yet one with the power to influence a consumer's decision-making. Investment advice and robo-advisory are heavily regulated in this part of the world, as elsewhere, so the blend of the technology with conventional investment practices is seeing a major shift, at least in this region, because there are major concerns, and that is where AI governance plays its most important role. There are major concerns about analyzing and assessing what it is the consumer is really relying on. Given the times we live in, everyone is going after retail clients: fewer products and financial services are offered to accredited investors, and almost everything is aimed at retail investors. In that situation, it becomes very difficult for a regulator to rest liability on the party that commissioned the AI, the corporates using it, or the data processor. The dynamics become difficult because you cannot always assure reparation and damages to the aggrieved party. As for trading and reliance on AI-based solutions, there has been a major shift over the last few quarters.

Research Panel 4 - Artificial Intelligence and its Synchronous Implications to Ecological Data Solutions

Panel discussion with Pinaki Laskar, Rodney Ryder & Raghav Mendiratta. Mr Abhivardhan was the Moderator. [You can watch the Panel broadcast at]

Introduction

Artificial Intelligence is often looked at as a silver bullet that can solve all the world's problems, and the perception around ecological data solutions is no different. However, it is important to understand that technology is often value-neutral: how we develop and use these technologies determines whether they take a positive direction or one that harms the environment and the climate. One example: in 2018, a big oil company invested nearly a quarter of a billion dollars in AI. This money went into research to better predict where oil reserves are present and to extract more oil from existing reservoirs; such activities are not the best outcome for the environment. Google and Amazon are now partnering with big oil companies so that their data-management systems and AI technologies can be used to extract more oil from reservoirs that already exist. So we should not look at AI as a 'be-all and end-all' for furthering ecological data solutions. That said, AI has massive potential to ensure not only that we mitigate current greenhouse-gas emissions and slow climate change, but also that we adopt new technologies that let us counter climate change to the extent possible. Mitigating greenhouse gases could mean investing in streamlining existing power generation and other industries. In streamlining power generation, AI can be used to optimize existing power grids to reduce transmission loss, making generation more efficient and therefore better for the environment. In terms of investing in new technologies that help us in the long run, anywhere between 15 and 25 years out, AI has been used to develop cleaner technologies such as batteries with greater capacity to store and transmit power. The potential is high, but the challenges are equal, so we need a balanced approach rather than a be-all-end-all one.

How does this issue translate to the Indian sector?

Artificial intelligence is not yet artificial intelligence: only the 'artificial' is present, and the 'intelligence' is yet to come. We are trying to give machines the power of intelligence, while also worrying about how humans will be protected from AI. First, we need to recognize that the conflict is not between humans and machines; this is not about equalling or exceeding human intelligence. In AI, we have to achieve a general-purpose technology (GPT). Intelligence, a form of mental ability, cannot be regulated by law; in humans or machines alike, it is about modelling the real world in real time, and about understanding how to apply knowledge and skills to those situations. Even in machines we are far behind the intelligence we imagine: both machines and humans are biased, and we need to get out of that. During the process of intelligence, we need to put the thinking of AI inside the module of AR so that it reaches the goal we need. AI thinking is four-dimensional. It is one-dimensional when you are specialized in one particular vertical, a narrow AI; 2D spans several different verticals.
3D is like a mental model in three dimensions, where we have principles of hacking; and 4D is thinking across the last 100 years and the next 100 years, based entirely on time. Based on these four dimensions, our machines will be able to operate in terms of intelligence. Currently we are in an algorithmic phase, machine learning based on data, and no intelligence has been added so far. Neural systems need more of an EPU (Emotional Processing Unit), which must address both the cognitive and the empathetic processes of machines; empathy is a major step towards coming close to human intelligence. When radical technologies address humanity, we must ask what happens when AI addresses the problems we are trying to solve. One example: headphones that could hack your mind and make the learning process quicker at any age; that is where artificial intelligence would bring this kind of intelligence into our machines. Another: a gene drive in mosquitoes, changing the gene responsible for their attraction to human smell so that there would be no malaria. These are the kinds of radical, disruptive solutions we imagine for AI. What we are doing now should instead be called Artificial Automation (AA), and that automation is completely biased by humans for their own purposes.

How can AI transcend into environmental issues?
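Before that question, the grid-optimisation idea raised in this panel's introduction can be made concrete with a toy dispatch problem. Everything here is hypothetical (the two-plant setup, the 8%/3% loss rates, the 60 MW cap); it is a vastly simplified stand-in for the continuous, large-scale loss-minimisation an AI system would perform over a real grid.

```python
import math

def generation_needed(split, demand=100.0, loss_a=0.08, loss_b=0.03, cap_b=60.0):
    """Total MW both plants must generate for `demand` MW to arrive.

    `split` is the fraction of demand delivered by plant A; to deliver
    d MW over a line losing fraction L, a plant must generate d / (1 - L).
    """
    delivered_a = split * demand
    delivered_b = (1.0 - split) * demand
    if delivered_b > cap_b:  # plant B cannot deliver more than its cap
        return math.inf
    return delivered_a / (1.0 - loss_a) + delivered_b / (1.0 - loss_b)

# Brute-force search over 1% steps -- a real optimiser would handle
# thousands of lines and constraints, continuously.
best_gen, best_split = min((generation_needed(s / 100), s / 100) for s in range(101))
print(f"plant A share: {best_split:.2f}, total generation: {best_gen:.2f} MW")
```

With these made-up numbers, the search routes as little as plant B's cap allows through the lossier plant A (a 0.40 share), generating about 105.33 MW to deliver 100 MW; shaving even fractions of a percent off such losses, at grid scale, is the efficiency gain the panel points to.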



In the legal domain, the main issue before us is how to assign legal responsibility. Is the artificial intelligence system a person, and can we attribute morality, causation and legality to that personhood? At a later stage, can we attribute ownership, and if an AI system performs actions unforeseeable to its creators, who should own the resulting intellectual property? As far as ecological systems and the deployment of artificial intelligence in data collection and analysis are concerned, looking at samples from around the world, whether photographs or observed changes, AI systems have functioned superbly in handling and analysing big data. But we also have compliance, and in the present system the compliances are the regulatory toolkit we have. Is it sufficient for now? What kind of regulations do we need in the future, and do we have even the beginnings of a proposed structure for AI regulation and law? We still see only animal-like bits of intelligence and cannot yet accord creatorship and authorship, because we have not yet seen independent creativity; once we see that independent creativity, it must be recognized first, with a right and a responsibility.

How do you see the same development in the global south? Is the current regulatory toolkit sufficient?

It is not sufficient. The questions are how concerns of morality, bias and transparency are addressed in a regulatory toolkit for these technologies, and how other countries are developing such toolkits; some countries in Europe are more advanced. When we look at how issues arising out of AI are dealt with, the GDPR, for example, does have provisions on automated profiling and does address how profiling is to be handled when done by AI.
Some jurisdictions are advanced; we do not have any such toolkit in India at present. We do see some provisions in the data protection bill, but there is still room for a robust AI regulatory framework in the future. And, on the point Mr Laskar made: when we look at AI for developing ecological solutions, we should be careful to see AI not as a replacement for human intelligence but as an enhancement of it. Take architecture: traditional architecture has long been used to make our buildings and infrastructure more energy-efficient, and AI can now build on an architect's design to make them more energy-efficient still. In 2018, for instance, Google put an AI system in charge of controlling air conditioning and other systems. We should not see AI through a lens where it replaces human intelligence; it enhances human intelligence.

On the Question of Categorization of AI Systems

The effect of automation comes with the effect of the machine-learning process: the machine is learning through data. If you are thinking of policy-making, then framing data laws is important, and the model or architecture of the AI needs to be explainable: how the model was trained, how the data was taken, and where the data came from all need to be explained. That is the frame of reference we currently need, because at present things are done in a completely biased way: any company using our inputs, such as our email data, in its system is trying to extract the most value for its business, with little regard for privacy and ethics. So the first thing, from a legal point of view, is the need to define data laws; this is a law we need to frame immediately. Intelligence cannot be bound by law; it is a form of mental ability. But because of the way a system was trained, you cannot point to one party and say who will be responsible for it: the developer, the business, the organization, or the users? How will you define who in this system is responsible? In data science, everybody involved in the system is responsible; it is not one deployment. We still think cybersecurity is the IT department's problem; it is not, and that is why it is not rectified. If you think AI is a technology department's concern, it is not: it is a whole system, and every person involved is liable. This is a four-dimensional product, more powerful than human intelligence, so data law should be developed accordingly and be explainable.

Policy Measures Regarding the Environment, or Regulating How It Works Out?

Our government has prepared the groundwork for the privacy apparatus at the centre of the Personal Data Protection Bill. How will that apparatus work? By apparatus is meant the data commissioner under the Act and how the law will be implemented. In Europe, over the last 25 years the European Data Protection Directive has been implemented quite differently, and subtly so, from country to country: Germany differs from the UK, say.
Second, a crude analogy: certain laws across our country have been implemented well only where right-minded officials with a sense of duty enforced them. Whether the data protection authorities will stand up to powerful fiduciaries remains to be seen, and that is at the core of it: the relative independence of these authorities, their ability to tap technical expertise, and their ability to crack the whip and enforce the law. Even the powerful provisions of the Information Technology Act have been underutilized.

Questionnaire

Are ecological strategies a matter of technology diplomacy or of sustainable development? Can we see developments on those grounds when the two are fused together?

Recent judgments underscore the importance not just of diplomacy and closer cooperation between nations to chalk out privacy or AI considerations, but also of unification across different jurisdictions. The solution is closer cooperation and multi-stakeholder consultation. In Pakistan, the government released its online harms rules, its version of intermediary guidelines governing social media platforms and other intermediaries, similar to Germany's. Tech companies like Google and Facebook got together, threatened to exit Pakistan, and the government ultimately bowed to them. Stakeholder consultation matters not just between private players and government in domestic jurisdictions but also across jurisdictions, because we cannot survive with a system where there is disparity across jurisdictions. Secondly, whether our data protection authority, once the PDP Bill becomes an Act, can stand up to powerful data fiduciaries, and to the government as well, matters because the Bill grants wide-ranging exemptions to the state; when the state uses AI, those exemptions put the privacy of individuals at risk. There is a strong need for data protection authorities to stand up to the government.

How do we see the trends in India vis-a-vis disruptive technology, and what can we learn from them generally?

Take two example projects: one that virtually infers a person's mental state from their social media data, 'scraping for a good thing', and another that identifies other persons in real time in a video. Here the concern is already in play. How do you protect against it? We need a law to protect these things; otherwise technology will change everything and will be used in bad ways.

What are the challenges in the Indian context for AI applications in crime forecasting?

The two major risks are reinforcing institutional and socio-economic bias in the system, and privacy concerns. A predictive policing model can reinforce certain socio-economic biases by identifying crime hotspots from historical data. And in the absence of robust data-protection mechanisms, such systems can be inherently violative of the constitutional right to privacy.

Conclusions

Including all of government would be a good decision, but it is tough for our country: once laws are created, the time for reform is too long.
The courts, historically, have been excellent at allocating responsibility and figuring out the consequences of harms. The courts will interpret; it is up to the legislature to step up and make laws, and where laws do not turn out as hoped, to quickly repeal them and make new ones. The executive sits in the middle, looking at how the law can be enforced, using tools beyond those of cybercrime investigation. Data protection needs to be developed in favour of the data subject. The future for us is bright and optimistic if we learn quickly from our mistakes, revising our laws and standard operating procedures rather than pinning blame. To conclude, there are two principles for developing this technology and two principles underlying any legal framework. Technology should be developed and enhanced with a socially beneficial motive, and at the development stage technologists should be cognizant that their technologies must not reinforce unfair biases that already exist. On the legal and regulatory side, the two principles are the need for an adequate accountability framework incorporated in the law, and the incorporation of privacy by design. These principles will be a good starting point for going ahead with developing and enhancing these technologies.
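The feedback-loop risk the panel raises, predictive policing reinforcing socio-economic bias through hotspot identification, can be illustrated with a deterministic toy model. All figures are hypothetical: two districts with identical true crime rates, but skewed historical records; the sketch tracks expected record counts rather than simulating individual crimes.

```python
# District 0 is over-represented in historical records (60 vs 40) even
# though both districts have the SAME underlying crime rate. A "predictive"
# model allocates patrols in proportion to past records, and crime is only
# recorded where police patrol -- so the skew never corrects itself.

TRUE_RATE = 0.3              # identical underlying crime rate in both districts
predictive = [60.0, 40.0]    # records under record-driven patrolling
even = [60.0, 40.0]          # records under even patrolling, for comparison

for day in range(1000):
    share0 = predictive[0] / (predictive[0] + predictive[1])
    # Expected new records follow patrol allocation, not actual crime.
    predictive[0] += share0 * TRUE_RATE
    predictive[1] += (1.0 - share0) * TRUE_RATE
    even[0] += 0.5 * TRUE_RATE
    even[1] += 0.5 * TRUE_RATE

print(f"record-driven patrols, district 0 share: {predictive[0] / sum(predictive):.3f}")
print(f"even patrols, district 0 share:          {even[0] / sum(even):.3f}")
```

The record-driven share stays pinned at 0.600 indefinitely, while even patrolling drifts toward the true 0.500 (reaching 0.525 after 1,000 steps): the model mistakes its own patrol allocation for evidence, which is exactly the accountability and privacy-by-design concern the conclusions point to.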
