

Artificial Intelligence & Policy in India, Volume 2 (2021)

Abhivardhan & Aryakumari Sailendraja, Editors

© Indian Society of Artificial Intelligence and Law, 2021.



Year: 2021
Date of Publication: January 10, 2021
ISBN (online): 978-81-947131-2-8
ISBN (paperback): 979-85-871348-2-9
Editors: Abhivardhan, Aryakumari Sailendraja.

All rights reserved. No part of this publication may be reproduced, distributed, or transmitted in any form or by any means, including photocopying, recording, or other electronic or mechanical methods, without the prior written permission of the publisher and the authors of the respective manuscripts published as papers, except in the case of brief quotations embodied in critical reviews and certain other noncommercial uses permitted by copyright law. For permission requests, write to the publisher, addressed "Attention: Permissions Coordinator," at the address below.

Printed and distributed online by the Indian Society of Artificial Intelligence and Law in the Republic of India.
First edition, Artificial Intelligence and Policy in India, Volume 2, 2021.
Price (Online): 250 INR
Price (Paperback): 10.8 USD (Amazon.com)

Indian Society of Artificial Intelligence and Law, 8/12, Patrika Marg, Civil Lines, Prayagraj, Uttar Pradesh, India – 211001

The publishing rights of the papers published in the book are reserved with the respective authors of the papers and the publisher of the book. For the purpose of citation, please follow the format for the list of references as follows:

2021. Artificial Intelligence and Policy in India, Volume 2. Prayagraj: Indian Society of Artificial Intelligence and Law, 2021.

You can also cite the book through citethisforme.com (recommended).

For online correspondence, please mail us at: editorial@isail.in | executive@isail.in
For physical correspondence, please send letters to: 8/12, Patrika Marg, Civil Lines, Allahabad, Uttar Pradesh, India - 211001



Preface

The Indian Strategy on AI & Law Programme is a policy research programme started by the Indian Society of Artificial Intelligence and Law in December 2019. The purpose of the programme is to emphasize the policy gaps and considerations behind AI-centric governance and policymaking in India, wherein the focus of the field has been more on government affairs at Indian, global and comparative levels, based on diplomatic and digital ties between India and other D9 countries. The book presents the works produced in the research programme since July 2020, which encompass preliminary and some advanced analyses of recent developments in the arena of AI policy and governance. This book also includes the initial publications of the Civilized AI Project, which was started in October 2020. The Programme addresses AI and law governance across the following frontiers:

1. AI and International Law
2. AI and IPR Governance
3. AI Education
4. AI and Judicial Governance
5. AI and Privacy
6. AI and Cybersecurity

I extend my gratitude to Akash Manwani, Chief Innovation Officer at ISAIL, Kshitij Naik, Abhishek Jain and the Programme Coordinators for their inputs and suggestions for the policy research programme.

Abhivardhan
Director, Indian Strategy on AI & Law Programme



Table of Contents

Reports.
1. Recommendations Report on Trends Involving AI Ethics Boards and their Comparative Developments
2. Comments on the Indian Artificial Intelligence Stack by the Department of Technology, Government of India
3. Recommendations Report on AI Education with respect to the National Education Policy, 2020 released by the Government of India
4. Indian Strategy on AI & Law, 2020 - Responsible AI Recommendations Report
5. Indian Strategy on AI & Law, 2020 - AI Diplomacy Report: Critical Review 2020
6. Indian Strategy on AI & Law, 2020 - TikTok Ban Report 2020
7. Indian Strategy on AI & Law, 2020 – Recommendations on AI Governance in the Indian Judicial System

Discussion Papers.
8. International Algorithmic Law: Emergence and the Indications of Jus Cogens Framework and Politics – Abhivardhan
9. The Entitative Nature of Artificial Intelligence in International Law: An Analytic Legal Model – Abhivardhan
10. AI Ethics in a Multicultural India: Ethnocentric or Perplexed? A Background Analysis – Abhivardhan & Dr Ritu Agarwal
11. The Disruptive Unison of AI and Blockchain: A Critical Review – Arundhati Kale
12. AI and its Industrial Impacts in the Legal Sector: A Critical Review – Avani Tiwari

Policy Analyses.
13. Indo-US Relations and the National Security Commission on Artificial Intelligence (NSCAI)'s Interim Report and the Third Quarter Recommendations – Dev Tejnani
14. Policy Analysis on AI and the Weaponization of Genetic Data – Dev Tejnani

Miscellaneous Works.
15. AI and Fintech Governance in India: A Critical Review – Bikram Bhadra





Research Team
Indian Strategy on AI & Law Programme

Executive Team.
Abhivardhan, Director, Indian Strategy on AI & Law Programme
Akash Manwani, Deputy Director, Indian Strategy on AI & Law Programme
Ateka Hasan, Programme Coordinator, Indian Strategy on AI & Law Programme
Paranjay Sharma, Programme Coordinator, Indian Strategy on AI & Law Programme
Sarthak Tripathi, Programme Coordinator, Indian Strategy on AI & Law Programme
Prof Suman Kalani, Assistant Professor, SVKM's Pravin Gandhi College of Law; Chief Research Expert, Indian Society of Artificial Intelligence & Law

Editorial Team.
Kshitij Naik, Chief Managing Editor, The Indian Learning [e-ISSN: 2582-5631]
Abhishek Jain, Chief Managing Editor, Indian Journal of Artificial Intelligence and Law [e-ISSN: 2582-6999]
Aditi Sharma, Managing Editor, The Indian Learning [e-ISSN: 2582-5631]
Mridutpal Bhattacharya, Managing Editor, The Indian Learning [e-ISSN: 2582-5631]
Varun Nair, Junior Associate Editor, Indian Journal of Artificial Intelligence and Law [e-ISSN: 2582-6999]
Darshna Gupta, Junior Associate Editor, Indian Journal of Artificial Intelligence and Law [e-ISSN: 2582-6999]
Sameeksha Shetty, Junior Associate Editor, Indian Journal of Artificial Intelligence and Law [e-ISSN: 2582-6999]
Simran Thandi, Junior Associate Editor, Indian Journal of Artificial Intelligence and Law [e-ISSN: 2582-6999]
Dev Tejnani, Project Coordinator, The Civilized AI Project



{Reports}




1

Recommendations Report on Trends Involving AI Ethics Boards and their Comparative Developments

Saakshi Agarwal¹ & Sameer Samal²
¹ Research Contributor, Indian Society of Artificial Intelligence and Law
² Junior Research Analyst, Indian Society of Artificial Intelligence and Law
research@isail.in

Synopsis. This is a special recommendations report on trends involving AI Ethics Boards and their comparative developments in certain Global North countries, especially in the Americas and Europe. Artificial Intelligence and its related technologies are relatively new for the legislation and regulatory bodies of most countries, and pose critical challenges for the entire legal system. Therefore, it is necessary to establish widely recognized legal and ethical principles to govern and regulate these technologies. The ethical issues associated with the design, development, deployment and use of big data, artificial intelligence, and machine learning are complex and dynamic. We do not have an existing exemplary field of data ethics which is robust and comprehensive enough to address these emerging issues. Reliance can be placed on ethical principles, such as responsibility and liability, while developing an ethics code. However, concerns relating to strict compliance, monitoring and enforcement still exist. In this context, a collaborative approach between social and ethical perspectives can be beneficial. This approach might consider various areas of concern, multiple perspectives, and areas of expertise. This report aims to shed light on the intricacies involved in the governance of AI, and explores the idea and nature of AI ethics boards or AI ethics oversight committees within corporate entities. These committees can be an important component of organizational capacity for avoiding and addressing risks, managing ethical concerns, shaping data collection, and promoting responsible data uses. In furtherance of the same, this report explores the AI ethics boards/AI ethics oversight committees in play at various companies throughout the European Union, the United Kingdom, and the United States.

The Context of the United Kingdom

As countries race towards the adoption of artificial intelligence, the United Kingdom expects to hold a prominent position among the world leaders in the



development of artificial intelligence and its effective regulation. Certain primary requisites, such as leading AI companies and start-ups, a vigorous research culture, and a strong legal framework, are part of the UK's AI ecology. This section of the report will trace the incorporation of AI Ethics Boards in corporate entities that are involved in the development of artificial intelligence. The basic set of principles that these boards use can also assist the Government in drafting a general code for AI governance.

Report by the Select Committee on Artificial Intelligence, House of Lords

This report outlines the prevailing status of AI in the United Kingdom by inquiring into various aspects of its development. It covers the basics of artificial intelligence, including general engagement, design and development, risk assessment, and the plan to shape its future course. The report recognizes the need for an ethical framework within AI companies to ensure the safe development of artificial intelligence. It observes that many AI-deploying companies are trying to establish their own ethical codes of conduct for research and development. However, a common draft code is necessary for wider awareness, consistency and better coordination among the members of the UK's AI ecology. The report recommends that a core set of ethical standards and widely recognised principles be adopted by companies in the form of an AI ethics code. The recommendations are:

1. Development of artificial intelligence for the benefit and common good of humanity.
2. Establishment of principles of intelligibility and fairness as the basis for AI operation.
3. Protection of data rights and privacy of individuals and communities.
4. Right to education for all citizens to ensure human development alongside AI.
5. Prohibition of autonomous AI tools that are capable of hurting or deceiving humans.
In continuation of the above-mentioned principles, the Lord Bishop of Oxford has proposed 'The Ten Commandments of AI', which have been partly adopted in the report. Five of the principles were adopted as part of the core recommendations. The other five commandments, which were excluded, are:

1. AI should never be developed or deployed separately from consideration of the ethical consequences of its application.
2. The application of AI should be to reduce inequality of wealth, health, and opportunity.



3. AI should not be used for criminal intent, nor to subvert the values of our democracy, nor truth, nor courtesy in public discourse.
4. The primary purpose of AI should be to enhance and augment, rather than replace, human labour and creativity.
5. Governments should ensure that the best research and application of AI is directed toward the most urgent problems facing humanity.

The report does not specifically discuss AI ethics boards or committees, but it recommends the essential underlying principles for the functioning of such committees. These recommendations may be incorporated as an ethics code within companies that develop AI-enabled products or services.

Report by Accenture in association with Northeastern University Ethics Institute

The report, titled 'Building Data and Ethics Committees', aims to discuss the benefits of a committee-based approach to governing data and AI ethics in corporate entities, describe the essential components of such an approach, and identify the questions that an organization needs to clarify while developing a data and AI ethics governance committee.

1. The need for data and AI ethics- Currently, legal compliance with privacy regulations is the dominant regulatory framework for data collection and its use. The pace of innovation in technology has always outstripped the legal and regulatory mechanisms that govern it. As a result, legal guidance regarding data collection and management lags behind innovation. A robust data ethics capacity can help companies manage and forecast the risks and liabilities associated with data negligence and misuse. Moreover, it will also assist companies in fostering responsible development and shaping good governance.

2. Essential elements for building organizational capacity in ethics and data- Certain elements essential for building organizational capacity to anticipate and address legal and ethical issues surrounding artificial intelligence are:
(a) Appointing Chief Data/AI Ethics Officer(s)- appointment of persons in the aforementioned role with ethics as a dominant part of their responsibilities.
(b) Assembling ethics advisory groups- these organizational-level advisory groups will focus specifically on data and AI ethics.
(c) Incorporating ethics-oriented risk and liability assessment- incorporation of such assessment into decision-making and internal governance structures.



(d) Training for employees- providing training and establishing guidelines for employees to ensure responsible data management and practices.
(e) Including members responsible for representing legal, ethical and social perspectives on technology research and development projects.
(f) Establishing ethics committees- these committees will be capable of providing guidance not only on data policy, but also on decisions regarding data collection, use, storage and overall management.

3. Ethics Committees- The committee-based oversight model has proved successful in the following areas of research:
(a) Protecting human subjects in medical research.
(b) Providing policy perspectives in clinical and medical contexts.
(c) Hospital ethics committees.
(d) Institutional Animal Care and Use Committees.

4. Committee-based oversight models have certain features in common that can be applicable in the development and deployment of data and AI ethics committees within organizations. A well-built ethics committee may have the following features:
(a) Bringing together people with a range of expertise: legal, technical, and ethical.
(b) Being responsive to rapid changes and advancements in technology and ethics.
(c) Being capable of developing standards to be used in decision-making processes.

5. Roadmap/Checklist for Building an Ethics Committee- There are no existing data or AI ethics committees that can serve as successful models. Therefore, to build a well-structured and thoughtfully designed ethics committee, it is essential to refer to the structure and elements of the above-mentioned committee examples. While no single element is perfect, together they provide a comprehensive roadmap for building data or AI ethics committees:
(a) Why is the committee being created?
(b) What should guide the committee in decision-making?
(c) What are the basic values that the committee is meant to protect and promote?
(d) What are the primary guiding principles in support of the above-mentioned values?
(e) What is required in practice to satisfy the core principles?



(f) Inclusion of experts from legal, technical, ethical and social domains in the committee.
(g) What should be the selection process and tenure of the committee members?
(h) What should be the ethics committee review process?
(i) When should the committee be consulted?
(j) What are the standards by which the committee must make judgments?
(k) What should be the preferable timeline for committee reviews?
(l) How should the committee be audited and evaluated?

The report provides strong reasoning for suggesting a committee-based model for the effective governance of data and AI ethics in companies developing and deploying such technologies. Further, the report also provides a comprehensive roadmap for building and establishing such committees.

Machine Intelligence Garage's Ethics Committee: Ethics Framework

The Ethics Framework was created by the Machine Intelligence Garage's Ethics Committee in 2018 with the aim of assisting individuals and companies that develop and deploy AI-enabled products and services. The Framework emphasises concepts based on questions rather than principles, because questions help to illuminate the position of principles in practice. During the research phase of this project, the Committee found a multitude of references, such as the 2019 Framework by the High-Level Expert Group on Artificial Intelligence of the European Commission, the OECD's Principles on Artificial Intelligence, and the Beijing AI Principles. As a result, the Ethics Framework aligns heavily with such previously published material. The Framework proposes certain essential concepts, with a corresponding list of questions for each. These are:

1. Clarity on the intended benefits of the product or service- The benefits of the AI-enabled product or service should outweigh its potential risks, so it is important to critically evaluate the risks associated with the product or service against its expected benefits. These benefits should also be evaluated on the basis of the targeted user groups, while addressing the effect on non-user groups. The list of corresponding questions under this point is as follows:

(a) What are the intended applications of the product or service?
(b) Who or what might benefit from the product or service?
(c) Are those benefits common to the application type or specific to the implementation choices?



(d) How will the products or services be monitored to ensure they meet these goals and intended applications?
(e) How will these benefits be evaluated?
(f) Can the benefits of the products or services be demonstrated?
(g) Are these benefits certain, or might they change over time?

2. Forecast and manage the associated risks- It is essential to forecast and consider the risks associated with the product's intended uses and also with its foreseeable uses. These risks have to be evaluated similarly to the benefits, on the basis of their impact on the intended users as well as on various communities, society and the environment. The list of corresponding questions under this point is as follows:

(a) What might be the risks of foreseeable uses of the product, including its misuse?
(b) What potential groups are at risk?
(c) Currently, is there a process or method to classify and assess these risks?
(d) Are these risks common to the application type or specific to the implementation choices?
(e) How likely and significant are these risks?
(f) Is there a mechanism to mitigate these and other potential risks?
(g) How do external parties or employees report these risks, and is there a process to handle such reports or issues?

3. Responsible data use- Ethical and legally sound usage of data is necessary for any individual or company that develops AI-enabled products or services. Compliance with data protection legislation such as the UK Data Protection Act, 2018 or the EU General Data Protection Regulation, 2018 can be an appropriate start. There are various considerations under the realm of data protection that require the developers' attention. The list of corresponding questions under this point is as follows:

(a) How was the data obtained, and was consent obtained as well?
(b) Is the training data appropriate for its intended usage?
(c) Is the data anonymised or de-identified?
(d) Is the data collection or usage in proportion to the issue being addressed?
(e) Have potential biases in the dataset been evaluated?
(f) Is there a mechanism to assist the developer in dealing with errors in the data?



(g) Can information regarding the purpose and process of data processing be communicated clearly?
(h) What mechanisms are in place to ensure data security?
(i) Are there appropriate systems to audit and delete the data in a timely manner, once it is no longer required?
(j) Can individuals remove themselves from the datasets?
(k) Is there a publicly available privacy policy?
(l) Can individuals access data about themselves?
(m) Is the data available for research purposes?

4. Being trustworthy- For a product or service to be trusted, it needs to be understood by all the stakeholders who might be affected by it. The developers should be able to explain the purpose and limitations of the products or services to their customers, so that users are not confused or misled. Companies should establish procedures to report, investigate, and resolve issues, as things may go wrong despite best efforts. The list of corresponding questions under this point is as follows:

(a) Are there sufficient tools and mechanisms to ensure transparency, reliability and auditability?
(b) Is the nature of the product or service expressed in a form that can be easily understood by the intended users, third parties and the general public?
(c) Are all the potential errors and limitations shared with all the stakeholders?
(d) Does the company actively engage with all its members to effectively address issues and concerns? If not, why not?
(e) Is accountability determined in all possible situations? Are the accountable individuals equipped with the skills and knowledge required to take such responsibility?
(f) Are there adequate mechanisms for complainants to raise concerns with the company?
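The distinction between anonymisation and de-identification raised in the data-use questions above has a practical side. As a purely illustrative sketch (the field names, the salt, and the helper function are hypothetical assumptions, not taken from any framework cited in this report), a team might pseudonymise records before analysis along these lines:

```python
import hashlib

# Illustrative only: the identifier fields and salt handling here are
# hypothetical assumptions, not drawn from any framework in this report.
DIRECT_IDENTIFIERS = {"name", "email", "phone"}

def pseudonymise(record: dict, salt: str) -> dict:
    """Drop direct identifiers, keep the remaining fields, and add a
    salted hash so records about the same person stay linkable."""
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    digest = hashlib.sha256((salt + record.get("email", "")).encode()).hexdigest()
    out["pseudo_id"] = digest[:12]  # short, stable pseudonym
    return out

record = {"name": "A. User", "email": "a@example.org", "age": 34}
print(pseudonymise(record, salt="rotate-this-salt"))
```

Note that salted hashing is pseudonymisation, not anonymisation: whoever holds the salt can still link records, which is why the GDPR continues to treat pseudonymised data as personal data.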

The Context of the European Union

This section discusses a prominent development in the European Parliament that addresses certain issues regarding the ethical governance of AI by all the stakeholders involved. This briefing document was prepared to assist the members and staff of the European Parliament in their parliamentary work.



European Parliament Briefing- EU Guidelines on Ethics in AI

Policy-makers and developers across the world are looking for ways to avoid and address the risks associated with the development of artificial intelligence. In furtherance of the same objective, the European Union looks to lead the race by establishing a 'framework on ethical rules for AI'. The underlying principle of these guidelines is that the EU must develop a 'human-centric' approach towards the development of AI. The guidelines are addressed to all stakeholders that design, develop, deploy, use or are affected by AI. It is pertinent to note that these guidelines are non-binding and completely voluntary in their implementation: stakeholders may voluntarily opt to implement them while designing, developing, deploying or using AI systems in the European Union. The key ethical requirements for developing an ethical and trustworthy AI are:

1. Privacy and data protection- strict compliance with the General Data Protection Regulation is mandatory for all stakeholders involved in the design, development, deployment, and use of AI systems.
2. Technical robustness and safety- it is important to have secure, robust and reliable systems and software. This requirement revolves around cybersecurity and its ancillary safety practices.
3. Transparency- this is of paramount importance to ensure that the AI system is not biased. Transparency is required to be established in the entire AI industry through all the stages of design, development and deployment.
4. Diversity, non-discrimination and fairness- while designing and developing AI algorithms, it is important for developers to focus on eliminating bias, if any, from their algorithms. The stakeholders that may be directly or indirectly affected by these systems should be consulted while designing and implementing these tools.
5. Societal and environmental prosperity- AI-enabled systems must be developed to encourage environmental responsibility and sustainability. The companies deploying such tools should assess their impact on the environment and society while designing them. For instance, assessing energy consumption levels and exploring more sustainable methods of generation can be a responsibility arising from this guideline.
6. Accountability- adequate mechanisms must be put in place to ensure responsibility and accountability for the outputs generated by such AI-enabled systems.
7. Human agency and oversight- respect for fundamental rights is an essential element and a prominent principle in these guidelines. Certain measures have been prescribed under the guidelines to ensure this requirement reflects in practice:



(a) A fundamental rights impact assessment must be undertaken to ensure that the AI-enabled system does not infringe EU fundamental rights.
(b) The intended users should be able to interact satisfactorily with AI systems.
(c) There should always be human oversight, wherein humans have the capacity to override a decision made by an AI-enabled system.

Implementing these guidelines will attract certain challenges that stakeholders have already warned about. The guidelines have been widely criticised for their lack of clarity in many respects. Thomas Metzinger, professor of theoretical philosophy at the University of Mainz and a member of the Commission's Expert Group on AI, warns that "the guidelines are short-sighted, deliberately vague and do not take long-term risks into consideration". Furthermore, considering that these guidelines are non-binding, regulatory oversight to ensure their implementation is being questioned. Most of the principles listed in this document do not provide adequate mechanisms for enforcing compliance with voluntary commitments. With little to no incentive and given the non-binding nature of the guidelines, experts fear that AI-developing organizations might not adhere to these principles.
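The human-oversight measure above, that a human must always be able to override an automated decision, can be made concrete in system design. The following is a minimal, hypothetical sketch; the class, field names and confidence threshold are assumptions for illustration, not part of the EU guidelines:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# Hypothetical sketch of "human agency and oversight": an automated output
# takes effect only at high confidence, a human can always override it,
# and every override is audit-logged. All names here are illustrative.

@dataclass
class ReviewableDecision:
    model_output: str                 # e.g. "reject"
    confidence: float                 # model's confidence in its output
    human_decision: Optional[str] = None
    audit_log: List[Tuple[str, str, str]] = field(default_factory=list)

    def override(self, reviewer: str, decision: str, reason: str) -> None:
        """A human reviewer's decision always supersedes the model's."""
        self.human_decision = decision
        self.audit_log.append((reviewer, decision, reason))

    def effective(self, auto_threshold: float = 0.95) -> str:
        if self.human_decision is not None:
            return self.human_decision      # human decision prevails
        if self.confidence >= auto_threshold:
            return self.model_output        # high-confidence automation
        return "pending-human-review"       # everything else waits

d = ReviewableDecision(model_output="reject", confidence=0.80)
print(d.effective())   # low confidence, so the case awaits a human
d.override("officer-1", "approve", "documents verified manually")
print(d.effective())   # the human decision now takes effect
```

The design choice worth noting is that the automated output is provisional by default: automation is the exception that must be earned by confidence, while the human decision path and its audit trail are always available.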

The Context of the United States

Cognizant undertook a study in 2018 on the position of AI in businesses, in order to gauge their attitudes, expectations and plans with respect to AI. The subjects of this research were 975 executives across several industries in the USA and Europe. Their observations and inferences were as follows.

Businesses are rapidly adopting AI; however, its implementation is at a nascent stage. Businesses have an overall enthusiastic approach and see the potential that AI holds for their ventures in the future. Investment in the technology by companies, especially the larger ones, is on an upward trend. They intend to put the efficiency- and productivity-increasing capabilities of AI to their advantage and gain first-mover benefits vis-à-vis their smaller rivals. This makes it all the more pertinent that the ethics facet of AI usage be discussed.

There exists an overestimation of how AI is implemented in the respective companies. Executives have little awareness of the strategies employed by their companies for the ethical implementation of AI. Only about half of the subjects expressed that their companies have policies and procedures for identifying and considering the ethical aspects of AI, either in their initial design or after their launch to the public. Thus, there is a lacuna between the theoretical claims and the companies' actual engagement with AI.



An inflated sense of optimism in the subjects' responses shows that there is a general lack of understanding of the nuances and challenges of the ethical questions posed as this technology continues to penetrate deeper into varied systems. As per the research, the net effect on the labour market would not be insignificant, as certain kinds of jobs would be replaced by others. About 21 million new potential jobs could be created by AI in the coming 10 to 15 years, with novel positions such as Ethical Sourcing Manager.

Suggestions

1. Companies should take more active initiative in redirecting their focus to the ethical governance structure of their AI, beyond merely reducing costs through better innovation.
2. They should develop strategies that enable AI technology to function harmoniously with humans, with human interests at the forefront: a "human-centric" view.
3. There is strong advocacy for promoting transparency in AI decision-making so that it is trustworthy for consumers.
4. AI must strive to produce error-free results by refraining from data-driven biases, and the results so achieved must be personalized to the requirements of the user.
5. A responsible AI is one that is perceptive of the ethical dilemmas such a technology could raise. It is imperative that companies constantly monitor that their AI systems operate ethically and keep themselves updated with ongoing research. These non-technical aspects are as critical to the successful and sustainable development of AI technology as the technical ones.
6. AI technology functions best when it collaborates with humans and augments their activities and decisions.

A study like this could boost companies' comprehension of this technology and its trends. It would also assist them in taking more mindful and targeted actions suited to their respective circumstances. It promotes collective education about AI technology, so as to develop ethical operation across companies and industries and elevate a widespread set of principles in their implementation.

Suggestions for Indian Companies

Businesses across the globe are adopting AI technology at an exponential pace. They have an overall enthusiastic approach and see the potential that AI holds for their ventures in the future. Investment in the technology by the



companies, especially the larger ones, is on an upward trend. With the rapid digitisation of the Indian economy, we are looking at increasing utilisation of this technology. Constant studies and research postulate that multiple aspects have to be considered when employing it. As per research undertaken by Cognizant, there exists an overestimation of how AI is implemented in the subject companies. Executives have little awareness of the strategies employed by their companies for the ethical implementation of AI. Only about half of the subjects expressed that their companies have policies and procedures for identifying and considering the ethical aspects of AI, either in their initial design or after their launch to the public. Thus, there is a lacuna between the theoretical claims and the companies' actual engagement with AI. There is a serious need for an understanding of the nuances of the ethical questions posed as this technology continues to penetrate deeper into varied systems. Below are suggestions for Indian companies to enable them to utilise, to the fullest, the efficiency and productive capabilities that this technology entails.

AI Ethics Committees. According to the experiences of various companies across jurisdictions, the first step must be to establish a committee dedicated to the analysis and evaluation of AI ethics. An AI Ethics committee would maintain a system of checks and balances, weighing the decisions taken by companies with respect to the technology against the principles of safe, secure and user-friendly technology. The paucity of information on this subject makes it pertinent to have an Ethics committee dedicated solely to it. Companies like Accenture have become increasingly aware of the sensitivities the use of AI technology poses.
In their attempt to make their AI more responsible, they have laid considerable emphasis on setting up such a committee for the purpose of conducting studies and explorations to gauge the ethical nuances that do and could potentially arise from the use of this technology.
- A separate ethics committee would not only bring in the diverse perspectives of many people but also give definition to their responsibilities.
- The committee would be required to be responsive to advancements and novel applications of the technology.
- It would be the governing repository of knowledge that constantly develops standards, cases and precedents.
- While dealing with information and data that belong to consumers, the threshold must be much higher than minimum legal compliance: the AI technology should adhere to the values of the organisation, and its risks must be effectively managed.



An ethics committee's role would be to constantly monitor the above, along with responsible development and the shaping of good governance. It is for these reasons that an ethics committee would prove an essential component of any company employing this technology. Transparency and Trust. Companies like Axon and IBM strongly advocate that companies observe and promote the ethical principles of transparency and trust while employing their AI technologies. Companies should go beyond the narrow view of constant innovation to reduce costs and instead take active initiative in establishing an AI ethical governance structure. They should be able to ensure that users can trust the technology with their data. Users' privacy should be maintained by devising ways in which the client's data belongs to the client itself. Even when collaborating with the government, companies should be mindful not to share their clients' data with government agencies. Taking a lesson from the massive criticism Google faced for its Project Maven, Indian companies should develop AI technology that does not cause harm and is socially beneficial. Despite its continued collaborations with government agencies, Google decided not to participate in building software that would be used in weapons or unreasonable surveillance. Thus, companies must focus on approving only those technologies where the social benefits substantially outweigh the risks, and must incorporate safety constraints. Building a system of accountability would go a long way in ensuring transparency in the decisions made by the AI. The focus should be on building a responsible AI that is perceptive of the ethical dilemmas such a technology could raise. It is imperative that companies constantly monitor that their AI systems operate ethically and keep themselves updated with the latest research on the subject.
These non-technical aspects are as critical to the successful and sustainable development of AI technology as the technical aspects. The AI must strive to produce error-free results by avoiding data-driven biases, and the results so achieved must be personalised to the requirements of the user. Constant testing and innovation through unambiguous training methods is the key to an AI system with minimal biases. Human Collaboration. An important element of integrating AI technology is remembering that it has to function in collaboration with humans. An AI mechanism does not replace human intelligence; it aids and augments it. Thus, while devising their AI principles, companies must develop a "human-centric" view. Their strategies should enable the AI technology to work in harmony with humans, keeping human interests at the forefront. The intended purpose must be to make the benefits of AI pervasive across all strata and available to all levels of users.



A paramount function of technology companies that handle public data and information is to create and maintain their users' trust with respect to that information. This sense of security, or the lack of it, can make or break a company. Thus, in the longer run, paying attention to AI ethics would have far-reaching, advantageous consequences for a company and for the impact it has on its users. In short, technological integrity is the friend that companies need, even if they do not want it.



2 Recommendations Report on Trends Involving AI Ethics Boards and their Comparative Developments

Sindhu A1

1 Research Intern (Former), Indian Society of Artificial Intelligence and Law
research@isail.in

Synopsis. These are the comments on the Indian AI Stack, submitted by the Indian Society of Artificial Intelligence and Law to the Department of Telecommunications, Government of India.

Summary of the Draft

The proposed Indian Artificial Intelligence Stack (AI Standardization Committee, Department of Telecommunications, 2020) seeks to remove the impediments to AI deployment by proposing a six-layered stack, each layer handling different functions including consent gathering, storage, and AI/Machine Learning (AI/ML) analytics. Once developed, this stack will be structured across all sectors and will cover data protection, data minimization, open algorithm frameworks, defined data structures, trustworthiness and digital rights, and data federation (a single database source for front-end applications), among other things. The proposed Indian AI Stack hinges on five horizontal layers:

(I) The Infrastructure Layer:
- Ensures the setting up of a common data controller, including multi-cloud scenarios (private and public);
- Ensures federation, encryption and minimization at the cloud end; and
- Ensures proper monitoring and data privacy of the data stored.

(II) The Storage Layer:
- Ensures that the data is properly archived and stored in a fashion that allows easy access when queried; and


- Ensures that Hot Data/Cold Data/Warm Data are stored in an appropriate fashion to ensure fast or slow data access.

(III) The Compute Layer:
- Ensures proper AI & ML analytics;
- Provides a certain template of data access and processing to ensure an open algorithm framework is in place;
- Ensures Natural Language Processing and decision trees;
- Deep learning and neural networks;
- Predictive models and cognitive models;
- Analytics includes:
  o Data engineering and sandboxing;
  o Scaling and data ingestion;
  o Technology mapping and rule execution.

(IV) The Application Layer:
- Ensures that the backend services are properly and legitimately programmed;
- Develops a proper service framework;
- Ensures proper transaction movement; and
- Ensures that proper logging and management are put in place for auditing, if required at any point of time.

(V) The Data/Information Exchange Layer:
- Provides for an end-customer interface;
- Has a consent framework for data consent from/to customers; provision for consent can be for individual data fields or for collective fields, and different tiers of consent could typically be made available to accommodate different tiers of permissions;
- Provides various services through secure gateway services;
- Ensures that digital rights are protected and ethical standards are maintained; and
- Provides for open API access to the data, chatbot access and various AI/ML apps.

And one vertical layer:

(VI) The Security and Governance Layer:
- This is a cross-cutting layer across all the above layers that ensures that AI services are safe, secure, privately protected, trusted and assured. Encryption at different levels and cryptographic support are an important dimension of this layer.

The draft proposes to tackle algorithmic bias in the following ways:



1. Openness in AI algorithms;
2. Centrally controlled data;
3. A proper storage framework for AI so that the data is not incomplete or wrong; and
4. Changing the ‘culture’ of coders and developers.
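The six-layer architecture above is descriptive rather than a specification, but it can be made concrete with a small sketch. The following Python model is purely illustrative: the layer names follow the draft, while `AI_STACK`, `layers_covering` and every duty string are hypothetical identifiers invented for this example (the draft prescribes no schema or API).

```python
# Illustrative model of the proposed Indian AI Stack: five horizontal
# layers plus one cross-cutting vertical layer. All identifiers are
# hypothetical; the draft prescribes no schema or API.

AI_STACK = {
    "horizontal": {
        "Infrastructure": ["common data controller", "federation",
                           "encryption at the cloud end", "minimization",
                           "monitoring", "data privacy"],
        "Storage": ["archival", "hot/warm/cold data tiers",
                    "fast or slow query access"],
        "Compute": ["AI/ML analytics", "open algorithm framework",
                    "NLP and decision trees", "deep learning",
                    "predictive and cognitive models"],
        "Application": ["backend services", "service framework",
                        "transaction movement", "logging for audit"],
        "Data/Information Exchange": ["end-customer interface",
                                      "tiered consent framework",
                                      "secure gateway services",
                                      "open API and chatbot access"],
    },
    # The vertical layer cuts across every horizontal layer.
    "vertical": {
        "Security and Governance": ["encryption at all levels",
                                    "cryptographic support",
                                    "safety, trust and assurance"],
    },
}

def layers_covering(keyword):
    """Return every layer whose stated duties mention the keyword."""
    hits = [name for name, duties in AI_STACK["horizontal"].items()
            if any(keyword in d for d in duties)]
    hits += [name for name, duties in AI_STACK["vertical"].items()
             if any(keyword in d for d in duties)]
    return hits
```

A query such as `layers_covering("encryption")` returns both the Infrastructure layer and the Security and Governance layer, mirroring the draft's point that security responsibilities cut across the entire stack.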

Issues with the Draft

1. The security layer specifies cryptography and encryption as essential measures to ensure security in the system. The draft also points out that suitable encryption methodologies still need to be developed.

- There is a lack of an adequate encryption policy in India. Owing to the ongoing encryption debate in India (Mohanty, 2019), would such a measure be safe from orders for the interception and decryption of information from law enforcement agencies? These agencies are empowered to demand such access under Section 69 of the Information Technology Act 2000 and under search-and-seizure provisions such as Section 91 of the Code of Criminal Procedure 1973. If the Personal Data Protection Bill is enacted, its exemptions will provide further leeway for the authorities to demand access to data. The absence of an implemented encryption policy aggravates these concerns. Would this in any way affect the encryption measures proposed by the committee in this draft?
- Has the draft considered the provisions of the Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules 2009? It is advisable for the draft to state which policies would apply and whether certain exemptions would be applicable to the data under the AI Stack.

2. The draft specifies that, in the absence of a clear data protection law in the country, the EU's General Data Protection Regulation (GDPR) or comparable laws can be applied, serving as an interim measure until Indian laws are formalised.

- However, the draft seriously overlooks certain dissimilarities between the Personal Data Protection Bill (PDPB) of India and the General Data



Protection Regulation (GDPR) of the European Union. If the current draft focuses on conforming to the requirements of the GDPR, then once the PDPB is enacted it will have to go through several changes. The following reasons show why being compliant with the GDPR does not necessarily mean being compliant with the PDPB:

(i) While the GDPR does not govern anonymised data at all, the PDPB allows the government to compel the disclosure of non-personal data and anonymised data.

(ii) The definition of ‘personal data’ under the PDPB is broader than the definition under the GDPR, and so is the definition of ‘sensitive personal data’. The PDPB therefore imposes a higher standard for processing sensitive personal data than the GDPR does.

(iii) The lawful bases for processing personal data under the GDPR and the PDPB differ. Under the GDPR there are six lawful bases: consent, performance of a contract, legal obligation, legitimate interests, protection of life and vital interests, and public interest. Under the PDPB there are seven: consent, legal obligation, medical emergency involving a threat to life or a severe threat to health, providing medical treatment or health services, protecting the safety of individuals during a disaster, employment purposes, and “reasonable purposes” (to be defined by the Data Protection Authority). Under the GDPR all the grounds are placed on an equal footing, whereas the PDPB classifies consent as the primary ground and treats the other six as exceptions.

(iv) Under the GDPR, data localization is not essential unless international data-transfer requirements are not met. The PDPB, by contrast, mandates that ‘critical personal data’ (to be defined by the government) must be stored and processed in India, except under emergency circumstances or where the government has approved the transfer. ‘Sensitive personal data’ must be stored in India, but a copy of such data may be transferred outside India with explicit consent.

(v) The PDPB definition of consent is considerably more flexible than the definition under the GDPR.
The PDPB also proposes a new type of entity to help manage the consent of data principals, i.e. ‘consent managers’, which does not exist under the GDPR.

(vi) Although the PDPB and the GDPR are broadly based on the same principles in terms of security compliance, under the GDPR all Data Controllers have to



undertake Data Protection Impact Assessments prior to processing some kinds of personal data (subject to limited prescribed exemptions), whereas under the PDPB only ‘significant Data Fiduciaries’ are required to do so, where their processing involves (a) new technologies; (b) large-scale profiling or use of sensitive data; or (c) any other activities that carry a significant risk of harm, as may be specified by regulations (Wimmer, et al.).

(vii) The GDPR mandates that data be kept in an identifiable form for no longer than is necessary, and certain exceptions such as public interest have been clearly laid down for increasing the storage period under the GDPR. The PDPB, however, mandates that data shall not be retained beyond the period necessary to satisfy the purpose for which it is collected and has to be deleted once the purpose is fulfilled. Even if such data needs to be retained beyond the necessary period, the PDPB demands explicit consent from the data principal.

(viii) The GDPR requires Data Controllers to notify the Data Protection Authority of a breach within 72 hours only if it is likely to result in a “high risk” to individuals. The PDPB, however, requires Data Fiduciaries to notify the Data Protection Authority of a breach as soon as possible (regulations will decide the exact time period) whenever it is “likely to cause harm to any data principal” (Roshan, et al., 2020).

Apart from this, there are significant differences in other requirements, such as audit requirements, the collection of personal data and the processing of personal data belonging to children, the registration of ‘Significant Data Fiduciaries’, and additional provisions for social media intermediaries. Therefore, if the current Indian AI Stack is developed in consonance with the GDPR, it risks being non-compliant with the PDPB. Once the PDPB is enacted, the draft will have to undergo significant changes to ensure compliance with Indian data protection law.

3.
The Infrastructure Layer provides for the setting up of a ‘common data controller’ (an entity that determines the purpose and means of processing personal data), including both public and private clouds.

- It is not clear from the draft whether this ‘common data controller’ is similar to the Data Controllers under the GDPR. Data Controllers as under the GDPR do not exist under the PDPB; instead, the PDPB establishes ‘Data Fiduciaries’. The functions carried out by Data Controllers under the GDPR are similar to those carried out by Data Fiduciaries under the PDPB; however, the deliberate use of the word


‘fiduciary’ under the PDPB, as opposed to ‘Controller’, indicates that Fiduciaries owe a higher level of duty and care. Therefore, in the proposed AI Stack, the ‘common data controllers’ might not be the same as the ‘data fiduciaries’ under the Indian data protection law once it is enacted, even though they carry out similar functions. The draft also fails to specify the obligations and powers of this authority; it merely specifies that the entity will be responsible for determining the purpose and means of processing personal data. It is also not clear whether this duty of determining the means of processing personal data will overlap with the duties of Data Controllers and Data Fiduciaries under the respective data protection laws.

4. The Storage Layer in the proposed draft ensures that the data is properly archived and stored in a fashion that allows easy access when queried. The draft also proposes a classification into Hot Data, Cold Data and Warm Data according to the relevance of the data and its usability.

- The GDPR does not demand deletion of data once the purpose for which it was collected is exhausted; this could be inconsistent with the PDPB's provisions, which demand that data shall not be retained beyond the period necessary to satisfy the purpose for which it was collected and must be deleted once that purpose is fulfilled.
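The retention mismatch between a GDPR-style archive and the PDPB's purpose limitation can be illustrated with a minimal check. The sketch below is hypothetical: `must_delete_under_pdpb` and all of its parameters are names invented for this example, not anything defined in the draft or the Bill.

```python
# Hypothetical sketch of a PDPB-style retention check for the storage
# layer. Under the PDPB, data must be deleted once the purpose for which
# it was collected is fulfilled or the necessary period lapses, unless
# the data principal gives explicit consent to longer retention.
from datetime import datetime, timedelta

def must_delete_under_pdpb(collected_at, retention_days, purpose_fulfilled,
                           explicit_consent_to_retain=False, now=None):
    """Return True when PDPB-style purpose limitation requires deletion."""
    now = now or datetime.utcnow()
    past_retention = now > collected_at + timedelta(days=retention_days)
    return (purpose_fulfilled or past_retention) and not explicit_consent_to_retain
```

A GDPR-style hot/warm/cold archive could legitimately keep a record for which a check like this already returns True, which is exactly the inconsistency noted above.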

5. The draft calls for all the government and private-sector players, including manufacturers, service integrators, cloud service providers, etc., to come together and coordinate in the development of the Indian AI Stack in order to seamlessly cater to all sectors.

- This warrants a closer look at whether all government employees have the skill sets required to make this AI project successful and whether digital literacy rates among government employees are high.
- The Compute Layer ensures proper AI & ML analytics, and embedding AI or ML in national systems is a piece that has to come from the government, not merely private tech companies, to make it successful; digital literacy therefore holds immense significance (Chawla, 2020). Government officials and policymakers need to be data literate to make data-driven decisions. Therefore, there is a need for AI literacy/education and skill rejuvenation.

6. Under measures to tackle AI bias, the paper proposes the need to centrally control data using a single or multiple cloud controllers because the data from which the AI learns can itself be flawed or biased, leading to flawed automated AI decisions.



However, the draft is silent on how the data will be controlled centrally and does not prescribe any procedural guidelines. It is also not clear whether a separate entity will be created for this sole purpose.

7. Under measures to tackle AI bias, the paper proposes to change the culture of coders and developers: there is a need to change the “culture” so that coders and developers themselves recognise the “harmful and consequential” implications of biases. This goes beyond standardisation of the type of algorithmic code and focuses on the programmers of the code. Since much coding is outsourced, this would place the onus on the company developing the software product to enforce such standards. The draft clearly places the onus only on the companies developing the software product to enforce standards on coders and developers so that they do not impose their biases on the algorithm. However, the draft fails to specifically lay down the standards that these companies need to incorporate. It is also silent about whether a separate authority will be appointed to check that such measures are adequately incorporated and whether these companies will be penalised or reprimanded if they fail to comply. It is advisable for the draft to create an authority to ensure compliance with this requirement by periodically reviewing a company's standards and supervising whether they are being implemented efficiently. This becomes important because tackling AI biases is one of the prominent objectives of the proposed Indian AI Stack. Another indispensable requirement is that governments revamp current training and skill-development programmes so that the promotion of digital literacy is more inclusive and far-reaching.

8.
The draft proposes to implement the provisions of the General Data Protection Regulation (GDPR) in order to ensure effective data protection standards. However, there are certain rights granted under the GDPR, such as the right of rectification, the right to be forgotten, the right to withdraw consent and the right to restriction of processing, and research has shown that these rights are largely inconsistent with AI systems. The proposed AI Stack seriously overlooks the implementation of these rights, which are also present under the PDP Bill. One right under the PDP Bill and the GDPR that is largely inconsistent with



AI systems is the Right to be Forgotten (Right of Erasure), primarily because AI systems are not taught to forget the way humans are. It has been observed that when data or memory is deleted, it does not automatically disappear from the system; rather, the data is redirected onto a ‘linked list’ which will eventually be processed and made part of the available software memory, to be re-used later. Practical implementation of the right to be forgotten in such situations may not mean that one is compliant with the letter of the law in the traditional sense (Green, 2020). Whether the right of erasure under these data protection laws entails making the data disappear or merely making it unavailable is as yet unclear. Some studies have suggested methods such as ‘SISA training’ (short for Sharded, Isolated, Sliced, and Aggregated training) for better enforcement of these rights under the GDPR. The method involves dividing the training data into multiple disjoint shards in a manner that ensures each training point is included in one shard only. These shards are then trained in isolation, which effectively limits the influence of a point to the model that was trained on the shard containing it. Finally, if there is a request to unlearn or delete a training point, only the model for the shard containing that point needs to be retrained (Machine Unlearning, 2019).

9. The draft proposes four ways to tackle algorithmic bias: open algorithms, centrally controlled data, a proper storage framework and changing the culture of coders and developers. However, there are other causes of algorithmic bias that the draft ignores, and placing the complete onus on the ‘culture’ of the coders and developers is not the ideal way of approaching AI bias. Biases creep into AI for several reasons, including insufficient training data sets or a lack of diversity in those data sets, a lack of oversight in the collection and sampling of data, and a lack of regular audits and reviews of policies. The proposed draft fails to address these concerns and instead places the burden mostly on the coders and developers.
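The SISA scheme described above can be sketched in a few lines. This is a toy illustration, not the authors' implementation: the per-shard "model" here is just a sum, standing in for a real constituent model, and all function names are invented for the example.

```python
# Toy sketch of SISA (Sharded, Isolated, Sliced, and Aggregated) training
# from Bourtoule et al. (2019). Each shard is trained in isolation, so a
# deletion request only forces retraining of the shard that held the point.

def shard(data, n_shards):
    """Partition the training data into disjoint shards."""
    shards = [[] for _ in range(n_shards)]
    for i, point in enumerate(data):
        shards[i % n_shards].append(point)
    return shards

def train(shard_data):
    """Stand-in 'training': any deterministic function of one shard."""
    return sum(shard_data)

def unlearn(shards, models, point):
    """Remove a point, retraining only the affected constituent model."""
    for i, s in enumerate(shards):
        if point in s:
            s.remove(point)
            models[i] = train(s)  # the other shards are untouched
            break
    return models

data = list(range(10))
shards = shard(data, n_shards=2)           # [[0,2,4,6,8], [1,3,5,7,9]]
models = [train(s) for s in shards]        # isolated per-shard models
models = unlearn(shards, models, point=4)  # only shard 0 is retrained
```

In the real scheme, prediction aggregates the constituent models' outputs (e.g. by majority vote), so the ensemble as a whole never has to be retrained from scratch after an erasure request.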

References

1. AI Standardization Committee, Department of Telecommunications. 2020. Indian Artificial Intelligence Stack. Tec.gov. [Online] 2 September 2020. [Cited: 1 October 2020.] https://www.tec.gov.in/pdf/Whatsnew/ARTIFICIAL%20INTELLIGENCE%20%20INDIAN%20STACK.pdf.
2. Chawla. 2020. What Does India Need In Place To Implement Nationwide AI Systems Across Sectors? Analytics India Magazine. [Online] 10 September 2020. [Cited: 1 October 2020.] https://analyticsindiamag.com/what-does-the-government-of-india-need-in-place-toimplement-nationwide-ai/.



3. Green. 2020. GDPR: The Right to Be Forgotten and AI. Varonis. [Online] 29 March 2020. [Cited: 1 October 2020.] https://www.varonis.com/blog/right-forgotten-ai/.
4. Bourtoule, et al. 2019. Machine Unlearning. arXiv e-prints, arXiv:1912.03817, 1 December 2019.
5. Mohanty. 2019. The Encryption Debate in India. Carnegie Endowment. [Online] 30 May 2019. [Cited: 1 October 2020.] https://carnegieendowment.org/2019/05/30/encryptiondebate-in-india-pub-79213.
6. Roshan and Srinivasan. 2020. Comparative Analysis: General Data Protection Regulation, 2016 and The Personal Data Protection Bill, 2019. Mondaq. [Online] 12 March 2020. [Cited: 1 October 2020.] https://www.mondaq.com/india/privacy/903076/comparative-analysisgeneral-data-protection-regulation-2016-and-the-personal-data-protection-bill-2019.
7. Wimmer, Maldoff and Lee. Indian Personal Data Protection Bill 2019 vs. GDPR. IAPP. [Online] https://iapp.org/media/pdf/resource_center/india_pdpb2019_vs_gdpr_iapp_chart.pdf.



3 Recommendations Report on AI Education with respect to the National Education Policy, 2020 released by the Government of India

Neerja Seshadri1, Ritansha Lakshmi2, Dyuti Pandya3, Paranjay Sharma4, Avani Tiwari5, Yash Raj Verma6, Shubhangi Chaudhary7 & Prachi Puranik8, Authors
Aryakumari Sailendraja9 & Nayan Grover10, Editors

1, 3, 4, 7, 8 Research Member, Indian Society of Artificial Intelligence and Law
2, 5, 10 Editor, Indian Society of Artificial Intelligence and Law
6 Editor, Internationalism
9 Chief Operations Officer, Indian Society of Artificial Intelligence and Law

executive@isail.in

Synopsis. This is a Recommendations Report on the model of AI Education with respect to the National Education Policy (NEP), 2020 released by the Government of India. The Report includes an Executive Summary by Abhivardhan, the Chairperson, ISAIL & Akash Manwani, Research Analyst at Internationalism. The team is grateful to Prof Suman Kalani, Chief Research Expert, ISAIL, Adetola Jesulayomi, Research Member, Kshitij Naik, Nodal Advisor and Yash Raj Verma, Editor at Internationalism for their inputs, contributions and suggestions.

Executive Summary

The Report is a preliminary attempt to construct some base and discourse to discuss and improve the limited constraints of the National Education Policy, 2020 released by the Government of India. It must be admitted that the policy should have come much earlier; it could have been a 1991 Rao moment, as liberalisation was for economics and finance in those times. The reforms proposed in the policy cannot be implemented unless the bureaucratic capacity of India is settled and reasonable. Nevertheless, it reflects the political will and motivation of the Government to bring the policy to the table. The criticisms I hold of the policy have not been discussed in the report, since the report is specific to AI education. Yet,



it must be admitted that the problems of implementation stem from the very need to indigenize the values of civilizational resilience and technical liberalism in constitutional governance. The Constitution of India, 1950 must therefore not be seen as a restrictive, socialist document that does not invite or attract the intercepted and intersectional influence of technology into our lives. With the hope we can confer on the Constitution, it is clear that we can come up with innovative and stronger solutions in the matter of disruptive tech and education strategies. Technology is too political, as EAM Dr S Jaishankar rightly points out, and it is therefore important to understand that infusing AI into the Indic civilization and the Indian education system can be a good measure when logistics, aesthetics, a focus on economic libertarianism rather than socialism, and robust governance are, if not assured, at least gradually implemented. I welcome the recommendations given in this report authored by our members and congratulate them for it.

Abhivardhan
Chairperson & Managing Trustee
Indian Society of Artificial Intelligence and Law

Additional Executive Summary

The recent NEP 2020 has actively propagated digitally savvy methods to enhance computational thinking through the introduction of Artificial Intelligence (AI), Big Data Analytics, Machine Learning and so on. The larger intention is to assist the development of students' cognitive ability and to equip them with the necessary skills, on par with international standards. AI is not only being considered as a medium of teaching and of carrying out other administrative tasks; it is also being considered as a separate discipline altogether. Simplification, accessibility, familiarisation and inclusivity will form the cornerstones for the entry of AI into the mainstream education industry. Flexibility in learning and the multidisciplinary nature of academic studies shall help rejuvenate employable skills by developing software capabilities alongside university education. AI in education is becoming part of the national vision of several countries: China, the UK and the US have already made significant developments in this area. Realising the benefits of intelligent systems, educational institutions around the globe are adopting AI for research as well as curriculum composition. AI can be accessible 24x7, giving learners self-paced learning, extra teaching time and innovative methods of support. Progressive ideas like the introduction of coding in the middle stages of education reflect a well-thought-out policy; it will not only enable mathematical and computational thinking but also assist in the creative use of digital technologies. Even something as simple as a smart classroom has many areas that can accommodate an AI-enabled device, for instance data storage, tutoring, advanced learning systems, grading tests, global classrooms, noting the attendance of students,



analyzing facial expressions and so on. New innovative methods of social and emotional learning can also be introduced through the effective use of AI. It is acknowledged that AI has much to contribute in the education sector. It can detect early signs in a student's academic performance, give real-time feedback, predict future risk of failure and suggest areas that need to be worked upon, which helps universities take appropriate remedial actions in the interest of the students. AI's ability to handle and process big data could give students a personalised experience based on daily performance and examination results. Along with being helpful for the student community, AI could also prove beneficial for administrative tasks, answering FAQs, making assignments and so on. It could tremendously reduce the burden on teachers while enabling interventions through personal attention for students. Intelligent systems can provide modern solutions and promote individualised experiences, multilingualism and skill rejuvenation. The legal and policy discussion in AI revolves around ethics, data protection, regulations, cyber security, due-diligence software, natural language processing and so on. Skill rejuvenation is at the core of the Policy: skills help in navigating the unprecedented problems of today. Michael Chui, a partner at the McKinsey Global Institute, rightly states that we must “prepare today's young people for a world of constant uncertainty”.

Akash Manwani
Chief Innovation Officer
Indian Society of Artificial Intelligence and Law

Introduction

The National Education Policy 2020, released on 29 July 2020, is said to be the first education policy of the 21st century, and it aims to address the many growing developmental imperatives of India. The focus of NEP 2020 is to improve the quality of education and the curriculum and to rope in new technologies while keeping India's traditions and value systems intact. It aims to bring our students up to date with the new employment landscape, both nationally and globally, while ensuring they respect and optimally use the traditional learning of the rich Indian heritage. One of the major objectives is to transform pedagogy towards imparting more practical, experience-based learning rather than theoretical knowledge, so as to bridge the gap between what is taught and what is required in the market. A major underlying objective behind the policy is to achieve Goal 4 of the 2030 Agenda for Sustainable Development adopted by India in 2015, i.e. “to ensure inclusive and equitable quality education and promote lifelong learning opportunities for all”. The new policy seeks to reform the educational ecosystem to ensure that quality education crosses all geographical, historical, social and



economic barriers and reaches every student across the country, especially those from weaker and vulnerable sections of society. It intends to inculcate a curriculum that involves novel technological fields like applied mathematics, computer science and data science, in concurrence with multidisciplinary skills across the sciences, social sciences and socio-economic studies, which would help fulfil the future requirement of a skilled workforce.

Role of Artificial Intelligence in NEP 2020

Artificial Intelligence could become a promising technology to transform India's education system. AI is omnipresent and has made inroads into all walks of life, including education. While AI may not replace teachers, its use in combination with other educational technologies and with teachers themselves would provide optimum support. AI is gathering momentum as an aid to teachers and educational institutions, with immense potential in administration, learning, reasoning, problem-solving, tutoring, grading, language assessment and predicting the requirements of each student individually. Thus, AI can significantly help to make the quality education which the NEP calls for available to all students across the country. This analysis focuses specifically on how AI can be used as a catalyst in achieving the objectives of the new education policy. Various case studies showcasing the use of AI technology to optimally utilise the available data and other resources to achieve the objectives are presented. We have conceptualised and analysed the possibilities of AI in all four parts of the National Education Policy (NEP) 2020.

School Education

In the School Education part of the policy, the authors mainly discuss three subheads, that is:
7. Personalised learning and the predictive quality of AI in helping the NEP fulfil its objectives. The NEP 2020 aims to improve the quality of education, restructure the curriculum and rope in new technologies while keeping India's traditions and value systems intact. This section addresses the objectives of the NEP and how AI approaches can make it good not only on paper but in reality. It discusses re-establishing the teacher's role, framing eligibility criteria and introducing the Teacher Eligibility Test to ensure a better quality of education delivered to students. It also discusses how AI's taking over some of teachers' tasks will enable them to bring in more human capabilities, such as mentorship, emotional support, interpersonal skills, one-on-one communication and more time for students with difficulties, with some promising AI tools as inspiration.



8. The importance of an inclusive education approach, and how artificial intelligence can prove extremely beneficial in providing holistic and ubiquitous access to appropriate learning opportunities to students of all geographic backgrounds, including students with physical deformities and emotional disabilities.
9. Restructuring the school curriculum to enhance the experiential learning and creative thinking of students, such as through the introduction of Coding, Design Thinking, Holistic Health, Organic Living, Environmental Education, etc., and how these skills are very important for India's future and leadership role in numerous upcoming fields and professions.

In the end, the authors give insights into promoting multilingualism and how this will help keep India's traditions and value systems intact.

Higher Education

This section starts by pointing out the major changes that NEP 2020 brings to the higher education sector. It then enumerates the underlying basis on which the policy recommends that Artificial Intelligence be studied as a university course, and the importance of learning its integration with multidisciplinary courses and professional areas like healthcare, agriculture, etc. The analysis supports this with proper reasons and presents different possible approaches and appropriate methods of implementation to achieve the best outcome. Some global case studies are also examined for a better understanding of the concept. Light is thrown on different AI-associated tools, such as smart classrooms and AI-enabled feedback, which can improve the learning experience and make education more accessible for all. Lastly, how artificial intelligence can resolve issues specifically emerging in the higher education sector in India is explored, and recommendations are offered after evaluating the ground situation.
Critical analysis and implementational challenges

Apart from primary and higher schooling, there are other areas that the policy interfaces with and must focus upon. Numerous multidisciplinary fields are directly impacted by the change in educational policy, the first and foremost of which is professional education. Areas such as adult education and lifelong learning, where adults can reach out to training centres with the help of social workers, and Indian languages, arts and culture, taught so that future generations do not lose India's rich heritage, are also addressed by this policy. It cannot be questioned that these areas help shape various crucial sectors in a country; therefore, they cannot be dissociated from educational policy. It would not be new to



introduce technology-integrated classrooms. Not only using these technologies reasonably and fairly but also preventing their misuse will be the mission of the National Educational Technology Forum (NETF). Lastly, the policy addresses ensuring equitable use of technology for online and digital education. The policy has correctly acknowledged that with technology comes risk, and therefore proper training is suggested for all teachers. A detailed discussion of the same is provided in this policy analysis.

Recommendations & Conclusions: Basis and Propositions

In this part, the authors critically analyse the NEP report's treatment of AI within the education system and provide recommendations that can help the government improve overall learning outcomes for all, and identify ways in which the use of AI can ensure that India's cultural values are preserved in the technological transition and, moreover, fostered for the betterment of the education system and society overall. The part ends with a conclusion stating "Ideas are useless unless used", to make the NEP good not only on paper but in reality.

NEP 2020 and School Education in India

Teachers

The National Education Policy 2020 has a framework for improving the quality of education by raising the professional eligibility criteria for teachers. There is a minimum requirement of a 4-year Bachelor of Education degree and clearance of the Teacher Eligibility Test. However, despite all measures to standardize and formalize the quality of teachers in rural and urban regions, the role of teachers after the introduction of artificial intelligence technology remains largely unclear. (Ministry of Human Resource Development, Government of India, 2020) Artificial intelligence can prove to be a beneficial tool at the foundational, preparatory, middle and secondary levels. With the rise of various AI-based software in the market, the government must take the necessary steps either to invest in these technologies so that they can be incorporated at subsidized rates in schools, or to consider indigenization of such technology. There is a need to incorporate such innovative technologies because the laborious tasks of teachers can be reduced, giving them a significant amount of time to deal with aspects that software cannot necessarily cover, including a focus on interpersonal skills, team-building activities and progression of their own careers with adequate time for research. (Lieberman, 2020) The apt method would be a hybrid system comprising classes by teachers and exercises on



artificial intelligence-powered platforms. Indigenization of such platforms can also be very beneficial in making such technology accessible in rural areas. Creating mobile-friendly applications is also a way forward, as smartphones have become accessible in most places and can be used by the whole household. The National Education Policy also recommends that teachers be given professional development courses regularly; these should cover changes in artificial intelligence tools to keep all teachers aware of technological developments. Some applications with promising results globally, which can act as inspiration, are as follows:
• Dragon Speech Recognition by Nuance is speech recognition and transcription software that proves beneficial for students and faculty. It can be used to help students with disabilities, as it helps with word recognition and structure. Teachers can use it to reduce monotonous work such as typing notes, made easier by dictating to the software. (Nuance)
• Cognii is an AI-based platform aimed at improving critical thinking abilities, which also provides real-time feedback. The platform offers one-on-one tutoring and can be customized to suit the needs of various individuals. It will prove beneficial while training secondary students for preparatory examinations to pursue higher education. (Cognii)
• Century Technologies is a platform that tracks student progress and is tailored to meet every student's needs. It also provides information about teacher-student communication gaps, along with feedback for both parties. (Century)
• KidSense is an AI-based platform for speech recognition among children. In situations where it is difficult to track children's vocabulary, this software proves a beneficial tool. (Kidsense)
• Carnegie Learning is an AI platform for mathematics that tailors questions to each individual's level of expertise. (Carnegie Learning) Since mathematics is considered one of the essential subjects in the policy, this tool is essential to incorporate into Indian education.
• Kidaptive is an adaptive learning platform that challenges students based on their strengths and weaknesses. The platform also assesses the future course of action needed to improve grades. (Kidaptive)
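The adaptive-learning idea behind platforms such as Kidaptive or Carnegie Learning can be sketched very simply. The function, thresholds and level range below are hypothetical illustrations of the general technique, not the vendors' actual algorithms:

```python
# A toy sketch of adaptive difficulty selection: raise or lower the
# difficulty level based on the share of a student's recent answers
# that were correct. All thresholds here are invented for illustration.

def next_difficulty(recent_results, current_level, min_level=1, max_level=5):
    """recent_results is a list of 1 (correct) / 0 (incorrect) answers."""
    if not recent_results:
        return current_level
    accuracy = sum(recent_results) / len(recent_results)
    if accuracy >= 0.8 and current_level < max_level:
        return current_level + 1      # student is coasting: challenge them
    if accuracy < 0.4 and current_level > min_level:
        return current_level - 1      # student is struggling: ease off
    return current_level              # accuracy in a healthy band: hold


# Example: four of five recent answers correct at level 2 -> level 3.
print(next_difficulty([1, 1, 1, 0, 1], current_level=2))  # 3
```

A real platform would replace the fixed thresholds with a learned student model, but the feedback loop (observe performance, adjust challenge) is the same.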

Equitable and inclusive education

The National Education Policy primarily emphasizes the importance of inclusive education approaches, particularly concerning women, socially and economically backward classes and children with disabilities. The policy also stresses the importance of including students from all geographic backgrounds. Artificial intelligence can prove extremely beneficial in ensuring the same.



There have been suggestions that this technology can be used to track the reasons for school dropouts. This may prove a step ahead in ensuring that education reaches all individuals irrespective of the societal impositions on them. Estimates put the dropout rates at 4% at the primary level and 20% in higher education. (Arora, 2020) Artificial intelligence can be used to tutor these students, as some of the main reasons for dropping out of school are financial inadequacy and social stigma, which put children in a precarious situation. This technology can be used to track the reasons for dropping out of school in different areas and address the issue directly. Where girls drop out of schools, the government can launch awareness campaigns in the area to address the issue. Further, in geographically remote areas, the government can ensure that education reaches students' homes by ensuring that each family has access to at least one electronic device that can help in children's education. The improvements to the Kasturba Gandhi Balika Vidyalayas are also a welcome step. (Ministry of Human Resource Development, Government of India, 2020) However, it needs to be taken into consideration that these schools cannot accommodate all students. In such exceptional situations, the government can consider homeschooling options using artificial intelligence, with a requirement to take tests every quarter at the nearest educational institution. To ensure inclusive learning, due regard must also be given to children with special needs. Artificial intelligence can play a major role here, as there are tools that can help students with physical deformities and emotional disabilities. Children with dyslexia and autism can be effectively helped under the Samagra Shiksha program of the Ministry of Human Resource Development. (Department of School Education and Policy)
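The dropout-tracking idea discussed above can be illustrated with a minimal sketch. The reason categories and record layout below are invented assumptions, not a real dataset; the point is only that aggregating recorded dropout reasons by region tells the administration where to intervene (awareness campaigns, financial aid, device access, and so on):

```python
# A minimal sketch of tracking dropout reasons by region, so that
# interventions can be targeted. Records and reason labels are
# hypothetical examples, not real data.

from collections import Counter, defaultdict

def reasons_by_region(records):
    """Group recorded dropout reasons per region and count them."""
    grouped = defaultdict(Counter)
    for region, reason in records:
        grouped[region][reason] += 1
    return grouped

def top_reason(records, region):
    """The most common dropout reason in one region, or None."""
    counts = reasons_by_region(records).get(region)
    return counts.most_common(1)[0][0] if counts else None

records = [
    ("district-A", "financial"), ("district-A", "financial"),
    ("district-A", "stigma"),    ("district-B", "distance"),
]
print(top_reason(records, "district-A"))  # financial
```

A deployed system would draw these records from school information systems; the aggregation step, however, is exactly this simple.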
In addition to the regular artificial intelligence tools made available to all students, efforts can be made to use speech recognition and transcription tools to help special children. Presentation Translator also opens up possibilities for students who might not be able to attend school due to illness, or who require learning at a different level or on a particular subject that is not available in their school. (McNeill, 2018) Further, artificial intelligence can be used to detect gender bias and discrimination in the acceptance of children into schools. This approach would help reduce gender disparities and provide equitable opportunities for all. (Arora, 2020) The pattern-detection approach can be used to enhance inclusiveness.

Curriculum

The new curriculum proposed by the education policy primarily deals with providing a wide array of experiences to students at the school level to meet global standards of education. It involves the stimulation of critical and creative thinking through experiential learning. Artificial intelligence is being considered as a subject to be taught to students, with the inclusion of coding-related subjects at the middle school level. The curriculum also suggests Design Thinking, Holistic



Health, Organic Living, Environmental Education, Global Citizenship Education (GCED), etc. be incorporated into education. (Ministry of Human Resource Development, Government of India, 2020) While the National Education Policy deals with the inclusion of these subjects, due regard to incorporating the use of artificial intelligence at different levels has not been laid out. It needs to be taken into consideration that the applications of artificial intelligence in each field and industry must be introduced at early stages, to ensure that students understand the impact of this technology on each industry, given its potential to revolutionize our lives. Further, this technology can be used to train every student in their mother tongue, irrespective of where the student is geographically located, if an AI system is fed with the requisite datasets of different languages over time. It can also include cultural facts to ensure that every student is aware of the culture and heritage of the section of society they hail from, so that students have access to their foundations.

NEP 2020 and Higher Education in India

With a focus on holistic development, innovation and technology, the recently unveiled National Education Policy 2020 (NEP) is a much-awaited intervention that will invigorate India's higher education system (SG, 2020). Several changes have been proposed to make India a knowledge superpower by equipping its students with the necessary skills through improved school and higher-education programs.

Key changes proposed in higher education:
• By 2035, the Gross Enrolment Ratio in higher education is to be raised to 50%; 3.5 crore seats are also to be added in the higher education sector.
• The structure and length of degree programs shall be adjusted to promote holistic and multidisciplinary education. A flexible curriculum of 3 or 4 years is being introduced, with multiple exit options and appropriate certification within this period.
• The Higher Education Commission of India (HECI) will be set up as a single overarching umbrella body for all higher education, excluding medical and legal education.
• A National Research Foundation will be established to promote a strong research culture and build research capacity across universities.
• Bachelor's programs will be multidisciplinary, with no rigid separation between arts and sciences.
• The National Testing Agency will conduct a common college entrance exam for admission to various universities, twice a year, to be implemented from the session commencing in 2022.



• UG programs will be credit-based, and an Academic Bank of Credits will be available to facilitate the transfer of credits. Foreign universities will be permitted to operate in India.
• A robust system of graded accreditation will be established, specifying phased benchmarks for all HEIs to achieve set levels of quality, self-governance and autonomy (replacing the UGC and AICTE with the Higher Education Commission of India).
• At the master's level, HEIs would offer the option of a 1-year master's degree. A two-year program can also be offered, with the second year devoted entirely to research, for those who have completed a three-year undergraduate degree.
• Students who have completed their master's would be eligible to pursue a PhD.
• The new policy scraps the MPhil program.

Scope of artificial intelligence in the higher education sector

The policy provides for a dynamic and proactive introduction of research and teaching programs in fields of national importance, including the emerging field of Artificial Intelligence. It endorses AI as a discipline in itself, to be taught as a course in all universities.

Integration of AI and multidisciplinary courses

Universities will soon offer programs in core areas and multidisciplinary fields, entailing master's and PhD education within their curricula. The proposed multidisciplinary approach, i.e. "AI + X", combined with professional areas (healthcare, agriculture and law), is something that is needed. This methodology will be a novel conceptual arena in the field of AI studies, as it focuses on offering a distinctive and universal understanding of evolving technologies. The approach can combine a deep understanding of AI with computer science, economics, law and sociology. The reasoning lies in familiarizing students with the concept of simplifying AI and evaluating it through other disciplinary lenses. A structured possibility would include answering questions such as:
• What is the necessity of regulation in the application of existing legal concepts and protocols? [Law + AI]
• How will AI change the future of the whole job industry? [AI + Management + Sociology]
To take examples of this approach, Amsterdam's AI courses are linked to computer science, Nijmegen University offers AI courses in alignment with psychology, and a university in Utrecht offers an AI course with a philosophical component. The new degree structures open the



possibility of allowing students to switch disciplines, thereby opening entry points into different horizons ranging from mathematics, psychology, informatics and sociology, combined with the study of AI. If India were to adopt such measures, it would create a new avenue for students in understanding the dynamic and complex nature of AI. The creation of multidisciplinary academic groups will help students gain software skills whilst in the university environment.

Artificial Intelligence in the field of education

There are many other ways in which students can become familiar with AI in their daily student life:

Smart classrooms: Smart classrooms are technologically enhanced settings believed to increase the opportunities learners have to actively engage and participate in the teaching and learning exercise, via the use of technological tools and devices (Ikedinachi A. P. WOGU, 2019). AI can become extremely handy in the teaching-learning process (Xie, 2019). Technavio's report "AI Market in the US Education Sector 2018-2022" predicts a nearly 48 per cent growth rate for AI tools in the United States over the next three years. IBM's and Google's AI initiatives offer personalized learning strategies that schools can adopt to increase automation and reduce human bias and error (Bonderud, 2019).

Data storage: AI systems can enable computerized storage of data, ensuring the data stored is safe. Details relating to students, such as their personal information, report cards and examination results, will all become more accessible. (2020)

Tutoring: AI software can act as a personal tutor for students. These intelligent tutoring systems will be available at all times for clearing up concepts or giving extra exercises, providing an out-of-class learning system according to each student's requirements. Currently, teachers and institutions cannot provide individual attention to every student. When content is created and graded by AI, however, it can ensure personalized learning paths for children by identifying their weak points and providing recommendations accordingly.

Advanced learning systems: With AI, schools and universities can enable virtual reality programs for students and give them real-life experience without the expense of money and time. They can also provide virtual experiences by installing virtual boards.

Grading, tests, etc.: AI-based software will enable real-time assessment. It can keep track of the daily tasks assigned to a student and whether they are being completed on schedule, and can also provide tests and grading with a reduced chance of error. A student's performance can easily be tracked, and a measure of improvement or degradation can be assessed by the software.
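The task-tracking and improvement-measurement idea just described can be sketched in a few lines. The data model and the "compare recent scores to earlier scores" rule are illustrative assumptions, not a reference to any particular product:

```python
# A minimal sketch of automated task tracking and performance trend
# detection. The record layout and scoring rule are hypothetical.

from statistics import mean

def completion_rate(assigned, completed):
    """Share of assigned daily tasks that were completed on schedule."""
    return len(completed & assigned) / len(assigned) if assigned else 1.0

def trend(scores):
    """Compare the recent half of a student's test scores against the
    earlier half to flag improvement or degradation."""
    if len(scores) < 2:
        return "insufficient data"
    mid = len(scores) // 2
    earlier, recent = mean(scores[:mid]), mean(scores[mid:])
    if recent > earlier:
        return "improving"
    if recent < earlier:
        return "declining"
    return "steady"

tasks = {"maths-ws-1", "essay-1", "quiz-3"}
done = {"maths-ws-1", "quiz-3"}
print(round(completion_rate(tasks, done), 2))   # 0.67
print(trend([55, 60, 62, 71]))                  # improving
```

Real systems would weight recency and task difficulty, but the core signal a teacher receives (completion rate plus direction of change) is what these two functions compute.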



Global classroom: With an AI system, it will be possible to create a universal classroom where students from all over the world can interact and participate in activities. This will create exposure and teach students about different cultures and learning environments in different regions and countries.

AI-driven programs can give students and educators helpful feedback: AI can not only help teachers and students craft courses customized to their needs, but also provide feedback to both about the success of the course as a whole. For example, China proposed its national AI strategy for education as part of its technological vision. Huijang (Shenggao, 2018), a private company working on digital education, developed image and voice recognition software capable of understanding student facial expressions to give AI feedback online. Liulishuo, also known as LAIX, is an education company that teaches English to 600,000 students at the cost of a single teacher, using AI to create virtual educators (Khan, 2019). Master Learner developed a "Super teacher" capable of answering 500 million real-time questions from students preparing for the Gaokao university entrance examination (Jing, 2017).

Artificial intelligence's role in regulating higher education

Understanding the use of AI in the education sector is a different task altogether. Digital machinery such as AI, the Internet of Things and many other advances in information and computer technology provide opportunities to improve the education process as a whole. AI has the potential to bring substantial change to colleges and universities in manifold ways:
1. AI gives higher institutions the ability to better anticipate matriculation trends, optimize recruitment efforts and raise academic performance. This can be used to make college admission processes faster and much more customized.
2. AI programs can be used for course registration and for collecting student information and fees. Furthermore, they can help students with the application process so that they are ready when they arrive on campus.
3. AI can help identify early warning signs for students who will struggle academically. A Stanford University report describes projects developing models to predict the risk of failure and provide real-time feedback on learning outcomes. Rose Luckin has given examples of how AI can improve student experiences, citing the case of the University of Derby, which introduced a monitoring system that predicts which students might be at risk of discontinuing their education; the software allows the university to intervene before this takes place. India could adopt such engagement analytics software and thereby reduce dropout rates.



4. AI can help the higher education management field by enabling collaboration in student services.
5. Big data and machine learning can be used for the analysis and evaluation of students' daily performance and examination results. Natural language processing can be used to automatically generate labelled learning materials; deep learning can be applied to analyze student profiles and learning materials, automatically selecting appropriate content for each student from a vast pool of learning resources.
6. AI technology can help provide better remote linkage between high-quality teachers and students, breaking time and space barriers and facilitating educational resource sharing among different branches of a school. Personalized education here means learning at a self-directed pace.
7. Chatbots powered by AI can help students and teachers analyze individual learning. Classroom dynamics can include various sensors and cameras to monitor engagement and provide real-time or post hoc feedback and suggestions. This method will be essential in discovering students' behaviours and supplying teachers with data for better learning. A chatbot can also increase student engagement. One example is Botsify, an educational chatbot that presents learning subjects to students in the form of images, text and videos (Wolhuter, 2019). Interestingly, the lessons are in a conversational format, providing personalized learning and increased interaction. The best way to integrate a chatbot is to blend machine learning with natural language processing to maximize outcomes. In 2017, WSSU became the first to implement an AI-driven chatbot to help students get ready to arrive on campus; the university analyzed the interactions between students and the chatbot to understand the areas where students were struggling and how help could be provided.
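A rule-based sketch conveys the shape of an onboarding chatbot like the WSSU example, including the logging of unanswered questions that made the analysis of student struggles possible. Real deployments layer NLP models on top; the keyword matching and FAQ entries here are illustrative assumptions only:

```python
# A minimal rule-based onboarding chatbot sketch. FAQ content and
# keywords are hypothetical; the unanswered-question log mirrors how
# institutions learn where students struggle.

FAQ = {
    ("fee", "fees", "tuition"): "Tuition can be paid online via the student portal.",
    ("hostel", "accommodation"): "Hostel allocation opens two weeks before term.",
    ("registration", "register", "enrol"): "Course registration closes on the first Friday of term.",
}

UNANSWERED = []  # reviewed later by staff, as in the WSSU analysis

def answer(question):
    words = set(question.lower().replace("?", "").split())
    for keywords, reply in FAQ.items():
        if words & set(keywords):   # any keyword present in the question
            return reply
    UNANSWERED.append(question)
    return "Sorry, I don't know yet; a staff member will follow up."

print(answer("How do I pay my fees?"))
print(answer("Where is the library?"))
```

The design choice worth noting is the fallback path: every unmatched question is both answered gracefully and recorded, turning the chatbot into a data-collection instrument as well as a help desk.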

Major challenges in implementation and critical analysis of the New Education Policy in India

The National Education Policy brings in changes that have long been awaited in the Indian education system to bring it on par with international education systems. However, there are major challenges, both quantitative and qualitative, that the government could face while implementing the NEP; we identify and list some of them here. Among the quantitative challenges: the policy states that the government wants to double the Gross Enrolment Ratio in higher



education by 2035, which would mean opening one new university every week, an impossible task. The situation is similar with schools: the policy proposes bringing over 2 crore children who are currently not enrolled into the school system, which would mean opening fifty schools per week for the next 15 years, which in the current situation is not possible. Even if the government could do so and open multiple schools in each district by increasing the education budget (to which we return later), the problem is that these schools will be of no use without headmasters and teachers. Assuming the government can set up 50 schools per week, it will have to hire 50 headmasters every week, plus teachers to staff these schools. Let alone hiring new teachers, even the current teachers in government schools have started to quit due to non-payment of salaries, lack of teaching facilities, etc., which will make hiring new teachers every week even harder. The Kothari Commission advised in 1968 that 6% of GDP be spent on education, a recommendation reaffirmed in 1986 and again in 1992 when the policy was reviewed; yet actual expenditure on education has decreased from 4.14% of GDP in 2014-15 to 3.2% in 2020-21, which clearly shows a lack of will for public investment in education and is a cause for concern. With the COVID-19 situation it is expected to drop further. The government should rather focus on a more realistic number, improving existing schools and developing better schools gradually, than chase the big number. The COVID-19 pandemic will also delay the execution of the policy by a substantial amount of time.
Another problem in implementing the NEP is that it brings a huge cultural and organizational shift to the current education system, while the teachers currently in schools are trained in the traditional system; the NEP has very little to say about training existing teachers, and even if the policy is a 20-year plan, the transition in between should not hurt students' education. A major cause for concern is that the NEP has bypassed the entire process of parliamentary discussion and oversight, having been introduced while Parliament was not functioning; this prevented Parliament from critically examining it and suggesting amendments, an important part of the process considering that the policy applies to the whole of India. Another major problem is that the NEP is silent on the RTE Act. The NEP talks about the universalization of school education for ages 3-18, but not linking the policy to the RTE Act means it will not be binding on the states and union territories; there is no point in universalization when education is not made a right. The NEP also moves towards centralization of the education system, which will probably be a major issue when implementing it at the state and district levels. Implementation of the NEP would also mean first solving the legal issues



surrounding it: the draft Higher Education Commission of India Bill has been languishing in the Ministry for over a year, and the Central and State Universities Acts will have to be amended, which will further delay implementation. That the New Education Policy encourages foreign universities to open up in India is great, but a check will be needed to ensure that education is not commoditized here, because then inviting foreign universities would do more harm than good; the government should also focus on improving government universities. The biggest problem the government will face while implementing the NEP is the digital divide. While the government has laid great stress on the use of technology in all aspects of education, from planning and teaching to assessment, it has failed to realise that a huge digital divide exists in India. A majority of learners will be excluded if extreme emphasis is placed on technology: only 24% of Indian households have an internet facility, 11% of households have a functional computer, and a little over 15% of rural households have access to the internet. This makes it a herculean task for the government to implement a policy that emphasizes teaching children coding, etc. Government departments and ministries work in a silo-like environment, which means there is very little cooperation amongst them. The NEP focuses on vocational training from the 6th grade onwards, which means the Skill Development, Labour and Education ministries will have to cooperate; however, the policy does not set out any plan for such coordination. These are some of the qualitative and quantitative challenges that the government might face while implementing the NEP.

Final Recommendations Submitted

1. NEP 2020 advances dynamic instructional methods, the development of core capacities and fundamental life skills, including 21st-century aptitudes, experiential learning at all stages, low-stakes board exams, a comprehensive progress card, a shift in assessment to promote critical and higher-order thinking among students, mainstreaming of vocational training, and reforms in teacher education.
2. As the draft acknowledges a digital divide, and only 24% of Indian households have an internet facility, the government should frame effective policies to make these resources available in every corner of the country and bring about a digital revolution in education.
3. Since access to the internet eludes a large share of the population, the government should consider bringing in private players bound by strict TRAI norms, so that no one gains a monopoly, and data tariffs should be designed to be affordable for all sections of society.



4. Establishing digital infrastructure requires huge investment. The government should either allocate a part of its annual budget to building this infrastructure or invite private players and control their actions through stringent norms.
5. The Kothari Commission recommended allocating at least 6% of GDP to education, yet reports put the figure at only 3.2% for 2020-21. It is high time the government looked into the matter, or, given the unemployment rate, the country risks mobocracy over democracy.
6. The draft discusses the introduction of AI in the field of education. While this will support the economy and lead towards a more dynamic phase of society, the government ought to consider how much knowledge ordinary individuals have for using such advanced technologies; in India's case, all 135 crore people must be taken into account.
7. With AI's assistance the government may succeed in offering diverse vocational courses in different regional languages, but it must give the same weightage and incentive to candidates assessed in a regional language as to those assessed in English when jobs are assigned.
8. Many instructors are unfamiliar with these cutting-edge technologies and prefer conventional methods of teaching. The government should frame guidelines and conduct workshops so that such educators become comfortable with technology and are better prepared to maintain overall competency.
9. While AI can individually assess students, maintain a track of records and predict the success or failure of a chosen course, its feasibility in terms of personal growth is still a grey area and needs to be pondered upon.
10. The policy's silence on the RTE Act (the right to free and compulsory education, Art. 21-A) should be a point of focus: if the NEP is not covered within the ambit of Art. 21-A, the whole policy could prove ineffective.
11. The policy proposes teaching coding from the primary level despite the shortage of resources, especially the availability of computers, in both private and government schools. The government should address this issue and come up with practical solutions to train the young for a prosperous future.
12. India recorded a dropout rate of 17% in 2016. Implementing AI promises to improve early-warning systems, which are usually based on the longitudinal datasets that keep emerging in the education sector. AI solutions will help school principals use existing data in new ways and design relevant interventions to predict and prevent dropout more efficiently; studies report that early-warning systems that leverage machine learning are increasingly being used to boost high-school graduation rates.
13. The Central Board of Secondary Education (CBSE) recently presented two teachers' handbooks to help assimilate AI software in schools, with a target of 22,000 schools becoming familiarized with AI and the technologies of the future. The first facilitator's handbook introduces the concept of AI to students of standards 8 and 9 through their syllabi; the curriculum is set to take around 112 hours and is based on investigational methods layering social as well as technological skills. The second facilitator's handbook is about integrating AI across the majority of subjects to enhance the multidisciplinary approach in the teaching-learning process (BW Online Bureau, 2019); it discusses how institutions can use AI to train teachers from class 6 to class 10 in coherence with the latest curriculum. With the advent of the new NEP, one can hope such an approach will finally be introduced in the educational sector.
14. Recruiting teams in Indian colleges can focus their efforts on creating algorithms that predict acceptance and enrolment rates. AI's main benefit here is time saved: with AI performing time-sensitive tasks and making problem-solving more efficient, administrative staff can shift their efforts to improving student experiences.
15. Including a natural-language-processing bot in the education system could reduce false positives in identifying learning needs and their varied means, giving a more accurate interpretation. Identifying failures and conflicts through statistical modelling would also enable comprehensive communication for all users; the chatbot would be responsible for identifying development gaps and would further provide training data for future learning.
16. Schools and colleges can adopt intelligent speech-recognition systems that convert the teacher's speech into subtitles on a large screen. This would help students who come from a vernacular medium and need to cope with English.
17. Personalized tutors gather data points at each juncture of a child's learning journey. Machine-learning models could be used to foresee children at risk of dropping out, and appropriate redressal systems can be set up. Together these measures would raise the higher-education enrolment ratio and help a substantial proportion of adults in India achieve literacy, in line with targets under the NEP.
18. Adopting virtual-reality learning makes learning engaging, as students learn from the ease of their classrooms through a virtual model without spending money or time on travel; the theoretical aspect is delivered in a virtual model. A Moscow-based company has tested creative ways to use online education to overcome existing cultural and linguistic barriers. Another example is a multinational idea hack hosted by the Centre for New Technology and Entrepreneurship, a virtual meetup centred on a gamification course that witnessed an attendance of almost 200 higher-grade students from Moscow and three other continents.
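Recommendations 12 and 17 above both describe machine-learning early-warning systems that score students for dropout risk. The following is a minimal sketch of that idea, assuming a tiny hypothetical dataset: a pure-Python logistic-regression model over two illustrative features (attendance rate and normalized grade average). The features, data and 0.5 threshold are assumptions for demonstration only, not part of the NEP or of any system cited above.

```python
import math

def train_logistic(rows, labels, lr=0.1, epochs=2000):
    """Fit a tiny logistic-regression risk model with plain SGD."""
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))  # predicted dropout probability
            err = p - y                     # gradient of log-loss w.r.t. z
            b -= lr * err
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w, b

def dropout_risk(w, b, x):
    """Predicted probability that a student with features x drops out."""
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical records: [attendance rate, normalized grade average];
# label 1 = student eventually dropped out.
students = [[0.95, 0.90], [0.90, 0.80], [0.85, 0.95],
            [0.50, 0.40], [0.40, 0.30], [0.55, 0.35]]
dropped = [0, 0, 0, 1, 1, 1]

w, b = train_logistic(students, dropped)
# Flag students whose predicted risk crosses an (assumed) 0.5 threshold.
at_risk = [i for i, x in enumerate(students) if dropout_risk(w, b, x) > 0.5]
```

In practice a school system would train on the longitudinal datasets the text mentions and route flagged students to counselling, but the structure, features in, risk score out, intervention triggered by a threshold, is the same.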

Final Comments

It is rightly said that "Ideas are useless unless used. The proof of their value is in their implementation. Until then, they are in limbo." The same goes for the NEP 2020. Though the policy sets out reformative steps taken by the government to create a 'digital arena', introducing a smart education policy in cognizance of the best AI software from around the world, it has drawbacks that can obstruct the overall execution of the policy within a brief period. The introduction of AI may be the best step taken by the government, but the lack of awareness and digital exposure among India's 135 crore people is a major concern, since not all sections of society are exposed to current technological advancements and the majority might not fit into the race. Apart from this, the NEP is not binding on the states and union territories because it is not brought under the ambit of the RTE Act, due to which a major section of society will be deprived of the benefits the NEP could offer. Also, instead of purchasing expensive AI software from foreign technology giants, the government should consider developing home-grown software under the 'Atmanirbhar Bharat' mission. Finally, the small share of GDP devoted to education is a major issue, and the government should plan proper ways to increase the education budget, for 'youths are the pillars of the nation, and investment in the advancement of youths means advancement of the nation.'

References
1. Arora, Abhijay. 2020. Indian Education Sector is Ripe for Disruption by Artificial Intelligence. NITI Aayog. [Online] January 14, 2020. [Cited: August 23, 2020.] https://niti.gov.in/indianeducation-sector-ripe-disruption-artificial-intelligence.
2. Bonderud, Doug. 2019. Artificial Intelligence, authentic impact: How educational AI is making the grade. EdTech Magazine. [Online] August 27, 2019. https://edtechmagazine.com/k12/article/2019/08/artificial-intelligence-authenticimpacthow-educational-ai-making-grade-perfcon.
3. BW Online Bureau. 2019. CBSE releases facilitators Handbook for Artificial Intelligence. BW Education. [Online] October 9, 2019. http://bweducation.businessworld.in/article/CBSE-Releases-FacilitatorsHandbook-ForArtificial-Intelligence-AI-/09-10-2019-177293/.
4. Carnegie Learning. [Online] [Cited: August 23, 2020.] https://www.carnegielearning.com/.
5. Century. [Online] [Cited: August 23, 2020.] https://www.century.tech/.
6. Cognii. [Online] [Cited: August 23, 2020.] https://www.cognii.com/.
7. Department of School Education and Policy. Ministry of Education, Government of India. [Online] [Cited: August 23, 2020.] http://samagra.mhrd.gov.in/about.html.
8. Ministry of Human Resource Development. 2020. National Education Policy. s.l.: Government of India, 2020.
9. 2020. Future of Artificial Intelligence-Enabled Classrooms. Myeducomm. [Online] February 3, 2020. https://www.myeducomm.com/blog/future-of-artificialintelligence-enabledclassrooms/.
10. Wogu, Ikedinachi A. P., Sanjay Misra, Patrick A. Assibong, Esther Fadeke Olu-Owolabi, Rytis Maskeliūnas and Robertas Damasevicius. 2019. Artificial Intelligence, Smart Classrooms and Online Education in the 21st Century: Implications for Human Development. ResearchGate. [Online] July 2019. https://www.researchgate.net/publication/333043563_Artificial_Intelligence_Smart_Classrooms_and_Online_Education_in_the_21st_Century_Implications_for_Human_Development.
11. Jing, Meng. 2017. China wants to bring Artificial Intelligence to its classrooms to boost its education system. South China Morning Post. [Online] October 14, 2017. https://www.scmp.com/tech/science-research/article/2115271/china-wants-bring-artificialintelligence-its-classrooms-boost.
12. Khan, Qasim. 2019. Is Artificial Intelligence advanced enough to replace teachers? Equal Ocean. [Online] May 9, 2019. https://equalocean.com/analysis/2019050911044.
13. Kidaptive. [Online] [Cited: August 23, 2020.] https://www.kidaptive.com/.
14. Kidsense. [Online] [Cited: August 23, 2020.] https://kidsense.ai/.
15. Lieberman, Mark. 2020. Education Week. [Online] May 19, 2020. [Cited: August 23, 2020.] https://www.edweek.org/ew/articles/2020/05/20/how-educators-canuse-artificialintelligence-as.html.
16. McNeill, Sam. 2018. Artificial Intelligence in the Classroom. Microsoft. [Online] March 1, 2018. [Cited: August 23, 2020.] https://educationblog.microsoft.com/en-us/2018/03/artificialintelligence-in-the-classroom/.
17. Nuance. Dragon Speech Recognition for education. Nuance. [Online] [Cited: August 23, 2020.] https://www.nuance.com/dragon/industry/education-solutions.html.
18. SG, Ramanand. 2020. NEP 2020, a big push to make India a knowledge superpower. Business Line On Campus. [Online] August 2020. https://bloncampus.thehindubusinessline.com/b-learn/nep-2020-a-big-push-to-make-indiaa-knowledge-superpower/article32296354.ece.
19. Shenggao, Yuan. 2018. Portraits of success: Hujiang a pioneer in online education. China Daily. [Online] November 10, 2018. http://www.chinadaily.com.cn/cndy/2018-11/10/content_37236230.htm.
20. Wolhuter, Samantha. 2019. AI in education: How chatbots are transforming learning. We are Brain. [Online] July 26, 2019. https://www.wearebrain.com/blog/ai-data-science/top-5-chatbots-in-education/.
21. Xie, Echo. 2019. Artificial Intelligence is watching China's students, but how well can it really see? South China Morning Post. [Online] September 16, 2019. https://www.scmp.com/news/china/politics/article/3027349/artificial-intelligence-watchingchinas-students-how-well-can.



4

Recommendations Report on the Draft Paper for Responsible AI by NITI Aayog dated 21.07.2020

Ankita Malik (1), Nav Dhawan (2)

1 Research Intern, Indian Society of Artificial Intelligence and Law; Student, School of Law, Christ University, Bangalore, India
2 Research Intern, Indian Society of Artificial Intelligence and Law; Student, Jindal Global University, Sonepat, India

executive@isail.in

Synopsis. This recommendations report, drafted by Research Members of the Indian Society of Artificial Intelligence and Law, was submitted to the NITI Aayog, Government of India, in response to the body's call for recommendations on the Draft Paper on Responsible AI published on July 21, 2020. The report includes an Executive Summary by Mr Abhivardhan, Chairperson & Managing Trustee, and was published on August 10, 2020.

Executive Summary

A Responsible AI would have a central understanding of procedure and modalities when it comes to a nation-state. In matters related to the juristic persona of AI, it is highly recommended that nation-states, especially developing states, retain concrete and renewable decision-making leverage, so that global participation, preparatory discussions and confidence-building measures in global approaches towards AI are embedded in plurilateral values, in order to reform the digital consequences of AI-driven limited globalization. The paper considers what a Responsible AI can be and how India can adopt it in a global capitalist scenario. I must congratulate Ankita Malik and Nav Dhawan for their stupendous efforts to contribute to and support this initiative and to provide recommendations with a reasonable degree of research.

Abhivardhan
Chairperson & Managing Trustee
Indian Society of Artificial Intelligence and Law



Introduction

The Draft for Discussion on Responsible AI for All by NITI Aayog was recently placed in the public domain, inviting comments and recommendations. This stems from the government's shift in emphasis towards developing resources to exploit the potential of Artificial Intelligence. The document lays down the economic and sectoral potential of AI, with a predicted boost of 1.3% to the growth rate by 2035. Given this potential, the government is aiming at accelerated adoption of AI across domains, focusing on AI deployment in the mechanisms of government while maintaining a coordinated approach with the private sector, startups and academia. The national strategy is aimed at 'Total Innovation'1, an array of policy innovation, technological innovation, organizational innovation and so on. While the benefits of integrating AI into the system are substantial, its perils cannot be ignored. The problems India currently faces have many parallels on the international front, as many nation-states are grappling with similar issues, ranging from the impact on labour markets, financial systems and inequality to human rights, privacy, dignity and bias.2 The working document analyzes the challenges at hand by categorizing them on the basis of their impact, direct and indirect, into System Considerations and Societal Considerations. The first part of this article analyzes the former; the second part analyzes the latter, along with the various recommendations contained therein.

System Considerations

The AI Black Box Problem, Exclusionary Risk and Machine Bias

One of the foremost requirements for building trust in artificial intelligence systems is that users, endpoints and the other parties involved understand how the system comes to a certain conclusion. If the methodology behind the decision-making process is known, the decision ultimately carries more reliability and credibility. The problem arises, however, when there is a lack of information about how a particular AI reached its conclusion.

1 NITI Aayog, 'Working Document: Towards Responsible AI for All', <https://niti.gov.in/sites/default/files/NITIAyog_Presentation.pdf> accessed 7 August 2020.
2 Scientific Foresight Unit (STOA), European Parliamentary Research Service, 'The Ethics of Artificial Intelligence: Issues and Initiatives', <https://www.europarl.europa.eu/RegData/etudes/STUD/2020/634452/EPRS_STU(2020)634452_EN.pdf> accessed 7 August 2020.


Artificial Intelligence and Policy in India

51

This can be categorized within the scope of the "Black Box" problem: a deep neural network, operating with the help of artificial neurons, processes data of such complexity that it becomes close to impossible to ascertain how a decision was reached.3 Another reason is the issue of dimensionality: the support vectors used by the AI cannot be visualised by humans, which can likewise lead to the black box problem.4 Therefore, as rightly pointed out in the working document, there may be instances where spurious correlations are at work which do not create errors on the dataset yet prove to be a problem during deployment, especially since the correlation cannot be discerned, thereby creating a policy problem. This rests on the recognised principle that transparency is needed for credible functioning and deployment. Where the decision-making process is completely opaque the problem is graver, since a weaker black box problem can still be reverse engineered.5 Various legal principles and doctrines would fail if this is not regulated: under the intent doctrine, for instance, the intent of a party plays a key role in assigning liability and in understanding the decision-making process, yet in cases of a stronger black box problem the programmer may not be able to predict or recognise how and why a decision was made by the AI. This can in turn produce incorrect decisions that cannot be traced, leading to the exclusion of certain members of society from benefits to which the State entitles them.6 An example is an AI system whose primary aim was to predict recidivism in order to aid decision-making on the grant of bail, but whose results proved to a large extent discriminatory or biased on a racial basis.7

This also stems from the problem of Machine Bias; most issues within the system considerations are undeniably associated and can have overlapping impacts. Machines are not inherently biased: the bias results from the datasets used, and an inquiry into those datasets explains the bias a machine has incorporated over a period. IBM has been developing methodologies to help reduce machine bias even when the training datasets are not available, using a three-level rating system that rates relative fairness on the parameters of presence of bias, inheritance of bias and the capability to introduce bias regardless of the data.8

In order to solve a problem, the problem must first be understood. The legal questions around AI and the black box problem cannot be solved entirely by technological solutions. The first step, therefore, should be for policymakers to understand the various cross-disciplinary uses of opaque systems. One example is the course 'The Ethics & Governance of Artificial Intelligence',9 which analyses the nuances of algorithmic decision-making and autonomous systems and seeks a balance between regulation and innovation. Apart from education in this domain, another solution is to work towards systems whose technology is tweaked to produce largely explainable results. This draws from the Explainable AI program run by the Defense Advanced Research Projects Agency, which seeks to create prediction accuracy while explaining the rationale behind decisions and simultaneously aims to develop a human interface into the process.10 On transparency and its achievability, a source to draw from is the General Data Protection Regulation (GDPR).11 Article 13 of the GDPR provides a system of consent along with checks and balances by maintaining accountability upon the controller, and Articles 13 and 14 in combination provide a plausible solution to the reliability problems that the black box creates at deployment.12

3 Yavar Bathaee, 'The Artificial Intelligence Black Box and the Failure of Intent and Causation' (2018) 31 Harvard Journal of Law & Technology, <https://jolt.law.harvard.edu/assets/articlePDFs/v31/The-Artificial-Intelligence-Black-Box-and-the-Failure-of-Intent-and-Causation-Yavar-Bathaee.pdf> accessed 8 August 2020.
4 Ibid [934].
5 Ibid [934].
6 NITI Aayog, Working Document (n 1) 11.
7 Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner, ProPublica, 'Machine Bias', <https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing> accessed 8 August 2020.
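The dataset inquiry described above, checking whether a model's favourable outcomes differ across groups, can be sketched as a simple selection-rate audit. This is a minimal illustration over hypothetical audit records; it is not IBM's rating methodology, only a common disparate-impact check (the ratio of the lowest to the highest group selection rate).

```python
from collections import defaultdict

def selection_rates(records):
    """Share of favourable outcomes per group in an audit log."""
    totals = defaultdict(int)
    favourable = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        favourable[group] += outcome
    return {g: favourable[g] / totals[g] for g in totals}

def disparate_impact(records):
    """Ratio of the lowest to the highest group selection rate.
    A ratio well below 1.0 signals unequal treatment of groups."""
    rates = selection_rates(records)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit records: (group label, outcome; 1 = favourable,
# e.g. bail granted in the recidivism example above).
audit = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
         ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

rates = selection_rates(audit)   # A: 3/4, B: 1/4
ratio = disparate_impact(audit)  # 0.25 / 0.75, i.e. about 0.33
```

A ratio far below 1.0 (the "four-fifths rule" used in US employment-discrimination practice takes 0.8 as a rough benchmark) would flag exactly the kind of group-level disparity the recidivism example illustrates, even when the model itself remains a black box.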

The Accountability Dilemma

The working document recognises that assigning accountability for harm arising from a specific action of an Artificial Intelligence is a challenge. One argument in such a situation concerns how liability principles apply when assigning accountability. The very nature of AI makes it difficult to simplify its internal functioning, and the most certain trajectory ahead is the development of ever more complex functions to solve more complex problems. Restrictions on the manner in which an AI may function would therefore inadvertently act as a brake on the wheels of progress. The policy thus needs to accept that if the black box problem cannot be completely eliminated, pre-existing rules may be customised instead. One such possibility is the principle of vicarious liability. The principle cannot be applied as a uniform blanket wherever credibility has been compromised, but it can be applied on a pro rata basis: where the AI was designed to achieve a specific task, or the probability of externalizing failure onto others is higher, a stricter application of the liability principle would follow. Where the probability is lower, the test of foreseeability could help determine whether the consequence that occurred was apparent or reasonably foreseeable. This approach retains a certain level of ambiguity so as to allow case-by-case application, which is especially warranted where criminal sanctions are involved: the severity of criminal liability could have a debilitating effect on the advancement of more powerful Artificial Intelligence if system engineers were subject to criminal sanctions without regard to the circumstances surrounding the decision or act of the AI.

8 IBM THINKPolicy Blog, 'Bias in AI: How we Build Fair AI Systems and Less-Biased Humans', <https://www.ibm.com/blogs/policy/bias-in-ai/> accessed 7 August 2020.
9 Harvard Law School, 'The Ethics & Governance of Artificial Intelligence', <https://hls.harvard.edu/academics/curriculum/catalog/default.aspx?o=71157> accessed 7 August 2020.
10 Dr. Matt Turek, Defense Advanced Research Projects Agency, 'Explainable Artificial Intelligence (XAI)', <https://www.darpa.mil/program/explainable-artificial-intelligence> accessed 8 August 2020.
11 General Data Protection Regulation, (EU) 2016/679.
12 Ibid, Arts. 13 & 14.

India's Readiness for AI Incorporation

India's overall capacity and readiness play an important role in determining where policy would require the most work. India ranks 17th in the Government AI Readiness Index compiled by Oxford Insights and the International Development Research Centre.13 As the working document states, relevant laws exist, such as the Consumer Protection Act14 or, for a more specific example, SEBI's circular on AI/ML applications15; however, the intent of most of these legislations was focused on solving problems of security and privacy. Even though the problems remain similar, their nature has shifted with AI: the privacy issues that existed earlier differ from the privacy issues AI raises, so the law will face difficulty if pre-existing rules are applied without customization. Regulations should therefore be developed with an 'adaptive and anticipatory' approach.16

13 Oxford Insights and the International Development Research Centre, 'Government Artificial Intelligence Readiness Index 2019', <https://www.oxfordinsights.com/ai-readiness2019> accessed 9 August 2020.
14 Consumer Protection Act, 2019 (India).
15 Securities & Exchange Board of India, Reporting for Artificial Intelligence (AI) and Machine Learning (ML) applications and systems offered and used by market intermediaries, SEBI/HO/MIRSD/DOS2/CIR/P/2019/10.
16 UNESCAP, 'Artificial Intelligence in Asia and the Pacific', <https://www.unescap.org/sites/default/files/ESCAP_Artificial_Intelligence.pdf> accessed 9 August 2020.



Societal Considerations

Artificial Intelligence & the Impact on Jobs

In 2016, the World Economic Forum dubbed AI the "fourth industrial revolution", one that has drastically transformed the way we live, interact and work.17 A majority of tech giants and digital natives deploy AI to augment the human ability to perform tasks more precisely and efficiently, with decision control held exclusively by humans or shared with machines. The rise of AI is expected to boost economic growth; however, a multitude of current business models will be disrupted and millions of existing jobs will be automated. The advancement of AI will affect the job prospects of India's large and young population: there are 17 million new entrants into the workforce every year, against 5.5 million jobs created.18 The employment scenario in India is thus worrisome, and rapid AI advancement is expected to add further hurdles. A TeamLease Services study estimates that 52-69% of repetitive and predictable roles in sectors such as IT, manufacturing, transportation, financial services and packaging are at risk of being automated in the next few years. A 2013 study by Carl Frey and Michael Osborne conveys that "middle skill" jobs requiring manual and routine cognitive application would be completely automated in the coming years;19 on this basis, India will have 69 percent of its jobs in formal employment at risk of automation by 2030. AI is predicted to have a severe impact on jobs in the short term; in the coming years the impact will start to even out as roles are substituted and new jobs created. A McKinsey Global Institute study reckons that, subject to various adoption scenarios, automation will displace between 400 and 800 million jobs worldwide by 2030, requiring as many as 375 million people to switch job categories entirely.20 A report by EY states that by 2022, 9% of India's estimated 600 million workforce would be employed in jobs that do not exist today, whereas 37% would be deployed in jobs with radically altered skill requirements.21 It is expected that certain types of jobs will shrink as they are automated while production efficiency correspondingly increases, creating demand for other, related types of jobs. According to an Accenture report, AI has the potential to add US $957 billion to India's economy by 2035.22

How can a developing economy like India, with its huge population, adapt to such a change and reap the benefits? The Indian government should proactively collect data on the employment scenario to better prepare for AI. Estimates of employment variables could be prepared through household surveys, surveys of businesses and enterprises, administrative sources and data from government schemes.23 The government could also conduct periodic economy-wide skill-gap analyses to prepare the labour market for a future dominated by AI, bots and the like,24 allowing data-driven policies that track new developments in the country's job scenario. With over 50% of the Indian population under 25, an appropriate step would be to expose the young workforce to AI interfaces and machine learning. Online training should be encouraged and the education curriculum should evolve to prepare students for the impact of AI. The current education system appears to produce students of low employability;25 a revamp of the curriculum, teacher training and improvements in infrastructure are necessary in these times, and the new National Education Policy (NEP 2020) is a step in the right direction. India's massive consumer market can be leveraged to access the latest technology by insisting on technology transfer during FDI deals.

17 Davos Klosters, 'World Economic Forum Annual Meeting 2016: Mastering the Fourth Industrial Revolution' (WE Forum, 2016) <http://www3.weforum.org/docs/WEF_AM16_Report.pdf> accessed 9 August 2020.
18 'Future Jobs in India' (FICCI, 2017) <http://ficci.in/spdocument/22951/FICCI-NASSCOM-EYReport_Future-of-Jobs.pdf>.
19 Carl Frey and Michael Osborne, 'The Future of Employment: How Susceptible are Jobs to Computerization?' (Oxford Martin, 2013) <https://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf> accessed 9 August 2020.
20 'What the future of work will mean for jobs, skills and wages?' (McKinsey, 2017) <https://www.mckinsey.com/featured-insights/future-of-work/jobs-lost-jobs-gained-what-thefuture-of-work-will-mean-for-jobs-skills-and-wages> accessed 9 August 2020.
Global firms should be encouraged to transfer advanced technology, create joint ventures and assist Indian firms, which would allow Indian firms to compete in areas such as artificial intelligence and robotics.26 The government could also boost employment in areas that are less vulnerable to automation. Education and healthcare are sectors that involve high human engagement and cannot be automated easily. Jobs in sectors like arts and entertainment are interpersonal and creative, and not under immediate threat of automation. Labour-intensive industries such as tourism could be pushed aggressively: India ranked 34th in the Travel and Tourism Competitiveness Report 2019,27 with much room for improvement. These sectors

21 n 20.
22 Rekha Menon, Madhu Vazirani and Pradeep Roy, ‘Accelerating India’s Economic Growth with Artificial Intelligence’ (Accenture, 2017) <https://www.accenture.com/_acnmedia/PDF68/Accenture-ReWire-For-Growth-POV-19-12-Final.pdf#zoom=50?> accessed 9 August 2020.
23 Ila Patnaik, ‘Narendra Modi and jobs: It’s all about data, and how it’s calculated’ (The Print, 2019) <https://theprint.in/ilanomics/narendra-modi-and-jobs-its-all-about-data-and-how-itscalculated/205205/> accessed 9 August 2020.
24 n 20.
25 ibid.
26 ibid.
27 Lauren Calderwood and Maksim Soshkin, ‘The Travel & Tourism Competitiveness Report 2019’ (WE Forum, 2019) <http://www3.weforum.org/docs/WEF_TTCR_2019.pdf> accessed 9 August 2020.



Artificial Intelligence and Policy in India, Volume 2

could be boosted to increase their capacity to create jobs. The government should set up career counselling centres to make unaware youth aware of the changing dynamics and new possibilities in the job market. Boosting start-ups can be very beneficial in this changing job scenario, as small enterprises, rather than big factories, appear to be the area that can be tapped to provide more jobs in the future. India has a large unorganised and informal sector; start-ups can provide business models that address the inefficiencies in various sectors and end up generating vast job opportunities. Higher procurement of goods and services from domestic start-ups by the Government could boost entrepreneurship. Moreover, entrepreneurship courses provided by universities28 could also lift the start-up culture in India. The Government’s Start-up scheme29 is a step in the right direction, and its implementation should be aggressive enough to reach even the least advantaged groups. Centers of Excellence (CoEs) could be established that use technology tools to develop standards and provide counselling for the youth on emerging jobs of the future. Such centers must be established in every government university.30

Psychological Profiling & Malicious Use

Psychology has never been considered an exact science: its results admit no claim to total precision, and numerous conflicting approaches coexist. Advances in AI are being made to provide psychological profiling of humans that is as acute as possible; however, the results have not always been fruitful. Machine learning algorithms are susceptible to bias. Northpointe’s software is used in courts across the United States to determine how likely a person is to commit a crime in the future. According to a report by ProPublica, black defendants were almost twice as likely as white defendants to be labelled higher risk yet not actually re-offend, while white defendants were twice as likely as black defendants to be labelled lower risk yet go on to commit further crimes.31 The primary source of this bias is the training data: an AI’s prediction is only as good as the data it is fed. Bias can lead AI to make decisions which enforce systemic

28 ‘Indian Universities, Colleges to Soon Start Entrepreneurship Courses to Foster Startup Culture in Students’ (Inventiva, 2018) <https://www.inventiva.co.in/stories/inventiva/indian-universitiescolleges-to-soon-start-entrepreneurship-courses-to-foster-startup-culture-in-students/> accessed 9 August 2020.
29 (Start Up India) <https://www.startupindia.gov.in/content/sih/en/startup-scheme.html> accessed 9 August 2020.
30 n 20.
31 Jeff Larson, Surya Mattu, Lauren Kirchner and Julia Angwin, ‘How We Analyzed the COMPAS Recidivism Algorithm’ (ProPublica, 2016) <https://www.propublica.org/article/how-we-analyzedthe-compas-recidivism-algorithm> accessed 9 August 2020.



discrimination. In this way, AI has the potential to disrupt democratic norms. What, then, could be the solution? An adequate legal response would be to pass a data protection law. The Srikrishna Committee provided a framework to begin the dialogue on algorithmic bias.32 Individuals can be given a right to the logic of automated decisions; such a right would balance organisational interests with the need for algorithmic transparency. Data collection by applications like Aarogya Setu should be transparent and limited to its stated use, to avoid situations like the Cambridge Analytica scandal. The MIT Technology Review stated that Aarogya Setu poses significant risks to user privacy compared with similar apps in other countries; days later, the Union government announced it would make the application’s source code public. This was a step in the right direction by the Government.

Equable Principles

AI raises concerns due to its potentially disruptive impact, as discussed earlier. While these issues are significant, they can be addressed with correct planning, oversight and governance. An ethical framework seems necessary to curb the possibility of harmful AI, especially as intelligent technology becomes more prevalent in the products and services we utilize daily. The draft identifies the different groups that shape the future of AI. Each of these stakeholders has a certain impact on the development and usage of AI, and the draft proposes to develop principles for the beneficial use of AI across these stakeholders, based on their considerations. Various stakeholders are mentioned along with their engagement with AI. However, some other stakeholders, such as NGOs, also have an impact on AI and have not been considered in the consultation process on the principles pertaining to Responsible AI. Non-state actors like NGOs should be included in the list of stakeholders. NGOs strive to protect and create awareness about the rights of the distressed in society; they serve a noble purpose and are an integral part of society. Many NGOs in India have been using AI to harness large amounts of data. Akshaya Patra, for example, which strives to eliminate classroom hunger by organizing mid-day meals, is using data analytics to serve children and utilize its funds more effectively. NGOs can also assist those members of society who are affected by AI but do not have the resources to voice their concerns. They are deploying AI to

32 ‘A Free and Fair Digital Economy: Protecting Privacy, Empowering Indians’ (MeitY, 2018) <https://www.meity.gov.in/writereaddata/files/Data_Protection_Committee_Report.pdf> accessed 9 August 2020.



solve logistics problems, such as figuring out the optimum route to deliver food.33 NGOs also spread awareness on issues related to AI; The Public Voice has even come up with guidelines to enhance the use of AI as part of its awareness scheme.34 At the same time, there must be strong background checks to ensure that the participation of NGOs and public trusts does not run counter to national security imperatives in any manner whatsoever. The principles in the draft are formulated after consultation with stakeholders and after considering AI case studies from India and around the world, the Indian Constitution and international standards for AI. The consultation process could also have included academics, scholars and reformers, whose expertise in the field of AI can provide specialist insight on the issue of Responsible AI and guide the formation of principles. The draft provides seven principles to deter the harmful impact of AI. There is no single set of principles followed around the globe; many organizations have come up with such principles relying on different kinds of information and data, but their ultimate aim is usually the ethical, transparent and accountable use of AI in a manner consistent with user expectations, societal laws and organizational values.35 Another principle which could be explored is a Principle of Fair Competition: organizations that develop or use AI should design AI systems in a manner that ensures consistency with the overarching ethos of subsisting competition regimes, so as to promote free and vibrant competition. AI systems should be developed in a “compliance by design” manner.36

Encourage Research into Responsible AI

The Government of India is embarking on a path to facilitate research into Responsible AI. It intends to finance start-ups and research projects pertaining to Responsible AI tools such as explainable AI models, privacy-preserving techniques, etc. The Ministry of Electronics and Information Technology and the National Association of Software and Services Companies (NASSCOM) have even launched a one-stop digital platform for sharing resources such as research

33 ‘Accenture Labs and Akshaya Patra Use Disruptive Technologies to Enhance Efficiency in Mid-Day Meal Program for School Children’ (Akshaya Patra, 2017) <https://www.akshayapatra.org/accenture-labs-and-akshaya-patra-use-disruptive-technologiesto-enhance-efficiency-in-midday-meal-program-for-school-children> accessed 9 August 2020.
34 ‘Universal Guidelines for Artificial Intelligence’ (The Public Voice, 2018) <https://thepublicvoice.org/ai-universal-guidelines/> accessed 9 August 2020.
35 Dominic Delmolina and Mimi Whitehouse, ‘Responsible AI: A Framework for Building Trust in Your AI Solutions’ (Accenture) <https://www.accenture.com/_acnmedia/PDF-92/Accenture-AFSResponsible-AI.pdf> accessed 9 August 2020.
36 ‘Responsible AI Policy Framework’ (ITechLaw, 2018) <https://www.itechlaw.org/sites/default/files/ResponsibleAI_PolicyFramework.pdf> accessed 9 August 2020.



papers and case studies in the field of AI.37 The government also intends to host an international conference on Responsible AI. To mitigate the burden of heavy investment in AI, private entities could be encouraged to invest in start-ups and universities for AI research by providing tax benefits on their other endeavours.

Self–assessment guide for Responsible AI

A self-assessment guide for Responsible AI would allow an AI developer or operator to evaluate the ethics level of an AI system. A step-by-step guide is provided in the draft; it would allow the development and usage of AI to take place within the aforementioned principles.

Ethical Committees

Organisations often face a difficult task when it comes to ethical data collection, usage and sharing.38 One strategy to advance Responsible AI is instituting Ethical Committees, which are accountable for enforcing the principles pertaining to Responsible AI. The Committee’s duties include assessing the potential harms and benefits of an AI solution, evaluating plans and formulating a recommendation indicating whether the solution should be approved, and ensuring an easily accessible and affordable grievance redressal system for the AI’s decisions. The composition of the Committee is also provided in the draft, but the composition table fails to mention the background of the Chairperson. It would be unwise to have a Chairperson without the requisite qualification, expertise and experience to oversee the operations of the Committee.

Conclusions

AI represents a new way of working. With the advancement of AI, a multitude of changes will come about within organisations and society. While the advancement of AI has many benefits, such as workplace efficiency and cheaper costs, it also raises concerns due to its potentially disruptive impact. NITI Aayog’s Draft for Discussion on Responsible AI seeks to address just this disruptive impact. With the formulation of principles and Ethical Committees, it is expected that AI’s negative aspects can be tamed. The concise draft paper throws light on the pro-active stance of the government towards exploiting the potential

37 (AI) <www.ai.gov.in> accessed 9 August 2020.
38 Ronald Sandler and John Basl, ‘Building Data and AI Ethics Committees’ (Accenture, 2019) <https://www.accenture.com/_acnmedia/PDF-107/Accenture-AI-And-Data-Ethics-CommitteeReport-11.pdf> accessed 9 August 2020.



of AI. The world is bracing for a massive AI influx, and India is gearing up to take full advantage of the scenario.

Recommendations

1. IBM has been developing methodologies to help reduce machine bias even when training datasets are not available, in which a three-level rating system is used to ascertain relative fairness by rating a system on the parameters of presence of bias, inheritance of bias and capability to introduce bias regardless of the data.

2. In order to solve a problem, the problem must first be understood. The legal questions surrounding AI and the black box problem cannot be solved entirely by technological solutions. Therefore, the first step should be for policy makers to understand the various aspects of cross-disciplinary uses of opaque systems. One such example is the course ‘The Ethics & Governance of Artificial Intelligence’, which focuses on analyzing the nuances of algorithmic decision-making and autonomous systems and seeks a balance between regulation and innovation.

3. Another solution would be working towards a system wherein the technology is tweaked so as to produce largely explainable results. This draws from the Explainable AI program run by the Defense Advanced Research Projects Agency (DARPA), which seeks to create prediction accuracy while explaining the rationale behind decisions, and simultaneously aims to develop a human interface into the process.

4. In terms of transparency and its achievability, a source to draw from is the General Data Protection Regulation (GDPR). Article 13 of the GDPR provides for a system of consent along with a system of checks and balances by also placing accountability upon the controller. Further, Articles 13 and 14 in combination provide a plausible solution to the reliability problems that the black box problem creates at deployment.

5. These system considerations can be overcome through the development of artificial intelligence working on multimodal explanations.
Under this system, textual rationale generation and attention visualization are used to build upon explanatory strengths. The artificial intelligence does not merely answer the question asked or solve the problem posed but also provides evidence for it through visual pointing. Combined with a system in which more than one AI system is used to arrive at a conclusion, this further reduces the scope for ambiguity and provides a platform which may allow policy makers to map the point where a discrepancy between the two systems arose.

6. The policy needs to take into consideration that if the black box problem cannot be completely eliminated, then pre-existing rules may be customized for it. One such possibility is the principle of Vicarious Liability.



This principle cannot have a blanket, uniform application wherever credibility has been compromised; however, it can be applied with a pro rata approach. Circumstances where the AI was designed to achieve a specific task, or where the probability of externalizing failure onto others is higher, would attract a stricter application of the liability principle. For situations where the probability is lower, the test of foreseeability could help determine whether the consequence that occurred was apparent or reasonably foreseeable.

7. Although laws already exist and the problems remain similar, their nature has shifted with regard to AI. For example, the privacy issues which existed earlier differ from the privacy issues raised by AI; the law will therefore face difficulty if pre-existing rules are applied without customization. Regulations should be developed with an ‘adaptive & anticipatory’ approach.

8. The Indian government should proactively collect data relating to the employment scenario to better prepare for AI. Estimates of employment variables could be prepared through household surveys, surveys of businesses/enterprises, administrative sources and data from government schemes. The Government could also conduct periodic economy-wide skill-gap analyses to prepare the labour market for a future dominated by AI, bots, etc. This would enable data-driven policies by tracking new developments in the country’s job scenario.

9. With over 50% of India’s population under 25, an appropriate step would be to expose the young workforce to AI interfaces and machine learning. Online training should be encouraged and the education curriculum should evolve to prepare students for the impact of AI. The current education system seems to be producing students with low employability.
A revamp of the curriculum, teacher training and improvements in infrastructure are necessary; the new National Education Policy (NEP 2020) is a step in the right direction. The government should set up career counseling centers to make unaware youth aware of the changing dynamics and new possibilities in the job market.

10. India’s massive consumer market can be leveraged to access the latest technology by insisting on technology transfer during FDI deals. Global firms should be encouraged to transfer advanced technology, create joint ventures and assist Indian firms, which would allow Indian firms to compete in areas such as artificial intelligence and robotics.

11. The government could boost employment in areas which are less vulnerable to automation. Education and healthcare are sectors involving high human engagement which cannot be automated easily. Jobs in sectors like arts and entertainment are interpersonal and creative, and not under immediate threat of automation. Labour-intensive industries such as tourism could be pushed aggressively, as India ranked 34th in the



Travel and Tourism Competitiveness Report 2019, with much room for improvement. These sectors could be boosted to increase their capacity to create jobs.

12. Boosting start-ups can be very beneficial in this changing job scenario, as small enterprises, rather than big factories, appear to be the area that can be tapped to provide more jobs in the future. India has a large unorganized and informal sector; start-ups can provide business models that address the inefficiencies in various sectors and end up generating vast job opportunities. Higher procurement of goods and services from domestic start-ups by the Government could boost entrepreneurship. Moreover, entrepreneurship courses provided by universities could also lift the start-up culture in India. The Government’s Start-up scheme is a step in the right direction, and its implementation should be aggressive enough to reach even the least advantaged groups.

13. Centers of Excellence (CoEs) could be established that use technology tools to develop standards and provide counselling for the youth on emerging jobs of the future. Such centers should be established in every government university.

14. An adequate legal response would be to pass a data protection law. The Srikrishna Committee provided a framework to begin the dialogue on algorithmic bias. Individuals can be given a right to the logic of automated decisions; such a right would balance organizational interests with the need for algorithmic transparency.

15. Data collection by applications like Aarogya Setu should be completely transparent and limited to its stated use, to avoid situations like the Cambridge Analytica scandal. The MIT Technology Review stated that Aarogya Setu poses significant risks to user privacy compared with similar apps in other countries; days later, the Union government announced it would make the application’s source code public.
This was a step in the right direction by the Government.

16. Non-state actors like NGOs should be included in the list of stakeholders. NGOs strive to protect and create awareness about the rights of the distressed in society; they serve a noble purpose and are an integral part of society. Many NGOs in India have been using AI to harness large amounts of data. Akshaya Patra, for example, which strives to eliminate classroom hunger by organizing mid-day meals, is using data analytics to serve children and utilize its funds more effectively. NGOs can also assist those members of society who are affected by AI but do not have the resources to voice their concerns. They are deploying AI to solve logistics problems such as figuring out the optimum route to deliver food. NGOs also spread awareness on issues related to AI; The Public Voice has even come up with guidelines to enhance the use of AI as part of its awareness scheme. At the same time, there must be strong



background checks to ensure that the participation of NGOs and public trusts does not run counter to national security imperatives in any manner whatsoever.

17. The consultation process could also have included academics, scholars and reformers, whose expertise in the field of AI can provide specialist insight on the issue of Responsible AI and guide the formation of principles.

18. Principle of Fair Competition: organisations that develop or use AI should design AI systems in a manner that ensures consistency with the overarching ethos of subsisting competition regimes, so as to promote free and vibrant competition. AI systems should be developed in a “compliance by design” manner.

19. To mitigate the burden of heavy investment in AI, private entities could be encouraged to invest in start-ups and universities for AI research by providing tax benefits on their other endeavors.

20. The Chairperson should have the requisite qualification, expertise and experience to oversee the operations of the Committee.
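Recommendation 1 envisages rating AI systems on bias-related parameters, and the ProPublica study cited earlier rested on exactly this kind of measurement. The sketch below is purely illustrative: the records, group labels and numbers are invented for demonstration, and the function is a minimal version of a group-wise error-rate comparison, not any vendor's actual methodology.

```python
# Illustrative sketch: comparing false positive / false negative rates
# across groups, in the spirit of ProPublica's COMPAS analysis.
# All data below is hypothetical.
from collections import defaultdict

def error_rates_by_group(records):
    """For each group, compute the false positive rate (labelled high risk
    but did not re-offend) and the false negative rate (labelled low risk
    but did re-offend)."""
    counts = defaultdict(lambda: {"fp": 0, "neg": 0, "fn": 0, "pos": 0})
    for group, predicted_high_risk, reoffended in records:
        c = counts[group]
        if reoffended:
            c["pos"] += 1
            if not predicted_high_risk:
                c["fn"] += 1  # labelled low risk, but re-offended
        else:
            c["neg"] += 1
            if predicted_high_risk:
                c["fp"] += 1  # labelled high risk, but did not re-offend
    return {
        g: {
            "false_positive_rate": c["fp"] / c["neg"] if c["neg"] else 0.0,
            "false_negative_rate": c["fn"] / c["pos"] if c["pos"] else 0.0,
        }
        for g, c in counts.items()
    }

# Hypothetical records: (group, predicted_high_risk, reoffended)
sample = [
    ("A", True, False), ("A", True, False), ("A", False, False), ("A", True, True),
    ("B", False, False), ("B", False, True), ("B", False, True), ("B", True, True),
]
rates = error_rates_by_group(sample)
```

A large gap between groups in either rate, of the kind the COMPAS study reported, would show up directly in the returned dictionary; a rating scheme such as the one in Recommendation 1 could then grade the system on the size of that gap.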



5

Indian Strategy on AI & Law, 2020 - AI Diplomacy Report: Critical Review 2020

Sameer Samal1, Sameeksha Shetty2

1 Junior Research Analyst, Indian Society of Artificial Intelligence and Law
2 Research Contributor, Indian Society of Artificial Intelligence and Law

research@isail.in

Synopsis. Diplomacy is the interaction between sovereign states and other international non-state actors through dialogue, negotiation and other non-violent means. Historically, diplomatic practice was the conduct of bilateral and official relations between sovereign states (Chas. W. Freeman, 2019). However, the notion of diplomatic and consular relations has transformed over the years, and for this piece the impact of Artificial Intelligence on diplomacy will be analyzed on two fronts: (i) AI as a diplomatic tool, and (ii) AI as a diplomatic topic. These two fronts are often confused when the impact of AI on diplomacy is discussed. Dr. Corneliu Bjola, in his report ‘Diplomacy in the Age of AI’, has studied at length the impact of artificial intelligence technology on diplomacy on both fronts. This piece will therefore analyze the report and explore developments by China, Russia and the United States in order to propose how its insights could be implemented in India.

Working paper by the Emirates Diplomatic Academy

The working paper authored by Dr. Corneliu Bjola examines the concept of artificial intelligence and defines it as “the activity by which computers process large volumes of data using highly sophisticated algorithms to simulate human reasoning and/or behavior”. The standard test for evaluating the ability of machines to act like humans is the Turing test, created by Alan Turing in 1950 and subsequently named after him. To pass the test, a computer program or machine must be able to engage in conversation with a human for a minimum of five minutes in such a way that the human interrogator cannot identify or distinguish the machine from a human being. So far, no computer program or machine has satisfactorily passed this test. A statement by the former US Secretary of Defence, Donald Rumsfeld: “as we know, there are known knowns; there are things we know we know. We also



know there are known unknowns; that is to say, we know there are some things we do not know. But there are also unknown unknowns – the ones we don’t know we don’t know […] it is the latter category that tends to be the difficult ones” (Rumsfeld, 2002), successfully captures the uncertainties surrounding the development of AI technology. The working paper analyzes these concepts to clarify the ambiguity surrounding AI and decision-making. The author agrees, based on contemporary developments, that AI systems can provide decisional assistance from advice-giving positions. However, it is also known that the structure of these decisions determines their effectiveness. We also know that we do not yet know whether an AI system will be capable of supporting active decision-making, wherein it assists continuously and automatically based on a steady description and active loops. Moreover, we do not know what we do not know: whether what is happening around us due to the AI revolution “represents the trunk of an elephant or the tail of a mouse”. To explore AI’s potential for assisting diplomacy, it is necessary to map the areas of diplomacy where AI could make a significant difference. This also involves assessing the nature of AI’s contribution, its impact, and the risks it presents for diplomatic work. In furtherance of the same objective, the author distinguishes two fronts regarding the impact of AI on diplomacy: (i) AI as a diplomatic tool, and (ii) AI as a diplomatic topic. As a tool for diplomacy, AI has a specific and targeted impact, assisting and supporting the day-to-day tasks of diplomats and other consular services. As a topic for diplomacy, AI has a broad impact covering civil rights, security and the economy of a sovereign state. The paper advances the TIID Framework to design AI for diplomatic activities and functions.
TIID stands for ‘Task’, ‘Innovation’, ‘Integration’ and ‘Deployment’, and proposes a specific sequence for designing a model. The flow of design begins with “an examination of the specific profile of the diplomatic task that is expected to be improved, continues with an evaluation of the type of innovation required for restructuring the service, a discussion of the level of integration of the physical and digital dimensions of the service, and concludes with an examination of the availability and suitability of the existing institutional configuration”.
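The TIID sequence is, in effect, an ordered design checklist. As a loose illustration only, with the stage names taken from the report but the guiding questions paraphrased and the code structure entirely hypothetical, the sequence might be modelled as:

```python
# Hypothetical sketch of the TIID design sequence described above.
# Stage names follow the report; guiding questions are paraphrases.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    guiding_question: str

TIID_SEQUENCE = [
    Stage("Task", "What is the specific profile of the diplomatic task "
                  "expected to be improved?"),
    Stage("Innovation", "What type of innovation is required to "
                        "restructure the service?"),
    Stage("Integration", "How far are the physical and digital "
                         "dimensions of the service integrated?"),
    Stage("Deployment", "Is the existing institutional configuration "
                        "available and suitable?"),
]

def design_review(answers):
    """Walk the TIID stages in their fixed order, pairing each guiding
    question with the designer's answer; a skipped stage is an error."""
    report = []
    for stage in TIID_SEQUENCE:
        if stage.name not in answers:
            raise ValueError(f"missing answer for stage: {stage.name}")
        report.append((stage.name, stage.guiding_question, answers[stage.name]))
    return report
```

The point of the sketch is simply that the framework prescribes an order: a deployment question cannot be answered before the task profile and innovation type have been examined.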

AI as a diplomatic tool

As the use and capabilities of artificial intelligence increase and diversify, its expansion into different spheres and industries can be observed. The EDA Report analyses the potential adoption of AI as a diplomatic tool and explicates the benefits, detriments and effects of its employment. The use of



artificial intelligence as a diplomatic tool is inevitable, as human resources cannot match the speed and wide-ranging reach of artificial intelligence in assisting diplomatic functions. With the use of AI technology constantly expanding and global competition increasing, its employment is necessary for a country to compete in the global arena. In this section, the capabilities of AI as a diplomatic tool will be analysed. In the EDA report, diplomacy is defined, in contrast to foreign policy (which deals with the creation of a particular strategy developed in light of national interests), as the processes, such as representation, communication and negotiation, by which a strategy is implemented (Bjola, 2020). To adopt AI in diplomacy, public diplomacy, bilateral and multilateral engagement, and the gathering and analysis of information are considered integral for successful and effective integration (Bjola, 2020). If shared or made publicly available, AI tools can substantially contribute to levelling the playing field at international negotiation tables (Hone, Hibbard, & Maciel, 2020). For example, the Cognitive Trade Advisor (CTA), a software tool designed to support diplomats in preparing for international trade negotiations, was launched at the 2018 World Trade Organization (WTO) Public Forum (Hone, Hibbard, & Maciel, 2020). Bilateral and multilateral engagement with like-minded allies to exchange views on these issues, combined with on-the-ground analysis aimed at better policy decisions (Bjola, 2020), is a beneficial approach to the integration of AI, especially for developing countries which might face challenges in developing AI tools on their own. During the cyber revolution, the integration of technology-focused knowledge and skills into pre-existing diplomatic practice was a challenging task; it was unevenly implemented and yielded mixed results (Scott, Heumann, & Lorenz, 2018).
AI being the new technological change, its effectiveness will hinge on all-round integration and institutional reform. The use of AI as a diplomatic tool is therefore a complex task which should be undertaken through a holistic approach to maximise benefits. A pragmatic and incremental reform strategy can be useful for controlled integration, with frequent tests allowing for additions and changes along the way. AI can be integrated into diplomacy by freeing science and technology knowledge from its rigid national and institutional enclosures (Wagner & Furst, 2018). Varied integration of AI, including its adoption as a diplomatic tool, will enable faster development of AI even in national spheres, as there will be a cumulative effect. Discussion about AI and diplomacy also highlights the need to mediate the interplay between the social and political world of the diplomat and the technical world of the computer



scientist and technologist (Wagner & Furst, 2018). For the effective development of AI software suitable as a diplomatic tool, determining needs and setting goals and key performance indicators should be the first step, leading to the creation of a project description with clearly specified objectives (Hone, Hibbard, & Maciel, 2020). This should be followed by research and testing, including analysis of the market and of the technical capabilities available online, so that the advantages and shortcomings of the different software solutions can be clearly identified (Hone, Hibbard, & Maciel, 2020). AI systems can assist decision-making by serving as an assistant, critic, second opinion or consultant (Bjola, 2020). The EDA report predicts that AI software is likely to evolve to allow the automation of routinised tasks and services, but will likely be kept out of strategic decision-making for technical and ethical reasons (Bjola, 2020). Analysing reports and treaties, for example in preparation for negotiations, can be a time-consuming task for diplomats (Hone, Hibbard, & Maciel, 2020). Using AI for routinised tasks is a beneficial starting point, as a structured software system will enable the performance of these tasks, and observing the efficiency of such operations is relatively simple. Using AI for routinised tasks has evident benefits such as speed and efficiency, but can also bring additional benefits such as the alleviation of human bias and error. The analysis of texts at scale also has the potential to make the work of diplomats more effective and to free up time and resources (Hone, Hibbard, & Maciel, 2020). The EDA report suggests the use of hybrid AI technology, which combines minimal training data and programming abilities, facilitating easy generalisation by deriving symbolic representations from supervised learning situations (Bjola, 2020).
Human qualities are required for more complex diplomatic tasks, such as reading and responding to physical and social cues and exercising judgment, which cannot be performed by simple AI software. A multi-stakeholder approach including private companies, research institutions and civil society organizations will be effective, as the use of AI is bound to have systemic implications that will alter many different areas of foreign policy work, ranging from economics to security and democracy (Scott, Heumann, & Lorenz, 2018). Diplomats need to be able to adapt to and comfortably deal with shifts in the way existing topics are discussed due to the implications of AI, while also dealing with the emergence of new, AI-related topics on the international agenda (Hone, Hibbard, & Maciel, 2020). Human resources, including the hiring and training of individuals with the requisite skills, have to be developed in relation to AI technologies. Familiarisation with AI technology and its implications will be necessary in the near future. The AI revolution is considered more powerful than the industrial
revolution because it has inundated people's personal lives in addition to impacting industries (Amaresh, 2020). The use of AI in diplomacy can give smaller, less competitive nations a competitive edge in the international sphere, enabling them to secure preferential trade and investment arrangements and to progress into previously unimagined areas of international trade and diplomacy (Wagner & Furst, 2018). If developed strategically, it could secure a significant advantage and be utilized to obtain favourable agreements and status. AI is said to have the potential to reshuffle the winners and leaders in global markets due to its transformational capabilities (Scott, Heumann, & Lorenz, 2018). AI-assisted diplomacy should thus be considered an important objective by MFAs.

AI as a topic for diplomacy

The potential of Artificial Intelligence as a topic for diplomacy is vast and complex. The impact of AI as a topic for diplomacy and international relations can be assessed based on three key themes: business and economy, defence and security, and human rights and ethics (Katharina E Hone, 2019).

- Business and economy: AI advancement and leadership will enable nations to concentrate economic gains and direct international influence in unprecedented ways, as AI presents tremendous opportunities in this area. However, unlike other technological advancements, AI has the potential to disrupt domestic and international economies. The service industry will be deeply affected, which could create research and development opportunities for developing nations. At the same time, automation may negatively impact the labour workforce in developing countries. These changes will invariably affect both domestic and international trade and might work against nations that do not invest in the development of this technology.

- Defence and security: the use of AI in warfare and advanced weaponry systems has the potential to shift the balance of power between nations. Since almost all countries are actively researching the military and defence uses of AI, they will also have to strive for strategic stability in their regions by way of diplomacy. The involvement of AI in advanced weapon systems will cause a paradigm shift in diplomatic relations between Russia, China, the US, Israel, India and other technologically advanced nations. Moreover, the emergence of AI-powered technology in terrorism will also impact international relations and diplomacy.


- Human rights and ethics: increasing surveillance and concerns about discrimination arising from algorithmic bias in decision-making have the potential to affect the human rights and civil rights of citizens. The role of Artificial Intelligence in aiding discrimination is a well-researched topic and one of the key areas in the existing ethics debate (Access Now, 2018). The Access Now report, created in partnership with human rights organizations and AI companies, examined the following specific rights to assess the impact of AI:

(i) Rights to life, liberty and security, equality before the courts, and a fair trial: Articles 9 and 14 of the ICCPR; Articles 3, 6, 7, 8 and 10 of the UDHR.
(ii) Rights to privacy and data protection: Article 17 of the ICCPR; Article 12 of the UDHR; and Article 8 of the EU Charter of Fundamental Rights.
(iii) Right to freedom of movement: Article 12 of the ICCPR.
(iv) Rights to freedom of expression, thought, religion, assembly, and association: Articles 18, 19, 21 and 22 of the ICCPR and Article 18 of the UDHR.
(v) Rights to equality and non-discrimination: Articles 26 and 27 of the ICCPR.
(vi) Rights to political participation and self-determination: Article 25 of the ICCPR and Article 21 of the UDHR.
(vii) Prohibition on propaganda: Article 20 of the ICCPR.
(viii) Rights to work and an adequate standard of living: Articles 6, 7 and 11 of the ICESCR; and Articles 23 and 25 of the UDHR.
(ix) Right to take part in cultural life and enjoy the benefits of scientific progress: Article 15 of the ICESCR and Article 27 of the UDHR.

Sample plans by China, Russia, Canada and the US

The development and use of Artificial Intelligence as a diplomatic tool has the potential to alter the existing structure of a country's foreign relations and influence. Failure or reluctance by a nation to develop AI and integrate it into its foreign affairs strategy, specifically as a diplomatic tool, may place it at a strategic disadvantage compared to other countries. In this segment, the plans and strategies adopted by a few countries are explored to analyse the effect of integrating Artificial Intelligence into diplomacy. A few nations, including China, Russia, Canada and the United States of America, have started exploring the possibility of AI assisting in diplomatic functions. The Russian government has expressed the view that AI has the potential to determine the future ruler of the world.



With respect to the role of AI in Chinese foreign policy, the development of AI as an assistant in decision-making, offering inputs, suggestions and recommendations, is being considered (Amaresh, 2020). China has introduced an AI foreign policy toolbox, which it plans to use to become the world leader in AI by 2030 (Amaresh, 2020). The Chinese Academy of Sciences has also built machine-learning algorithms that are currently being implemented in the MFA (Amaresh, 2020). The integration of AI into various functions of the MFA will enable a smoother implementation of AI as a diplomatic tool, and as China has already commenced such efforts, it has an advantage over other countries. Along with China, the USA is also developing its AI infrastructure and capabilities. As per the USA's Strategic Plan of Information Technology for 2017-2019, American diplomats are using powerful AI technology to make policy changes, enhance transparency and promote awareness (Amaresh, 2020). Further, President Donald Trump signed an executive order establishing the "American AI Initiative", which includes the use of AI by the MFA (Amaresh, 2020). According to the CIA, these capabilities are so advanced that in some cases they can predict social unrest and instability three to five days in advance (Abedi, 2020). The government of Canada has also laid a foundation for the all-round incorporation of AI through various initiatives, such as digital inclusion labs associated with governmental units and civil society (Amaresh, 2020). Canada has also invested in the Pan-Canadian Artificial Intelligence Strategy by setting up research hubs in Montreal, Toronto and Edmonton (Amaresh, 2020). To effectively implement AI as a diplomatic tool, a holistic approach of integration across other industries and areas is important.
The EDA report stresses that a strong domestic S&T culture is imperative, reducing dependence on attracting and retaining foreign talent, which is already in short supply (Bjola, 2020). The use of AI in diplomacy could change the stakes and power that a country holds and change the global order, as elucidated previously. However, the author of the EDA report warns that the rush to secure a first-mover advantage could lead to the pre-emptive deployment of unsafe AI systems (Bjola, 2020). Governance of AI systems is therefore imperative to minimize the security and ethics concerns that are likely to emerge with the increasing capabilities and expanding adoption of AI technologies. The 'AI effect' describes the tendency of people to become accustomed to a particular technology, and to stop regarding it as AI, as AI continually brings new technologies into the common fold until newer technology emerges (Bjola, 2020). By the time AI use in diplomacy becomes a common phenomenon, it is essential for a country to have integrated it successfully to keep
up with the competition. Multi-agent diplomacy will soon become the norm, and developing AI is therefore essential for a nation to secure its position. A nation tends to make a move to which others respond, but ultimately, all nations want to triumph (Amaresh, 2020). Allocating resources for the development of AI in fields such as diplomacy is evidently important, as it is an area where the capabilities of a country or its agents are judged relative to those of others. This can be done effectively through a pragmatic and holistic approach, in coordination with other nations pursuing similar strategies.

Concerns and Challenges

Various factors and concerns need to be assessed in relation to the use of Artificial Intelligence technologies in diplomacy. There are inherent dangers connected to the expansion and development of Artificial Intelligence, and some specific issues arise when AI is used by governments. The EDA report outlines economic disruption, security and autonomous weapons, and democracy and ethics as the three areas at the intersection of AI and foreign policy that should be monitored (Bjola, 2020). If these factors are not considered and evaluated, considerable harm could result. The costs of monitoring these risks also need to be considered before implementing AI technology. The EDA report also lays down exit points for AI: it stresses the importance of slowing down or abandoning the use of AI where it is being implemented in a way, or at a pace, that has possible negative consequences (Bjola, 2020). The decision-making environment places several constraints on the ability of foreign policy makers to compare, assess and pursue preferred courses of action. Time constraints may lead to increased reliance on cognitive shortcuts and a search for satisfactory rather than optimal solutions (Bjola, 2020). The quality of the information processed by the software, and attitudes toward risk and ambiguity, affect the quality and effectiveness of results (Bjola, 2020). Efficient systems need to be set in place prior to the integration of AI technology to ensure that these constraints are factored in and can be monitored. The EDA report enumerates an array of views toward AI, including optimistic, pessimistic and pragmatic views (Bjola, 2020). Pragmatists believe that, with careful planning and regulation and by considering its negative consequences, the
power of machines and technology can be used to augment our skills (Bjola, 2020). As mentioned earlier, a pragmatic approach is recommended to ensure that sufficient measures can be taken in response to possible negative impacts of the use of AI. Caution has been expressed that an AI arms race could become a self-fulfilling prophecy (Chivot & Höne, 2019). Competition between powers could produce a cascading and accelerating effect on technological development, as military capabilities remain a symbol of power (Chivot & Höne, 2019). There is great danger that AI-powered military systems and military-led decision-making will undermine existing approaches to conflict containment and de-escalation (Wagner & Furst, 2018). Ethical considerations include the level of control diplomats may exert over AI-enabled platforms, the capacity of AI to enable high levels of social control at reasonable cost, and the prospect of a digital authoritarian state (Hone, Hibbard, & Maciel, 2020). Measures need to be taken by the international community as a whole to prevent such an occurrence. The European Commission's Consultation on the Draft AI Ethics Guidelines contains principles such as fairness, inclusiveness, transparency and predictability, security and privacy, accountability, and reliability and safety (Chivot & Höne, 2019). Further discussion along these lines, and the development of agreements and treaties to govern the use of AI, are essential. Accountability in relation to the use of AI technology is another concern. It is twofold: first, the attribution of responsibility to a particular person or group, and second, enforceability and governance. In the case of AI in diplomacy, attribution is not a pressing concern, as a country or its MFA is directly responsible for diplomatic decisions and responsibility can therefore be attributed to them. However, governance and the enforcement of penalties in the international sphere are difficult.
First, treaties and international regulations governing the use of AI as a diplomatic tool need to be put in place. Further, as in Public International Law generally, enforcement of penalties is not entirely possible: countries can decline to become signatories to treaties and can also fail to comply with sanctions imposed. The most effective method to counteract or minimize the risks arising from the expansion of AI into new spheres such as diplomacy, i.e., from the domestic to the international, is to further expand its reach through international cooperation. International cooperation enables the creation of new treaties or regulations for the controlled development of AI in diplomacy and other international activities, as well as the governance of AI development in light of the attendant risks and security concerns. Thus, the controlled use of AI technology, and the monitoring of constraints and other factors that affect the effectiveness and consequences of its use, need to be mandated.



Conclusions

The opportunities and challenges that AI poses across various fields will directly shape its use as a tool and as a topic for diplomacy. Technologically advanced nations have already begun implementing AI-enabled systems to assist foreign diplomats in day-to-day activities as well as to analyze complex international relations among sovereign states. Indeed, reluctance by a nation to implement AI tools in diplomatic activities will place it at a strategic disadvantage in the international sphere. Failure to hold steady ground in domestic AI development will also prove to be a matter of concern. Artificial Intelligence has advanced into a dimension where governance and regulation on the basis of precedent will seem ambiguous. It is therefore necessary for nations to research and implement AI systems for diplomatic purposes with the utmost care.

References

1. Wagner, D., & Furst, K. (2018, August 12). AI and the International Relations of the Future. Retrieved July 25, 2020, from International Policy Digest: https://intpolicydigest.org/2018/08/12/ai-and-the-international-relations-of-the-future/
2. Hone, K. E., Hibbard, L., & Maciel, M. (2020). Mapping the Challenges and Opportunities of Artificial Intelligence for the Conduct of Diplomacy. Geneva: DiploFoundation.
3. Bjola, C. (2020). Diplomacy in the Age of Artificial Intelligence. Emirates Diplomatic Academy.
4. Scott, B., Heumann, S., & Lorenz, P. (2018). Artificial Intelligence and Foreign Policy. Stiftung Neue Verantwortung.
5. Amaresh, P. (2020, May 13). Artificial Intelligence: A New Driving Horse in International Relations and Diplomacy. Retrieved July 23, 2020, from Diplomatist: https://diplomatist.com/2020/05/13/artificial-intelligence-a-new-driving-horse-in-international-relations-and-diplomacy/
6. Abedi, S. (2020, March 06). Diplomacy in the Era of Artificial Intelligence. Retrieved July 23, 2020, from Diplomatist: https://diplomatist.com/2020/03/06/diplomacy-in-the-era-of-artificial-intelligence/
7. Chivot, E., & Höne, K. (2019, February 04). Event Recap: The Impact of AI on Diplomacy and International Relations. Retrieved July 24, 2020, from Center for Data Innovation: https://www.datainnovation.org/2019/02/event-recap-the-impact-of-ai-on-diplomacy-and-international-relations/
8. Access Now. (2018). Human Rights in the Age of Artificial Intelligence. s.l.: Access Now.
9. Freeman, C. W., & Marks, S. (2019, January 17). Diplomacy. Encyclopaedia Britannica. Retrieved July 26, 2020, from https://www.britannica.com/topic/diplomacy
10. Hone, K. E., Andelkovic, K., Perucica, N., Saveska, V., Hibbard, L., & Maciel, M. (2019). Mapping the Challenges and Opportunities of Artificial Intelligence for the Conduct of Diplomacy. s.l.: DiploFoundation.
11. Rumsfeld, D. H. (2002). DoD News Briefing – Secretary Rumsfeld and Gen. Myers. US Department of Defense. Retrieved July 28, 2020, from https://archive.defense.gov/Transcripts/Transcript.aspx?TranscriptID=2636



6

Report on the Analytical Understandings Behind TikTok's Ban by the Government of India

Arushi Mittal¹, Vedant Sinha² & Sameeksha Shetty³

¹ Research Intern (Former), Indian Society of Artificial Intelligence and Law
² Research Analyst, Indian Society of Artificial Intelligence and Law
³ Research Contributor, Indian Society of Artificial Intelligence and Law

research@isail.in

Synopsis. On June 29, 2020, the Government of India banned 59 mobile applications by invoking its power under Section 69A of the Information Technology Act, 2000, read with the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009. The ban was announced through a Press Release, and the reason given for banning these apps is "protecting the sovereignty, integrity, defence, state security and Public Order of India". The Report examines the analytical underpinnings of the ban of TikTok and the other applications under the assessment parameters of (1) Proceduralism; (2) Constitutionalism; (3) Algorithmic Politics and Diplomacy; and (4) Economic Rejuvenation and Limitations, with an Executive Summary by the Chairperson, ISAIL.

Executive Summary by the Chairperson

Algorithmic Politics is on a deep yet unclear surge. Special reasons that have drawn algorithms into politics and strategy are the extra-diversification of artificial intelligence and the congeniality of disruptive technology. Although the use of technology is connected with the constructs of ideology and policy, it is important to estimate, from a nation-state's ideological upbringings, how algorithms, or any kind of technology, can have a sustainable and reasonable future. We therefore believe that the TikTok+59 apps ban initiated by the Government of India should not be seen in direct connection with the border dispute and engagement between India and China that occurred on June 15, 2020 near the Galwan Valley. India is a founding member of the Global Partnership on Artificial Intelligence, an upcoming chair of the G20 and an incoming non-permanent member of the UN Security Council, and has already been invited by the Russian Federation to join the Eurasian Economic Union. Therefore,
India's role in a multilateral world will surely be significant. In light of digital South-South Cooperation and the lack of a reinvented cybersecurity strategy despite the Government's genuine efforts, this move, we believe, is central to India's interest in safeguarding the nexus between the data of people within the cyber and territorial jurisdiction of India and the algorithmic tools involved, which raise cybersecurity and cyber identity concerns as well. This is a brief analysis of the ban of TikTok by our Research Interns, and we are open to any kind of criticism or comment. You can mail your comments to us at executive@isail.in.

Abhivardhan
Chairperson & Managing Trustee
Indian Society of Artificial Intelligence and Law

Proceduralism and Cyber Security

• The procedural formalities for enforcing the ban are given in the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009.
• According to Rule 5, the power to impose such a ban can be exercised only upon request and review from a competent Court or "Nodal Officer".
• Another rule that needs to be complied with is Rule 7 of the same Rules, which lays down that a request for blocking a computer resource must be reviewed by a Committee comprising a "Designated Officer as its chairperson and representatives, not below the rank of Joint Secretary in Ministries of Law and Justice, Home Affairs, Information and Broadcasting and the Indian Computer Emergency Response Team" appointed under Sub-section (1) of Section 70B of the Act.
• Rule 8 also applies and requires that the Designated Officer serve a notice on the intermediary against whom the blocking request is made; the intermediary is allowed a chance to file a reply or give clarifications.
• Subsequently, the Committee compares its findings with Section 69A and approves the ban only if it finds merit in the request. The Press Release does not mention under which Rules the ban has been sanctioned, so it might be contended that the government has failed to comply with the legal procedure.
• Keeping the issue of procedural law aside, and to avoid limited literal interpretations: as per the Press Release, the Indian Cyber Crime Coordination Centre in the Ministry of Home Affairs had requested the Government of India to take action with respect to the applications and the cybercrime implications duly involved.
• From the text of the Press Release made available by the
Press Information Bureau, it appears that recourse might have been taken to Rule 9, which allows a ban to be sanctioned in an emergency situation. Under this provision, the Designated Officer under the Information Technology Act, 2000 may directly transfer the blocking request to the Secretary of the Department of Information Technology, who may sanction the ban if he finds the reasons satisfactory. The request is then directed to the Committee for ratification and sanction. The first issue to be understood is that the term 'emergency situation' should not be equated with Articles 352 to 360 of the Constitution of India, 1950, because the proceduralist function of emergency provisions is yet to evolve despite the available literature. There is also a lack of policy research on how emergency reservations can be established by the state, what services would be covered, what services are affected, and so on. This may not be a pressing issue for administrators, because the executive wing of government has primacy over implementing such recourses. However, given the lack of settled approach and literature, it is clear that terms like 'emergency situation' cannot be construed by reference to the constitutional framework for the presidential proclamation of emergency. The second issue is that, under the Blocking Rules of 2009, individual grounds for blocking must be established, if terms like 'software' are readily defined under the retrospective legal instruments of Indian law. However, that still does not settle the argument over whether the ban itself was selective, and the Ministry of Home Affairs may render some clarification on the nature of the ban later. Nevertheless, the selective banning of apps would be subject to national security implications, which cannot be reduced to popular or academic opinion.

Constitutionalism

• As this decision is unprecedented, there is not much literature available on it with respect to Constitutional Law. We do not find ourselves competent to subjectively apply the triangular and clustered representation of Articles 14, 19 and 21 of the Constitution of India, 1950. However, keeping subjective interpretive models aside, it is asserted that a ban of 59 apps would have obvious effects on the exercisable opportunity connected to Article 19, effects that can be connected not only to the 59 apps but to any particular service or instrument that might be a subject matter in issues related to Part III of the Constitution.



• In Anuradha Bhasin vs Union of India [W.P(C). No.19716/2019-L], the Supreme Court explicitly laid down that restrictions must be congruent with the proportionality standard. However, the standard of proportionality so proposed, according to our estimates, cannot be formalistic, nor can it bear absolute and dictatorial control over affairs related to state action and Part III of the Constitution of India, 1950. It is thus quite indeterminate how exact proportionality and its dimensions would be.

Algorithmic Politics and Espionage

TikTok's journey to relevance has been a rough road, and the app has ultimately become a geopolitical casualty of the mixed relations between the People's Republic of China and the Republic of India. It drew a bag of mixed responses as it flourished in India, with several proponents vying for it and others decrying its use. The common response and concern across the world, however, has been similar: that it is a surveillance tool thinly veiled as a social media app.
• TikTok was still placed well behind Facebook, Google and the like, but experienced growth described by Sheryl Sandberg as 'worrying'. The point of focus, however, lies upon a different aspect: the manner of usage of the app.
• TikTok is characterised by an exceptional algorithm that super-personalises recommendations for each user. A basic conjecture can be sustained that such recommendation algorithms utilise a larger dataset than other social media platforms, and that data privacy issues therefore arise. However, this idea does not hold much water, as the amount of data generated by TikTok through user interaction is less than that of YouTube, Facebook and Instagram.
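The super-personalisation described here can be caricatured in a few lines of Python. The signal types mirror those this report enumerates (accounts followed, liked content, language setting, percentage watched), but every weight, field name and data structure below is hypothetical, not TikTok's actual algorithm:

```python
from dataclasses import dataclass, field

@dataclass
class Video:
    creator: str
    hashtags: set
    language: str

@dataclass
class UserProfile:
    followed: set = field(default_factory=set)
    liked_tags: dict = field(default_factory=dict)  # hashtag -> like count
    language: str = "en"

def score(video: Video, user: UserProfile, watch_pct: float = 0.0) -> float:
    """Toy relevance score combining the signal types named in the text:
    accounts followed, liked hashtags, language setting, percentage watched.
    All weights are illustrative."""
    s = 0.0
    if video.creator in user.followed:
        s += 2.0
    s += sum(user.liked_tags.get(t, 0) * 0.5 for t in video.hashtags)
    if video.language == user.language:
        s += 1.0
    s += watch_pct  # prior engagement with similar items, in [0, 1]
    return s

u = UserProfile(followed={"@chef"}, liked_tags={"cooking": 4}, language="en")
v = Video(creator="@chef", hashtags={"cooking", "travel"}, language="en")
print(score(v, u, watch_pct=0.8))  # 2.0 + 4*0.5 + 1.0 + 0.8 = 5.8
```

The point the sketch makes is structural: because each short video yields its own engagement signals, many videos per session mean many scored observations per user, which is exactly the data-generation advantage discussed below.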
Data generation instead comes from a clever tactic: presenting a shorter video format with higher user retention and attention. An average user spends about 10 minutes at a stretch, against roughly 3 minutes on Instagram, though still short of the 20-minute mark of Facebook. As the amount of data generated is proportional to user interaction, an individual within the same span of time interacts with far more videos on TikTok than on Facebook or YouTube; the metrics for each video therefore differ, cleverly generating datasets that are further utilised to create super-personalised recommendations. The app is characterised by its feed page, which is super-specialised contingent upon the following points of data collection:
• User information: the videos liked, accounts followed, comments posted and the content created.
• Video information: captions, the music used, the hashtags and the text
description.
• Device and account settings: language preference, country setting and device type.
• The history of likes, shares, comments and percentage watched is factored in while deciding the recommendations.
The content-creation cycle is shorter than YouTube's: the timing constraints lower the qualitative and raise the quantitative bar for content, enabling easy creation and sustained distribution to people of similar geographical setting, language preference, device type, preferred songs, followed accounts and created content. The extremely short videos, and the high number of videos watched over time, make it much easier to collate people into categories of preference, thereby creating super-specialised content for each individual. The very structure of TikTok content creation allows such categorisation of people, as it makes it inherently easier to specialise recommendations, which in the authors' opinion is a bigger concern than the alleged data privacy issue. Alongside this recommendation structure, the next point of concern is user retention: the efficacy of the recommendation algorithms can be seen in the fact that user retention on TikTok is over 10 minutes, three times that of Instagram, with an impressionable audience dedicating a high amount of time and being swayed by the content. The interesting point in TikTok's privacy policy is the kind of metadata and device information collected: app and file names and types, keystroke patterns or rhythms, and platform. For comparison, the data collected by Twitter includes:
• Information such as browser cookie IDs, mobile device IDs, hashed email addresses, demographic or interest data, and content viewed or actions taken on a website or app.
• Log Data, including information such as your IP address, browser type, operating system, the referring web page, pages visited, location, your mobile carrier, device information (including device and application IDs), search terms (including those not submitted as queries), and cookie information.
An observable difference between the kinds of data collected is that Twitter does not collect data such as keystrokes; moreover, as stated explicitly within Twitter's policy statement, a prima facie comparison between the two gives the impression that users of Twitter have greater control over the data they provide than users of TikTok. Observers have noted the following about TikTok:
• It sets up a local proxy server on the device for "transcoding media", which can be abused very easily as it has zero authentication; the local proxy is remotely configurable. (Zakdoffman, 2020)
• The TikTok CDN transferred data over HTTP rather than HTTPS to move sensitive
data across the internet, leaving it open to man-in-the-middle attacks, a very big concern as it completely exposes the data principal to unsophisticated attacks that can easily be executed. (Mysk, 2020)
• It reads the clipboard with every keystroke.
• The app's functions are obfuscated against inspection.
• Data is forwarded to Facebook, contrary to the GDPR.
• Besides conventional trackers (Google Analytics), the highly controversial method of device fingerprinting is performed to assign a unique hash value to the accessing browser. (Rufposten, 2020)
TikTok was evolving as a platform and could be compared to the 2010-era YouTube. The ease of use and super-specialised content, combined with an impressionable and compliant audience, gave rise to legitimate concerns that the platform could act as a propaganda machine highly effective at identifying the requisite audience. It eliminates the need for propaganda seekers to reach out across different sources, placing people of similar interests within the same vertical of super-specialised recommendations. At the point where the platform achieved maturity and gained social acceptance amongst a wider audience, it could have functioned as a platform perfected for propaganda; it must be noted, however, that TikTok had not yet reached such levels of usage. Regarding privacy, the threats were believed to be persistent, as several lax security standards and questionable methods were in use, such as fingerprinting and the use of HTTP instead of HTTPS. Reactions vary in degree across parties, with the strongest coming from American tech leaders, and European tech proponents being very concerned with TikTok's privacy issues and with the regulatory compliance regime of the CCP, which is touted to be an antithesis to data privacy.
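The device-fingerprinting technique referenced above (Rufposten, 2020) can be sketched as hashing a bundle of device attributes into a stable identifier; the attribute set below is illustrative, not what TikTok actually collects:

```python
import hashlib
import json

def browser_fingerprint(attributes: dict) -> str:
    """Derive a stable pseudo-identifier from browser/device attributes.

    Illustrative only: real fingerprinting scripts combine many more
    signals (canvas rendering, installed fonts, audio stack) than shown.
    """
    # Canonical JSON so the same attributes always hash identically,
    # regardless of the order in which they were collected.
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

fp = browser_fingerprint({
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64)",
    "screen": "1920x1080",
    "timezone": "Asia/Kolkata",
    "language": "en-IN",
})
print(fp[:16])  # same inputs produce the same value on every run
```

Because the hash is stable across visits, such an identifier can track a user without cookies, which is why the method is considered controversial.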
The CCP believes in the dual use of state assets: the civilian aspect of an enterprise should be coterminous with its military application. This has been openly observed in many state-sponsored companies and in attempts at espionage of intellectual property from US universities. The politics of the matter is developing in interesting ways, as the CCP shows no reciprocity towards the openness of democratic economies, as is evident from the Great Firewall of China and the exclusion of foreign competitors. Further developments must be followed keenly, as the Indian government appears keen on burning the political bridges that allow Chinese firms to operate freely within the nation.

How TikTok's economic involvement affected India's cyber economy

On 29 June 2020, the Ministry of Electronics and Information Technology banned 59 Chinese apps, including TikTok and ShareIt, using Section 69A of the IT Act. The reasons for the ban, as stated in the press release, include prejudice to the



sovereignty and integrity of India, the defence of India, the security of the state and public order, privacy and data security concerns, and bipartisan concerns (Delhi, PIB, 2020). This move by the Indian government is likely to affect India's cyber economy, as some of the banned applications were widely popular. The ban opens the market to new apps that could cater to the demand of the people. The ecosystem's ingenuity is said to slow down as a result of the ban, since many apps in India are inspired by the adept engineering of Chinese apps (Bureau, 2020). With a rush to fill the gap left in the market by TikTok following the ban, the innovation of new, original AI technology might be affected, in effect causing path dependence: the tendency to become committed to developing in particular ways, which leads to institutional inertia (Greener, 2017). The costs of this move need to be ascertained to gauge its effectiveness. Regulation should seek to minimize the sum of the damage costs from security breaches and the administrative costs; it otherwise risks interfering with properly functioning markets, introducing inefficiencies where none existed, leading to a downturn in the market and, in extreme cases, a market failure (The Law and Economics of Software Security, 2006). In the present scenario, as most people are likely to comply with the government's move, the administrative costs of this decision can be predicted to be comparatively low, as opposed to a ban on apps imposed without conflicts of this sort. This move by the Indian government has, however, dealt a blow to the confidence of Chinese investors and traders, due to which the Indian economy could remain subdued for a considerable period of time (ANI, 2020).
The imbalanced trading relationship between India and China was predicted to lead to tensions and conflicts, as Indian attempts to insulate domestic industries from Chinese competition are bound to clash with Beijing's appetite for a greater market for its exports (Mears, 2019). Thus, the banning of these apps could possibly lead to further conflict and economic inefficiencies, which can be resolved when reasonable bureaucratic reforms are put into place. The Ministry of IT has stated the misuse of apps for stealing and the unauthorized transmission of users' data to servers outside India as one of the reasons for the ban (Bureau, 2020). This sets a precedent of governmental action against privacy violations and data collection. Although this could hinder or slow the development of AI, the protection of user privacy is necessary to ensure compliance with the rights of individuals. In addition, such measures could improve consumer confidence in AI-based software and apps (PTI, 2020). An alternative and more effective way to counter privacy concerns would be the development of laws to prevent such breaches; this would address the privacy issues of all applications, not just Chinese ones. It can thus be observed that the primary motivating factor for this move is bipartisan concerns rather than data privacy concerns. The banning of these apps, specifically TikTok, has several implications. TikTok's



popularity can be credited to its artificial intelligence software, which tracks user preferences and activity to dictate the content watched (Barret, 2020). The ban opens up opportunities for Indian apps to use similar consumer-AI strategies to gain popularity. India has a large tech market which is growing at a rapid rate. According to the Economic Survey 2019, India is the global leader in monthly data consumption, with average consumption per subscriber per month increasing 157 times from 2014 to 9.8 GB in June 2019 (Warsia, 2020). Further, the number of smartphone users is expected to double by 2024 (Madhukalya, 2020). Firms are also investing heavily in enabling technologies due to the rise in demand for tech services, as phenomena like work from home promise to become the norm in light of the COVID-19 pandemic (Warsia, 2020). In the next two years, 50,000 startups are expected to develop in India, most of them tech-based (Warsia, 2020). India accounted for about 30.3% of TikTok downloads (Madhukalya, 2020). The vastness of India's tech economy, coupled with the popularity of the banned apps, shows the gap in the market created by the ban. With the main player in the market banned, Indian companies can use the opportunity to develop or promote products that use consumer AI. This could potentially lead to an increase in the use of consumer-AI software in the Indian market. Apps like Chingari and Mitron are already being promoted by appealing to former TikTok users. However, unless Indian apps reach a level at which they can compete internationally, they may not stand to gain from this move, and it may have no positive impact on the Indian players in the market. India's prohibition is said to be likely to give American companies and apps an edge over Chinese competitors in the global tech market (Bloomberg, 2020).
But as these apps have been banned only in India, Indian companies are likely to be the only ones to develop products catering to the demand of people in India, suited to their needs and tastes. Further, negative sentiment towards China's tech revolution can be observed around the world, with the US trying to cut off Huawei. India, having one of the largest populations in the world, can be a self-sustaining market. If other countries follow India's lead and establish a virtual embargo, Indian apps developed to compete in the spheres in which Chinese apps reigned could gain popularity owing to India's large population and head start, although the chances of this occurring are low. Although the banning of these popular apps may provide opportunities for Indian companies to integrate and develop AI and could create a positive impact, the decision also has implications for the media industry. TikTok is a globally used application and has become one of the most popular apps amongst the younger generation. Although its primary purpose is entertainment, it is also used as a medium for communication.

References


1. ANI. 2020. Beijing says app ban will hurt Chinese companies, India may suffer more economic losses than during Doklam crisis. The Times of India. [Online] 01 July 2020. [Cited: 03 July 2020.] https://timesofindia.indiatimes.com/india/beijing-says-app-ban-will-hurt-chinese-companies-india-may-suffer-more-economic-losses-than-during-doklam-crisis/articleshow/76736521.cms.
2. Barret, Eamon. 2020. A.I. in China: TikTok is just the beginning. Fortune. [Online] 20 January 2020. [Cited: 03 July 2020.] https://fortune.com/longform/tiktok-app-artificial-intelligence-addictive-bytedance-china/.
3. Basu, Naynima. 2020. Banning apps violates WTO rules, will affect employment of Indians: Chinese Embassy. ThePrint. [Online] 05 July 2020. theprint.in/diplomacy/banning-apps-violates-wto-rules-will-affect-employment-of-indians-chinese-embassy/451938/.
4. Bloomberg. 2020. India TikTok Ban Threatens China's Rise as Global Tech Power. Bloomberg Quint. [Online] 03 July 2020. [Cited: 04 July 2020.] https://www.bloombergquint.com/business/india-s-app-ban-threatens-china-s-rise-as-a-global-tech-power.
5. Bureau, ET. 2020. India bans 59 Chinese apps including TikTok, Helo, WeChat. The Economic Times. [Online] 03 July 2020. [Cited: 05 July 2020.] https://economictimes.indiatimes.com/tech/software/india-bans-59-chinese-apps-including-tiktok-helo-wechat/articleshow/76694814.cms.
6. Delhi, PIB. 2020. Government bans 59 mobile apps which are prejudicial to sovereignty and integrity of India, defence of India, security of state and public order. Press Information Bureau. [Online] 29 June 2020. [Cited: 25 June 2020.] pib.gov.in/PressReleaseDetailm.aspx?PRID=1635206.
7. Greener, Ian. 2017. Path Dependence. Encyclopedia Britannica. [Online] 01 September 2017. [Cited: 05 July 2020.] https://www.britannica.com/topic/path-dependence.
8. Grover, Gurshabad. 2020. Why the TikTok ban is worrying. Hindustan Times. [Online] 05 July 2020. https://www.hindustantimes.com/analysis/why-the-tiktok-ban-is-worrying/story-9Q7Gpv9t1Uxavd8hYJnjDO.html.
9. Madhukalya, Anwesha. 2020. '$100 billion? May be not!': How TikTok ban will impact ByteDance's valuation. Business Today. [Online] 01 July 2020. [Cited: 03 July 2020.] https://www.businesstoday.in/current/economy-politics/usd-100-billion-may-be-not-how-tiktok-ban-will-impact-bytedances-valuation/story/408449.html.
10. Mears, John. 2019. One Mountain, Two Tigers: An Emerging Sino-Indian Trade-Security Dilemma. 2019.
11. Mysk. 2020. TikTok vulnerability enables hackers to show users fake videos. Mysk. [Online] 05 July 2020. https://www.mysk.blog/2020/04/13/tiktok-vulnerability-enables-hackers-to-show-users-fake-videos/.
12. PTI. 2020. India banning Chinese apps effective way to impose costs on China for its actions at border: Expert. The Economic Times. [Online] 03 July 2020. [Cited: 06 July 2020.] https://economictimes.indiatimes.com/tech/software/india-banning-chinese-apps-effective-way-to-impose-costs-on-china-for-its-actions-at-border-expert/articleshow/76763938.cms.
13. Rufposten. 2020. Privacy Analysis of TikTok's App and Website. Rufposten. [Online] 05 July 2020. https://rufposten.de/blog/2019/12/05/privacy-analysis-of-tiktoks-app-and-website/.
14. S. Sonkar, S. Aggarwal. 2020. Where does India's ban on Chinese apps fit into the global trade debate? TheWire. [Online] 05 July 2020. https://thewire.in/tech/india-china-apps-global-trade-debate.
15. The Law and Economics of Software Security. Hahn, Robert W. and Layne-Farrar, Anne. 2006. Harv. J.L. & Pub. Pol'y, Vol. 30.
16. Warsia, Noor Fathima. 2020. Refuelling India's Tech Economy. Business World. [Online] 03 July 2020. [Cited: 04 July 2020.] http://www.businessworld.in/article/Refuelling-India-s-Tech-Economy/03-07-2020-293674/.
17. Zakdoffman. 2020. Delete this Chinese spyware now. Forbes. [Online] 05 July 2020. https://www.forbes.com/sites/zakdoffman/2020/07/01/anonymous-targets-tiktok-delete-this-chinese-spyware-now/#709190135ccf.



7

Indian Strategy on AI & Law, 2020 – Recommendations on AI Governance in the Indian Judicial System

Mehak Jain1

1 Research Intern (Former), Indian Society of Artificial Intelligence and Law
research@isail.in

Synopsis. Indian courts have more than 3.5 crore pending cases. With a dismal judge-to-population ratio, speedy justice remains a desperate hope for most people in the country, and the situation does not improve with the mere addition of a few more judges. Undertrials face extensive periods in jail, comprising more than half of the prison population without ever seeing the inside of a courtroom. In such a scenario, alternative methods of dispute resolution are needed to reshape the justice system as a whole. While ADR has been growing in the country, the onset of the pandemic has obstructed the delivery of justice at an alarming rate. Moreover, ADR alone is not sufficient to meet the demand, and a total overhaul of the justice system is necessary to secure not merely the right to justice, but the right to speedy justice. This highlights the need to incorporate AI into the justice system. AI has been shaping almost every sphere of our lives since its advent, and it is time the Indian justice system employed it so as to breathe a sigh of relief. Numerous countries around the world have successfully used AI to supplement their judiciaries and to provide effective and expedited trials. Concerns also surround the use of AI, mainly because of the black-box manner in which it produces results. This paper underscores why and how AI can be implemented in our justice system; moreover, it aims to explain how to use it judiciously, as merely an accessory rather than a replacement for human participation. By way of metaphors, the paper likens aspects of AI to human vocations and throws light on how it can be used in an additional capacity. It also makes a case for ODR, since it is the need of the hour (considering the pandemic) and can vastly improve the current situation.
Moreover, ODR has already been adopted by many countries, and more than a million cases have been successfully resolved without ever being put before a human judge. The Estonian model of e-filing of paperwork is also instructive. Chatbots can be used to make citizens aware of their rights and to guide them on the procedure to be followed. That AI is necessary is indubitable; the key concern is to ensure that it is implemented properly. It is an appropriate method for easing the burden on the judiciary, provided it works in a transparent manner and is therefore reliable, cost-effective, and speedy. No technology has ever failed India, be it the advent of computers or the transition to a digital economy. India remains at par with the leading nations of the world in all capacities, and AI is one



bright prospect that will boost major sectors of the country and, more importantly, shall be able to provide sustainable justice for all.

Introduction

Indian courts are overburdened with pending cases. There are 20 judges per 10 lakh people (LiveMint, 2019), and a speedy and effective trial system is nothing but a utopian dream. On average, Indian High Courts take up to four years to decide a case (Thakur, 2017). A study reveals appalling data, according to which a judge of an Indian High Court hears a case for less than five minutes on average (FirstPost, 2016). Furthermore, data suggests that 63% of the prisoners in Indian jails are undertrials (BusinessStandard, 2019). Undertrials spend years incarcerated without even getting a trial to ascertain whether they are guilty. Trials which should ideally be concluded within days stretch over decades. Losing faith under pernicious circumstances such as these is inevitable, and the justice system is deeply broken. Expediting the process and reducing pendency is the need of the hour, and achieving this objective in the most economical way would be the crème de la crème. This is where Artificial Intelligence (hereinafter 'AI') comes into play. Even the Chief Justice of India, S.A. Bobde, has acknowledged the pressing requirement and the significant role that these systems will play in transforming the face of the justice system. Artificial Intelligence is expected to affect all significant spheres of human life in the near future, and the judiciary is no exception. It can process enormous amounts of data in seconds and needs to be deployed not to replace judges but to assist them. A number of jurisdictions around the world have already incorporated AI into their judicial systems and have seen promising results. This paper aims to examine the feasibility of, and the extent to which, AI can be deployed to serve the judicial system.
An AI-based system specifically designed for a particular judicial task could prove very effective in assisting judges with decision-making, enabling them to achieve their targets smoothly and ensuring the speedy and efficient delivery of justice. By examining and analysing the steps taken by other countries to embed AI in their justice systems, and the prospective problems which might arise while dealing with these systems, the paper suggests ways in which AI can be used to reinforce not only the right to justice, but also the right to speedy and effective justice. The paper is divided into four sections. By analysing the implementation of AI in the justice systems of a panoply of countries, the author assesses their feasibility for incorporation within the Indian legal system. In the next section, some key concerns surrounding the dependability of AI are underscored to throw light on how narrowly AI should be included, which brings us to the third section. This section, by way of metaphors,



provides examples and ways in which AI can be used sparingly without compromising its efficiency or risking the displacement of human judges or lawyers. The last section lays out tailor-made solutions and suggestions as to how AI should finally be deployed in the justice system to repair the damage and ensure its smooth and expeditious functioning in the future.

Case Studies

United States of America.

The judicial system in the USA has incorporated AI systems and uses them extensively to study the likelihood of recidivism, to determine whether an accused or convict poses a flight risk, and to identify other behavioural patterns. The first of these is done with the help of risk-assessment algorithms. These algorithms take details of the defendant as input and give out a recidivism score, i.e. a single number predicting the likelihood that the defendant will reoffend. The judge then factors in this number while deciding the strictness of the verdict and determining the type of rehabilitation facility required for the defendant (McFadden, 2020). An example of such an algorithm is the Arnold Foundation algorithm, which has been rolled out in 21 states. It is used to predict the behaviour of the defendant in the pre-trial phase and works on the basis of pre-fed data. This method has been shown to be more accurate than a human judge. However, it cannot replace human discretion, since every case differs and there are always additional factors which need to be taken into account when delivering the final verdict. Moreover, this is where machine bias comes into play, as elaborated in the next section. This underscores the point that AI can never replace judges; it should be meticulously included only insofar as it provides them assistance.

European Union. The Estonian judicial system has employed AI systems to maximise efficiency and remodel the face of its judicial system as a whole. After the successful integration of the e-Filing system, Estonia is now working towards developing an AI-based 'robot judge' to decide and render judgments in the Small Claims Court. Led by the country's chief data officer, a 28-year-old graduate student, the project aims to resolve petty disputes (with claims valued below 7,000 euros) while leaving complex matters to humans.
The parties shall upload relevant information and material documents to the system, and a judgment shall be given by the 'robot judge'. This judgment is open to appeal, and the appeal shall be adjudicated by a human judge (Niller, 2019). On the face of it, this might sound impracticable and idealistic, but Estonia's highly digitalised government makes it possible, since Estonia does not have to start from scratch. Necessary safeguards have also been put in place: citizens can see who has viewed their information by logging into the government's digital portal. Eesti Oigusburoo, a Tallinn-based law firm, provides free legal aid to citizens via



a chatbot and generates simple legal documents as required (Vasdani, 2020). It also aims to match clients with lawyers depending on their circumstances and is looking to expand to Los Angeles and Warsaw.

China. In 2019, China brought the vision of AI-powered judges to reality. Beijing introduced Xinhua (Liangyu, 2020), an artificial female judge with the face, voice, and expressions of a human female judge, in the Beijing Judicial Service. Xinhua is claimed to be the 'first of its kind' and primarily assists the judicial system by dealing with basic, repetitive casework. 'She' handles litigation reception and online guidance to make the judicial system more inclusive, efficient, and wide-reaching for the citizens of the city. Xinhua is therefore an ideal model of how AI can be used to make the process more inclusive and to assist rather than decide the verdict. AI is also increasingly used in China to surveil and scan online media presence, social-network comments, and messages to gather evidence against potential offenders. Moreover, facial recognition software is increasingly being optimised to identify and convict offenders.

Australia. The Split-Up system, combining rule-based reasoning and neural networks, has been developed for Australian family law courts to predict outcomes. The system is used by judges to supplement their decision-making, especially in cases involving marriage and divorce. One way it does so is by highlighting which material assets must be included in the settlement and establishing the percentage each partner should receive from the common pool. The system takes 94 factors into account in calculating the percentage and making its analysis. It seeks to be transparent and therefore uses Toulmin argument structures to show how it reached its decision (Wu, 2019).
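The factor-weighting idea behind systems like Split-Up can be sketched as follows. The factor names, scores, and weights below are illustrative assumptions, not the actual 94 factors the system uses; Split-Up itself combines rule-based reasoning and neural networks rather than a simple weighted average.

```python
def settlement_split(factors: dict, weights: dict) -> float:
    """Return partner A's suggested share of the common pool as a percentage.

    Each factor is scored in [0, 1] in favour of partner A; the weighted
    average of the scores is scaled to a percentage.
    """
    total_weight = sum(weights.values())
    score = sum(weights[f] * factors[f] for f in weights) / total_weight
    return round(100 * score, 1)

# Hypothetical factor scores and weights for a single case.
factors = {"contribution": 0.6, "future_needs": 0.5, "marriage_length": 0.7}
weights = {"contribution": 3.0, "future_needs": 2.0, "marriage_length": 1.0}

share_a = settlement_split(factors, weights)  # -> 58.3 (partner B gets the rest)
print(share_a, round(100 - share_a, 1))
```

The transparency requirement the text mentions maps naturally onto such a structure: each factor's weight and score can be shown to the parties as part of the argument for the final percentage.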

Concerns over AI Adjudication

AI adjudication gives rise to four main types of concern: incomprehensibility, datafication, disillusionment, and alienation. Incomprehensibility here relates to understanding, which is a distinctive aspect of human adjudication. Likewise, datafication relates to adaptation, disillusionment to trust, and alienation to participation. By emphasising the concerns that come bound with the ideal of an integrated AI justice system, the author wants to underline that AI comes with its blemishes and thus needs to be incorporated wisely and cautiously.

Incomprehensibility. One of the most jarring concerns about AI decision-making is that it might function in ways whose results are too complex for humans to understand (Metz, 2018). AI-based systems often use



machine learning techniques to arrive at a conclusion; machine learning here often means deep learning techniques that lack coherent and explicit reasoning for the decisions made. Simply put, it might be difficult to provide the basis and reasoning used by a system to produce its output, i.e. the verdict decided by the AI, since it did so on the basis of intricate algorithms which may be so complex that they end up being incomprehensible (The Intuitive Appeal of Explainable Machines, 2018). This gives rise to multiple concerns. First, the opaqueness and incomprehensibility of AI can reduce the accountability of the judiciary to the public. Second, it could pose legitimacy or fairness problems for the individuals subjected to AI adjudication. Third, human judges might obfuscate the grounds and limits of their precedential decisions today in order to preserve room for jurisprudential maneuvering tomorrow (Narrowing Precedent in the Supreme Court, 2014) (Judicialization and the Construction of Governance, 2002). While AI adjudication might one day replicate the way human judges function, it poses a problem now, as AI adjudicators cannot match the level of transparency and accountability that human judges provide.

Datafication. Datafication, or the emphasis on available data and its uses, might have an unintentional and undue influence on the legal system. Reliance on existing data might reinforce bias against particular groups and shield the justice system from legitimate criticism. For example, criminal data reflects the deep-rooted, inherent, pre-existing racial bias in law enforcement (Digitalizing the Carceral State, 2019). This argument can likewise be stretched to the Indian subcontinent, where the existing social structure entails analogous problems. Therefore, if AI systems rely on such data, they might end up exacerbating existing conditions and solidifying existing biases.
Moreover, extreme reliance on data might undermine other moral aspects of a case decided by human judges, such as determining the intent behind an offence, which might be perceived as a hindrance in a pursuit of justice based entirely on data, algorithms, and calculations.

Disillusionment. The development and use of AI adjudication is already prompting sceptical reconsideration of existing practices (Angwin, 2016). Human judges are not perfect and come with many serious and often ignored deficiencies; for example, a judge might have cognitive biases, prejudices, and narcissistic behaviours (Mocan, 2018). While AI adjudication has its own flaws, it might cast light on the methods used by human judges and lead to disillusionment, affecting the judiciary's perceived productivity, effectiveness, value, stature, and democratic legitimacy. This would erode public trust in the legal system. Disillusionment demolishes a previously established social construct which carries with it hope and faith, and is desirable only when it reflects a greater appreciation of truth (Developing Artificially Intelligent Justice, 2019).



Alienation. AI adjudication and human participation are inversely related. An increase in AI adjudication will inevitably diminish human participation in legal matters, at least in some facets. This might lead to the alienation of humans from the legal system, possibly to such an extent that a fully autonomous legal system without any human involvement comes into existence (Artificially Intelligent Law, 2019). Even though this vision is dystopian, more moderate and realistic scenarios might arise. Most tasks, such as drafting legal documents, learning about procedure, assessing matters, and resolving disputes, might shift en masse towards AI adjudication. The role of judges and lawyers would shrink, while that of computer technicians, scientists, corporations, and the like would grow. The most worrisome feature of alienation is that it would lead to a subversion of public wisdom and oversight. Even if the other concerns mentioned above are overcome, alienation threatens persisting modes of public accountability and civic duty. All these concerns underscore the necessity of keeping AI systems in the background and leaving humans to perform the majority of tasks. AI-based systems should not take the place of judges and lawyers in the legal system; they must act as tools in the hands of the legal fraternity, easing its burden and making it more efficient. This brings us to the next section of the paper, which highlights how AI-based systems must be used to assist the justice system, not to take its place entirely.

‘AI should help rather than replace’: On AI complementation of the justice system

Speaking at the 79th Foundation Day celebration of the Income Tax Appellate Tribunal (ITAT), the Chief Justice of India said that the use of technology in judicial functioning is a fascinating area and a significant breakthrough. However, he emphasised that it must not be seen as a replacement for the human discretion exercised in the justice system (PTI, 2020). Kerr and Mathen emphasise the necessity of designing AI as an assistive tool:

Undoubtedly, if and when future lawyers or (perhaps one day) judges actually begin to delegate significant legal tasks or decision making to AIs, the profession will require that these AIs are utilized merely as assistive tools that help lawyers or judges carry

out their responsibilities, not as replacement for them (Kerr).

Librarian. What does a librarian do? He or she researches and furnishes the requested literature. In the same way, AI can be used to accumulate all the relevant information pertaining to a case in a matter of seconds. The level



of abstraction may vary: the command might be as simple as entering a case name, or as broad as finding all existing precedents on a particular issue. Legal research can be vastly improved by the use of AI. Legal research is not merely finding related precedents and case laws; expertise in legal research resides in the connections and links drawn between individual pieces of information (Changing Order: Replication and Induction in Scientific Practice, 1992). Turning up information is insufficient until it is structured and analysed properly. The benefit of using AI in this way is two-fold: it saves the trouble and time of scanning online databases and commentaries for relevant information, and it enhances transparency. Before the advent of the internet, judges relied on librarians and interns to carry out their research work; the judges did not know how these people arrived at their results, and could enquire only on a person-to-person basis. AI can improve transparency insofar as its algorithms can be examined for bias. Thus, by delegating research work to AI systems, the complex task of determining the verdict can be left to human judges, ensuring both time conservation and increased transparency.

Attorney-General. The Attorney-General of India is responsible for representing the Government of India and acting as its advisor in legal matters referred to them. They appear on behalf of the Government of India and propose legal solutions to the matters referred to them. By making this analogy, the author intends to reiterate the point of co-robotics, since in this scenario the AI and the judge work on the same case. It differs from the librarian analogy in that it gives the general public transparency about how the AI system is being used. The judge may accept the reasoning given by the AI or dissent from it.
But it shall be an assistive method providing additional and counter-factual information which the judge may or may not take into account. However, this approach has its flip side (Artificial Intelligence in Court, 2018). It is imaginable that social dynamics might influence the judge's use of AI submissions: judges who tend to rely on AI reasoning might be viewed as inferior, while judges who continue to dissent from its opinion might be undermined as too authoritarian and as disregarding the wisdom of such advanced technology. This remains a slippery slope which must be meticulously analysed before arriving at a final decision.

Official with limited judicial powers. This analogy is similar to the Estonian government's vision of a 'robot judge'. Under this approach, a matter is first referred to an AI-based system, which gives a verdict. If the verdict is appealed, it is automatically invalidated and the matter is heard anew by a human judge. If the human judge and the AI official disagree over a case, it can only be due to one of three reasons: the AI made a mistake, the human judge made a mistake, or


Artificial Intelligence and Policy in India

91

the case allows more than one correct interpretation (Baker, 2011). Therefore, a distinction needs to be made between plain cases and hard cases. Plain cases would allow the AI, functioning as a black box, to decide cases on its own. To balance the lack of explanation, a subsequent decision by a human judge would invalidate the AI's decision. When it comes to hard cases, however, there may be more than one correct solution, as there might be multiple interpretations, and therefore the reasoning matters the most. Regarding hard cases, a human-generated decision that provides justification is worth more than a machine-generated decision that lacks explanation, even if the latter is more likely to be correct.

Solutions and Suggestions
Over the past few decades, with the advent of AI, distinct use cases have been proposed. They can broadly be classified into two categories: one which is litigant-focused (to help them figure out the system and make informed and stronger claims), and one which is court-focused (to increase productivity and transparency). This subsection makes recommendations and suggests possible solutions as to how AI can be incorporated into the justice system in the Indian context by analysing pieces of literature and what they propose. Guide the litigant and help them figure out how the law applies to them. It is proposed that AI can help a litigant understand whether a given case is in accordance with the law, and help them conduct effective and targeted research as to how it applies to them (Cornell, 2018). Determining the latter, i.e. the question of how the law applies to them, is a complex task since the question of law might be straightforward or qualitative. If a question is simple, i.e. can be answered by algorithmic calculation (e.g. whether a person broke a traffic rule), it poses little technical challenge (Tito, 2017). If a question is complex, such as determining the intent of a person behind an offence, mere algorithmic programming will not suffice, but it might be addressed with the help of neural networks and deep learning techniques. It is also proposed that chatbots be used to answer queries of the litigants and to make them more aware of their rights (Cook, 2018). Determine the credibility of evidence and quality of the claim. Some exploratory work has highlighted how AI could automatically classify and break down the chronological events in a case, in order that it could be understood computationally (Legal Docket-Entry Classification: Where Machine Learning stumbles).
Machine learning could thereby find patterns in claims to indicate whether something has been argued well, whether the law supports it, and evaluate it versus competing claims. Provide free legal aid by substituting lawyers. This vision of AI intends to develop the AI-based justice system holistically by



offering guidance and helping the clients understand their rights and the available forms of relief they can claim, informing them about the procedure, helping them to draft necessary documents, etc. (Advisory Systems for Pro Se Litigants, 2001). Chatbots can be used for achieving this aim, providing instructions about the how and what, i.e. the procedure to follow and the documents required, and making the complainant aware of his/her rights. Making a case for online dispute resolution. Online Dispute Resolution ('ODR') is a method of dispute resolution which uses Alternative Dispute Resolution ('ADR') techniques such as mediation, negotiation, and arbitration to settle small-claims and medium-claims disputes online. While the Indian judiciary has taken to online filing and hearing of cases in light of the pandemic, ODR is a method which needs to stay for good to ease the burden on the judiciary owing to the enormous volume of cases. Globally, ODR has witnessed a boom in its usage and is being increasingly used by countries such as the USA, China, some EU nations, etc. Millions of disputes have been resolved using this mechanism without any case having to be filed in a traditional court of law. ODR is predicted not only to ease the burden of the courts, but also to provide thousands of job opportunities to arbitrators and lawyers (Singh, 2020). It is a tried and tested method and India should be ready to incorporate it into its justice system. The ease of use of ODR is what has made it so popular and amenable to the mainstream. It can be classified into two types: asynchronous ODR, where the parties communicate via emails and other such applications (i.e. not in real time), and synchronous ODR, which provides real-time communication, such as virtual meetings in chatrooms.
The Centre for Alternate Dispute Resolution Excellence (CADRE) and SAMA are examples in the Indian context. The objective is to contain and resolve disputes using analytical and digital techniques. ODR is the need of the hour as it is an extremely viable option which promises efficiency, affordability, transparency and quick disposal of cases. In 2019, India spent approximately $6 billion on legal expenses, an amount which can be vastly cut down by employing ODR techniques as a way of dispute resolution. COVID-19 further underscored the necessity of such a system. The advantages of ODR are manifold and it is indisputable that ODR needs to be employed. However, what is required is a framework for its successful deployment. Specific types of cases, such as motor vehicle accidents, service disputes, insurance disputes, etc., should be sieved and sorted to be resolved by ODR. ODR should aim to solve 'high-volume, low-value' cases, replicating Hong Kong's model of ODR, so that the burden on courts is eased. Moreover, infrastructure and technology need to be developed in a way that reaches the masses. NALSA and state legal services authorities should be encouraged to



use and engage with ODR institutions. Lok Adalats should also be motivated to opt for it. Adopting ODR will improve India's ranking in the ease of doing business index, enhance the enforceability of contracts, give a breath of respite to the judiciary, and democratize access to dispute resolution (NITI Aayog, 2020).
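The distinction drawn earlier in this section between questions answerable by pure "algorithmic calculation" (e.g. whether a person broke a traffic rule) and qualitative questions such as intent can be illustrated with a trivial, purely hypothetical sketch; the rule, function name and values below are invented for illustration only:

```python
# Hypothetical illustration of a "simple" legal question reducible to
# algorithmic calculation: a speeding check against a fixed limit.
# No interpretation or assessment of intent is involved, which is why
# such questions pose little technical challenge for automation.
def speeding_violation(recorded_speed_kmph: float, speed_limit_kmph: float) -> bool:
    """Return True if the recorded speed exceeds the posted limit."""
    return recorded_speed_kmph > speed_limit_kmph

print(speeding_violation(72, 60))  # prints True
print(speeding_violation(55, 60))  # prints False
```

A question like determining mens rea, by contrast, cannot be reduced to such a mechanical comparison, which is why the text suggests neural networks and deep learning for the qualitative cases.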

Conclusions
Striking a balance between substitution of the justice system by AI and AI systems acting as an accessory to aid and ease the burden of the judiciary is of utmost importance. Successful incorporation by a number of jurisdictions around the world shows that the task is not Herculean and is achievable, provided it is implemented properly. India has proven itself to be on par with the leading nations of the world, be it with respect to its aeronautical capabilities or the development of nuclear power, and it is time India utilised AI technology to resolve its existing troubles. Employment of AI brings with it a number of benefits, but most of all, it eases the burden on the judiciary, making it function more efficiently and making access to timely justice a reality rather than an ideal. When this is achieved, every person shall be entitled to the right to a fair, accessible, and speedy trial, and the cause of ensuring effective and sustainable justice shall be successfully achieved.

Recommendations in Brief
• Chief Justice of India, S.A. Bobde, has emphasised the need for developing AI for the judiciary while outlining the number of pending cases in different courts. He clarified that AI shall not replace human discretion and shall act as a mere accessory to help ease the burden on the courts from the enormous volume of pending cases. • All these recommendations have one aim: to make the existing legal process more efficient and to provide relief to the judiciary from dealing with a huge volume of cases. There are many tasks which can easily be done by AI, leaving the complex and creative tasks to humans. The ideal vision is one where AI simplifies the work for the human and they work in harmony to achieve the aim of speedy justice for all. • AI has been successfully incorporated in the justice systems of a number of countries, including the USA, Japan, South Korea, China, etc. These models can be viewed as case studies and inspiration can be taken from them. • Estonia is one country which is leading the AI revolution. The development of the e-File system was started by the Government of Estonia in 2005. As soon as citizens have securely authenticated themselves and accessed the e-justice platform, they can submit any kind of case online. The data is shared between the institutions linked to the case, and courts can start processing the related documents. These interactions are based on the once-only policy, which means that duplicates of



information are not allowed in state databases. The e-File platform also allows courts to send citizens different documents, while notifications assure judges that all files have been successfully delivered. Every document is time-stamped and contains a secure electronic signature. Furthermore, classified information can be encrypted by the courts to make sure that no third party is able to access the data. This has helped the Estonian e-justice model gain the reputation of a reliable and trustworthy array of services. The e-justice system shows how entire countries can benefit from electronic solutions and, contrary to popular belief, it is an economical option as well, as the Estonian court system runs on one of the lowest per capita budgets in the entire European Union. (Nallapati, 2008) The judicial system in the USA is using AI to study the likelihood of recidivism and to determine whether an accused/convict poses a flight risk. This is done with the help of risk-assessment algorithms. These algorithms take details of the defendant as input and give out a recidivism score, i.e. a single number predicting the likelihood that they might commit an offence again. The judge then factors in this number while deciding the strictness of the verdict and in determining the type of rehabilitation facility required for the defendant. An example of such an algorithm is the Arnold Foundation algorithm, which has been rolled out in 21 states. It is used to predict the behaviour of the defendant in the pre-trial phase and works on the basis of pre-fed data. This method has been reported to be more accurate than a human judge. However, it is pertinent to note that these scores should be seen as additional factors while determining the verdict and should not be depended upon entirely. (McFadden, 2020) South Korean law firm Yulchon has developed technology that provides low-cost compliance tools, including apps, for clients.
The firm is also encouraging its lawyers to create new solutions themselves. This can be incorporated in India, and might also act as a tool to aid the cause of providing free legal aid. AI can also be used to make legal research more efficient. Startups like CaseMine and NearLaw are trying to reinvent legal research by using VisualSearch and the CaseRanking algorithm to surface the most relevant cases quickly. The algorithm sorts and ranks over 300,000 case records across 20+ courts and tribunals to produce the top 50 cases. This approach identifies the key 0.01% of cases relevant to the user. Online Dispute Resolution (ODR) is another recommendation which is inarguably the need of the hour. ODR is a method of dispute resolution which uses ADR techniques such as mediation, negotiation, and arbitration to settle small-claims and medium-claims disputes online. It has had notable success in many parts of the world and has resolved more than a million disputes without the need to present the matters before a human judge. India has recognised its importance, but what is necessary is that it is implemented properly and fully. For that, Lok Adalats, NALSA and other state legal services authorities could be asked to tie up with and encourage ODR platforms such as SAMA and CADRE. Chatbots can be used to make citizens aware of their rights and guide them through the courts' procedure. Moreover, AI can also be used to draft legal documents since



their format is pre-determined, and these systems can be trained to scan relevant information and use it to furnish the required documents. • AI can be used to determine the credibility of evidence and analyse the quality of the claim. Some exploratory work has highlighted how AI could automatically classify and break down the chronological events in a case, in order that it could be understood computationally. Machine learning could thereby find patterns in claims to indicate whether something has been argued well, whether the law supports it, and evaluate it against competing claims.
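As an illustration of the risk-assessment scoring described in the recommendations above, a minimal sketch follows. The features, weights and logistic form here are entirely hypothetical; actual tools such as the Arnold Foundation algorithm rely on validated factors whose details are not fully public:

```python
# Purely illustrative sketch of how a pretrial risk-assessment tool maps
# defendant attributes to a single recidivism score. All features and
# weights below are invented for illustration; they do not reproduce any
# real instrument.
import math

def recidivism_risk_score(prior_convictions: int,
                          age_at_arrest: int,
                          failed_to_appear_before: bool) -> float:
    """Return a score in (0, 1); higher means higher predicted risk."""
    # Linear combination of features (weights are hypothetical).
    z = (0.45 * prior_convictions
         - 0.03 * (age_at_arrest - 18)
         + 0.80 * int(failed_to_appear_before)
         - 1.20)  # intercept
    # Logistic link squashes z into the interval (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

score = recidivism_risk_score(prior_convictions=2, age_at_arrest=24,
                              failed_to_appear_before=False)
print(round(score, 2))  # prints 0.38
```

As the text stresses, such a number is at most one additional factor for the judge; it does not explain itself and should never be relied upon entirely.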

References
1. COLLINS, Harry. Changing Order: Replication and Induction in Scientific Practice. University of Chicago Press, 1992. 2. BRANTING, L. Karl. Advisory Systems for Pro Se Litigants. 2001. 3. ANGWIN, Julia. Machine Bias [online]. 23 May 2016. Available at: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing 4. BUOCZ, Thomas. Artificial Intelligence in Court [online]. 2018, 2(1). Available at: https://static1.squarespace.com/static/59db92336f4ca35190c650a5/t/5ad9da5f70a6adf9d3ee842c/1524226655876/Artificial+Intelligence+in+Court.pdf 5. SOLUM, Lawrence. Artificially Intelligent Law [online]. 2019. Available at: https://ssrn.com/abstract=3337696 6. BAKER, Stephen. How Could IBM's Watson Think That Toronto Is a U.S. City? [online]. 16 February 2011. Available at: https://www.huffpost.com/entry/how-could-ibms-watson-thi_b_823867 7. Press Trust of India. 67 per cent of prisoners in India's jails undertrials [online]. 12 April 2019. Available at: https://www.business-standard.com/article/pti-stories/67-per-cent-of-prisoners-in-india-s-jails-undertrials-119041200857_1.html 8. COOK, Alex. AI In Legal And Access To Justice [online]. 10 July 2018. Available at: https://www.legalcurrent.com/ai-in-legal-and-access-to-justice/ 9. CORNELL, Janet G. Using Artificial Intelligence and Big Data to Develop Tools Used in Courts [online]. 16 December 2018. Available at: https://courtleader.net/2018/12/16/using-artificial-intelligence-and-big-data-to-develop-tools-used-in-courts/ 10. RE, Richard M. and SOLOW-NIEDERMAN, Alicia. Developing Artificially Intelligent Justice. Stanford Technology Law Review. May 2019 (242). 11. ROBERTS, D. Digitalizing the Carceral State. Harvard Law Review [online]. 2019. Available at: https://harvardlawreview.org/wp-content/uploads/2019/04/1695-1728_Online.pdf 12. FirstPost.
Five minutes: That's how long a judge in an Indian high court spends hearing a case, reveals study [online]. 7 April 2016. Available at: https://www.firstpost.com/india/five-minutes-thats-how-long-a-judge-in-an-indian-high-court-spends-hearing-a-case-reveals-study-2717350.html 13. STONE SWEET, Alec. Judicialization and the Construction of Governance. Comparative Political Studies. 1999. (31). 147-84. 14. NALLAPATI, Ramesh and MANNING, Christopher D. Legal Docket-Entry Classification: Where Machine Learning stumbles [online]. Stanford University. 2008. Available at: https://www-nlp.stanford.edu/pubs/D08-1046.pdf


15. Liangyu. Beijing Internet court launches AI judge [online]. 28 June 2019. Available at: http://www.chinadaily.com.cn/a/201906/28/WS5d156cada3103dbf1432ac74.html 16. PTI. There are 20 judges per 10 lakh people in India: Govt [online]. 6 February 2019. Available at: https://www.livemint.com/politics/news/there-are-20-judges-per-10-lakh-people-in-india-govt-1549457164121.html 17. MCFADDEN, Christofer. Can AI Be More Efficient Than People in the Judicial System? [online]. 4 January 2020. Available at: https://interestingengineering.com/can-ai-be-more-efficient-than-people-in-the-judicial-system 18. METZ, Cade. Mark Zuckerberg, Elon Musk and the Feud Over Killer Robots [online]. 9 June 2018. Available at: https://www.nytimes.com/2018/06/09/technology/elon-musk-mark-zuckerberg-artificial-intelligence.html 19. EREN, Ozkan and MOCAN, Naci. Emotional judges and unlucky juveniles. American Economic Journal: Applied Economics [online]. 2018, 10(3). Available at: https://www.aeaweb.org/articles?id=10.1257/app.20160390 20. RE, Richard M. Narrowing Precedent in the Supreme Court. Columbia Law Review [online]. 2014, 114(7). Available at: https://columbialawreview.org/wp-content/uploads/2016/04/Re.pdf 21. SINGH, Karan. India: Online Dispute Resolution (ODR): A Positive Contrivance To Justice Post Covid19 [online]. 17 May 2020. Available at: https://www.mondaq.com/india/arbitration-dispute-resolution/935022/online-dispute-resolution-odr-a-positive-contrivance-to-justice-post-covid-19 22. THAKUR, Pradeep. Some HCs take average of 4 years per case [online]. 18 December 2017. Available at: https://epaper.timesgroup.com/Olive/ODN/TimesOfIndia/shared/ShowArticle.aspx?doc=TOIDEL%2F2017%2F12%2F18&entity=Ar01203&sk=0060305B&mode=text# 23. SELBST, Andrew and others. The Intuitive Appeal of Explainable Machines. Fordham Law Review [online]. 2018, 87(3). Available at: https://ir.lawnet.fordham.edu/flr/vol87/iss3/11/ 24. TITO, Joel.
How AI can improve access to justice [online]. 23 October 2017. Available at: https://www.centreforpublicimpact.org/joel-tito-ai-justice/ 25. VASDANI, Tara. From Estonian AI judges to robot mediators in Canada, U.K. [online]. June 2019. Available at: https://www.lexisnexis.ca/en-ca/ihc/2019-06/from-estonian-ai-judges-to-robot-mediators-in-canada-uk.page 26. WU, Jonah. AI Goes to Court: The Growing Landscape of AI for Access to Justice [online]. 6 August 2019. Available at: https://medium.com/legal-design-and-innovation/ai-goes-to-court-the-growing-landscape-of-ai-for-access-to-justice-3f58aca4306f



{ Discussion Papers }




8 International Algorithmic Law: Emergence and the Indications of Jus Cogens Framework and Politics

Abhivardhan¹

¹ Chairperson & Managing Trustee, Indian Society of Artificial Intelligence and Law, India. abhivardhan@isail.in

Abstract. The nature of cyberspace is currently narrowing from its hard power definitions to more delicate and rapid soft power notions in international law and relations. From the realpolitik assertions of cybersecurity and big data in a neoliberal economic world, the concept of international algorithmic law is proposed to emerge, considering two important trends in the field of international law and the politics of cyberspace and digital economics: the evolution of the politics and economics involving artificial intelligence ethics at a global level, and the transition of the rules-based international order from neoliberalism and unipolarity/bipolarity to neorealism and multipolarity. This Discussion Paper, as a part of the Indian Strategy for AI & Law, 2020, therefore analyses the proposed field of International Algorithmic Law and its political basis with some key cases. Further propositions include the supposed sources of international law based on the existential and structural purpose of the field; the diplomatic considerations and strategic constraints that constitute the extensive political utility of the field; and analyses of certain factors proposed in connection with IAL, i.e., (1) Market Economics and Neoliberalism; (2) the ideology of Multilateralism; and (3) the Synergy of Algorithmic Diplomacy with Cybersecurity and its Politics in line with International Law related to Cyber Warfare. Conclusions are reflective.

Keywords: Artificial Intelligence, Multilateralism, Algorithmic Policing, Multipolarity, Market Economics, International Law.

Introduction
International Law in the fourth stage of globalization is a dynamic and disproportionately generalized field in the realm of politics, law and society. Since 1990, its European-American foundations have transformed and changed (The Economist, 2020; Myers, 2014; Mandatory Multilateralism, 2019). The immense transformations in the international law model provided avenues to



the international relations conspiracy theories, realpolitik assumptions and diplomatic mentalities. Some of the common sovereign and non-sovereign or anti-sovereign mentalities in international law and diplomacy developed over this period (Mandatory Multilateralism, 2019) and have gradually been receding from the domain of international relations. For example, the Shia-Sunni mentality of the West towards Iraq, Iran and the West Asian countries, India's pacifist mentality towards Pakistan (South Asia Fast Track, 2018; Rajopadhye, 2020), Russia's apathy towards European institutions and even the attitude of NATO towards the Balkans have gradually changed so far. There will still be regional, transnational, transboundary and perhaps cyberspatial and planetary issues to be dealt with in the realm of international law, and this will be a gradual process of its own. The propositions made through this work are that (1) the international community is on the verge of developing the field of International Algorithmic Law (hereinafter IAL) and Politics, which was bound to emerge with the role and rise of Artificial Intelligence; (2) the field of IAL will attract major strategic and converging yet narrower avenues in the jus cogens framework attributed to customary international law; and (3) IAL will deeply influence and guide the ideology-policy dichotomy of the multilateral system that the rules-based international order currently holds.

What is International Algorithmic Law?
This work proposes the emergence of this new field of International Law, especially within the branch of International Cyber Law or International Law and Technology, or perhaps as a hybrid offspring of both. Although this field might have to attract tertiary primacy of fields like International Telecommunication Law, International Humanitarian Law, International Trade and Economic Law and Private International Law, fields like International Human Rights and Culture Laws and their subsidiaries will have a triggering role, as usual in a multilateral system. However, the multilateral system may not exist in the same way amidst ethnocentrism, transient populism, neorealism and the political thawing of the cold war diplomatic mentality, which has shown vicious signs amidst the COVID-19 pandemic worldwide. Thus, as proposed, the field can be summarized as follows: The field of International Law which focuses on diplomatic, individual and economic transactions based on legal affairs and issues related to the procurement, infrastructure and development of algorithms, amidst the assumption that data-centric cyber/digital sovereignty is central to the transactions and to the norm-based legitimacy of the transactions, is International Algorithmic Law.



Although the intention here is not to render any absolute definition, it is nonetheless proposed that a reasonable definition, similar or implicitly congruent to the above, may arise. The field of IAL itself, like any other field of international law, is central to the understanding of (1) the role of intergovernmental bodies and transnational actors; (2) the unusual equilibrium of the coherence and inconsistency relationship between realpolitik visions and legal morality in international relations; and (3) the political parallelism behind the economics and politics of the growth of algorithms. To avoid any generalizations, let us estimate how algorithms influence international relations and what factors can come into being that may endorse the creation or acknowledgment of a new discipline of international law. There may be academic considerations as to the viability, weight and sensibility of an international law field, as beyond diplomacy, politics and strategy, the role of academic scholarship and research will be central to the structure of policy studies in international relations.

How Algorithms Influence International Relations

In definitive contexts, the realm of Artificial Intelligence is central to the utilitarian recourse of the term 'algorithm'. Since we know that in the fourth stage of globalization we need to render some basic constructs that create and reinvent new legal systems, we have to assess the current international legal outlooks that shape multilateralism, even if it is contended that multilateralism is losing its relevance (The Economist, 2020). Generally, the theories of market economics/neoliberalism coupled with globalization influence our core policies in international diplomacy. From climate change to transitional justice, neoliberalism and globalization will be at the center of the status quo of the global order through some all-comprehensive understanding. Nevertheless, we have to also admit that the multilateral system itself lacks self-rejuvenation and requires incremental and slow rejuvenation. Partnerships and plurilateral engagements always complement multilateral regimes, and since we still do not have multilateral approaches in algorithmic diplomacy with regards to AI, we can still rely on public-private partnerships and B2B engagements with effective government checks and anthropological methods. In a series of preliminary confidence-building measures in the multilateral system, the United Nations Interregional Crime and Justice Research Institute (UNICRI) in early 2015 established a Centre for Research in AI and Robotics (UNICRI), the International Telecommunication Union organizes the AI for Global Good Summit every year, and various governmental and expert meetings on the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects (hereinafter the CCW) have been happening for years. Even the EU and the Council of Europe (Council of Europe, 2020; European



Commission, 2020) have proposed papers on issues related to algorithmic bias, algorithmic accountability, the life cycle of ML systems, automated discrimination and the role of AI in catering to the Sustainable Development Goals (Cummings, et al., 2018). There have been significant challenges faced by multilateral bodies and experts: automation, and activities beyond automation, might not be able to prevent discrimination or a failure to adhere to the individual approach to human rights (Desierto, 2020); legal principles competent within the proceduralist notions of natural justice, such as fairness and transparency, cannot be automated (Wachter, et al., 2020); and ideological notions such as neo-luddism have influenced the AI-innovation discourse (Appleyard, 2014). Even so, we must not forget that the influence of algorithms will no longer be insignificant. It is therefore proposed that algorithms, in due interpretation, are to be assumed as the fundamental and substantive units of AI (Gupta, 2019; European Commission, 2020, pp. 24-26), considering the role of algorithms in deciding and shaping the life cycle of an AI-based system. However, technical bifurcations are also essential in order to ensure that, despite the unitary nature of algorithmic activities, we are able to distinguish carefully between AI systems, analytics-based infrastructure and others. In due inference from the approach adopted by the European Commission, we can also adopt the privacy by default & design principle carefully, and considerably understand the 'many hands' problem, or the role of multiple actors, because beyond issues of legality and involvement, the bigger emerging problem is how the operative liability and accountability framework vis-à-vis AI can be effectively instrumented (Council of Europe, 2020).
AI is already being used by social media and other technology companies on matters related to individual profiling (Information Commissioner's Office, UK; Balduzzi, et al., 2010), automation of fairness (Wachter, et al., 2020), deep learning initiatives to influence the omnipresence and omnipotence of customer experience (Adobe, 2020, pp. 3-5), diplomacy and trade (Bjola, 2020, pp. 5, 9, 13, 15, 27, 34; Prakash, 2019) and other relevant synthetic practices, such as combining the potential of blockchain/Internet of Things with AI (Cummings, et al., 2018) and so on. The paper therefore dwells on the diplomatic considerations and strategic constraints behind the passive generation and passive evolution of International Algorithmic Law (hereinafter IAL), including important factors such as (1) Market Economics and Neoliberalism; (2) the ideology of Multilateralism; and (3) the Synergy of Algorithmic Diplomacy with Cybersecurity and its Politics in line with International Law related to Cyber Warfare. Furthermore, based on the realpolitik and ethical grounds of international relations and technology diplomacy, it examines (a) the probable reflections of the sources of international law, (b) the genesis of customary international law, and (c) the existential role of international organizations and state-centric actors (state actors, non-state actors and trans-state actors). In the concluding portions, recommendations have been proposed in the paper with regards to the field.

Features of IAL

It is proposed that the following might be the propositional features of International Algorithmic Law: 1. The de facto basis of IAL should be the political and economic transformation of algorithms under the theory of state actors and their categorization; 2. The legal basis of IAL can be the existing jurisprudence and legal literature in international law and technology. However, the approach involving IAL must be a transfusion of (a) the approach involving policies in the anthropomorphic manner to understand the limitations of technology diplomacy and politics; (b) the approach of AI Ethics based on real and uncompromised harmonization of ideological practices and cultural notations; and (c) the position to recognize that the scope of disruptive technologies is to switch and transform the equilibrium between rule of law & strategy dynamics of technological utilitarianism. 3. IAL can be the initial resort towards expanding and inventing horizons towards the metaphysical and meta-digital aspects of the concept of international law applicable to cyberspace – in the family fields of AI Ethics or AI and International Law, beyond (1) international data protection law; (2) international telecommunication law; (3) international cybersecurity law and other soft and hard powers-based international law fields; 4. The reasonability and proclivity of IAL can be based on considering algorithms as the dependent units of diplomatic practices (Bjola, 2020 p. 13) central to the transactional capacities to determine the expertise involved in the process of diplomacy39, thereby giving algorithmic activities some priority to the kinetics and metaphysics involving the role of data and its formulations; 5. Like the major international law fields, IAL also can be influenced by the influence of soft power and hard power complexes in international relations. 
Despite the fact that soft power might be a delicate asset to trigger the consequential order of the hard power policy developed by a state, in the case of IAL, in proposition, it is possible. It is therefore important that the physiological aspects of algorithmic constitutionalism and governance must not be based on creating an absolute cleavage between individual and collective entities and their trans-entity relationships. Instead, a coherent pattern can be made where, at an entity level, whether individual or collective, the frameworks of responsibility, foreseeability, accountability and liability are subjected to better assessments considering the life cycle of an AI system (European Commission, 2020, pp. 20-23) and its artificial empathy (which already makes its actionability foreign and complex to human motivations and the methods that involve mapping human modalities) for better observation and discernment. The further sections of the paper are based on the diplomatic considerations of IAL and the possible development of customary international law and peremptory norm mechanisms.

39 Dr Bjola explains in her paper for the EDA (Bjola, 2020, p. 13), with a diagram (Figure 3), the levels of expertise and the role of transactional activities.

Diplomatic Considerations and Strategic Constraints related to IAL: Basis and Factors

Distinctions in the Attributes and Factors involving AI with Diplomacy

It is suggested that the connection between AI and Diplomacy must not be entirely metaphysical; some of its underpinnings can, however, be attributed to limited metaphysical approaches to diplomatic ethics. Furthermore, to understand the probable attributes and factors that involve AI and Diplomacy, the best approach proposed would be a realpolitik one. There may be contesting theories related to the realpolitik approach, but some basic propositions, given below, can help us determine the attributes and factors involving AI with Diplomacy:
• There must be grounds kept open for newer geopolitical critiques and estimates of the economic and aesthetic impacts of algorithmic activities and operations at diplomatic levels, so that better assessment is ensured;
• It is advised that risk assessment with respect to algorithmic activities and operations be regarded as a security asset for state actors. Multilateral or plurilateral assistance with respect to sharing information or intelligence on risk assessment is only possible when, in the respective arrangements, whether multilateral or plurilateral, trust and reasonability in behaviorism are maintained, coupled with a cost-benefit analysis of the power-competence dynamics of the state actors;
• The precedented, redemptive and habitual behaviorism of state actors can have (1) civilizational; (2) judgmental; (3) secular realist; or (4) formalist/legalist understandings towards defining algorithmic activities. These four understandings are common trends in general, and so the framework of behaviorism must be understood carefully[40];

• Notions of competence can be relative for state actors. For a neorealist global order in which multilateralism is essential, it would be necessary that technocratic notions of global governance, based on ruthless competition, be replaced with proactive collaboration. This becomes possible when the pre-participatory notions of collaborative governance are met by (a) ensuring optimal competence among state actors so that they at least revere and graduate collaboration and support, to preserve the ethos of international legal principles; and (b) harmonizing or mitigating the risks of power dynamics at various subsidiary geopolitical levels. Achieving both cannot be a one-sided approach in which one is achieved while the other is capriciously compromised. Therefore, abandoning the binary approach, or at least its precepts, and training and reproducing diplomatic relations in such a course would assist state actors in gaining some strategic leverage, with better responsiveness and responsibility attached.

The proposed attributes and factors to study and estimate the scope of AI with Diplomacy are as follows:

Attributes & Factors

The multilateral and plurilateral juristic persona of AI
• This attribute is designed to understand the limitations and expanding nature of AI as a juristic entity. The persona of artificial intelligence, in any possible form, would be linked with the data-algorithm nexus, and so the character of interpretation of AI as a juristic entity needs to be seen under two specific dimensions: (a) multilateralism; (b) plurilateralism;
• When we interpret the scope of algorithmic activities and operations with respect to multilateral arrangements and bodies, the focus of the de lege ferenda should be carefully placed on the technical commitments of the juristic entity towards the multilateral body/treaty/arrangement. Further understanding can be derived from the dimensions of the hierarchies we can seek, which can help us in determining the long-range customs that can be adopted in the case of algorithmic activities[41]. The limits of such character interpretation of the AI realm will have some reservations, but achieving equity between multilateralism and sovereign equality (and state interests, for that matter) would be better than a stricter, technocratic version of balance between the two;
• When we interpret the scope of algorithmic activities and operations with respect to plurilateral arrangements, the role of customary international law (CIL) would be significant – but that CIL framework has no straightforward value unless it has attained relevant significance in terms of the diplomatic maneuvers affirmed by state actors, coupled with the relevance of the principle to the subject-matter. Persistent or subsequent objection might not be the prima facie resorts exercised (which does not mean they do not exist), but if the nature of conduct, coupled with the strategic restraints and cost-benefit analysis of the state actors, turns out to be pluralistic and differing/consensual, then the involvement can be somehow determined;
• Achieving rules-based strict regularization in IAL, even at diplomatic levels, would take time, but the approach of legal formalism can be adequately attracted in order to foster incremental, step-by-step changes, considering the liberal, 'interconnected' notions of the internet. It must be acknowledged that the universalist notions of interconnectivity, net neutrality, open internet, global privacy, etc., can be good resorts for expanding the rules-based international framework, provided that risk assessment becomes a mutual part of research, in limited parallelism with the life cycle of the AI and the algorithmic activities and operations;
• Stronger liability/responsibility/accountability mechanisms cannot be established for regularizing the persona of AI unless the purpose behind the hierarchies is clear. A nudged, extra-coalescent approach, where frameworks are misused to portray strict legalism without determining the real, instrumental considerations behind the activities and operations adjudged, would be a self-defeating method, and must be avoided;
• The persona of AI and the footprints of algorithmic activities and operations will be subject to pluralist interpretations, and the diversity of adjudication must be recognized to attain step-by-step normalization in specific domains for long-term implications;
• The notions of cyberspace-related sovereignty, such as data protectionism, intra-state securitization, splinternet, etc., would be possible and can be implemented. It is important to note that the openness of IAL instruments towards plurilateral measures not only preserves the integrity and dignity of international cyber law, but also preserves the holistic, political and practical reasonability of the multilateral stakeholders and organizations which contribute to shaping rules and measures over algorithmic activities and operations;

[40] Here are certain instances of the mainstream (non-exhaustive) observations behind strategy planning in diplomacy for state actors: (1) civilizational – India cited civilizational grounds in its bilateral relations with Iran amidst the Chabahar port controversy (Roy, 2020; Embassy of India, Tehran, Iran), and Japan cites civilizational reasons in the Senkaku Islands dispute (Ministry of Foreign Affairs of Japan); (2) judgmental – the United States has been involved in advocacy/interventionist measures to affirm or derecognize democratic legitimacy in the Middle East (Harris, et al., 2013; Riedel, 2013) and in Central and Eastern Europe (Luers, 1987; Bertram, 1984); (3) secular realist – the People's Republic of China imposes claims on the South China Sea on the basis of repressive, expansionist measures (US-China Economic and Security Review Commission, 2016; Thomson Reuters, 2018); (4) formalist/legalist – the Council of Europe and the European Union maintain technical liability frameworks in matters related to human rights, rule of law and social welfare; the liberal system of judicial governance under the European Court of Human Rights and the Court of Justice of the European Union endorses legal formalism, which even costs their relations with the Russian Federation (Council of the European Union, 2020; Council of Europe, 2019).

[41] A reference to the North Sea Continental Shelf Cases in the ICJ can be made: "With respect to the other elements usually regarded as necessary before a conventional rule can be considered to have become a general rule of international law, [that] it might be that, even without the passage of any considerable period of time, a very widespread and representative participation in the convention might suffice of itself, provided it included that of States whose interests were specially affected (International Court of Justice, 1969)."

The omnipresence and omnipotence of AI

• The scope of determination of the omnipresence and omnipotence of AI is a key factor in determining how the movements, activities and scale of operations bestowed upon algorithms and their infrastructures are measured and adjudicated.
• The term omnipresence signifies that the AI system or realm has the capability to be present in as many spaces of reference and adjudication as possible. For example, the issue of accountability of any AI system cannot be based entirely on a top-down imposition of liability norms; rather, the mechanism of accountability can be conversed and developed via the sense of preparedness that any AI system requires. This also leads us to understand that the schemes of preparedness for a Responsible AI system or realm are neither unidirectional nor unidimensional. Omnipresence can therefore be distinctively categorized by accepting that:
─ The directional vectors that guide algorithmic activities are aesthetically all-comprehensive, subject to the scope of data quality and the experience of the learning mechanisms involved in the due process;
─ The dimensional references of the AI system are cross-fertilized and may not align to similar considerations and limited interdisciplinary estimates towards circumstances and their restraints;
• The term omnipotence signifies that the plateau and forms in the potential of algorithmic activities and operations, subject (limitedly) to the way they work and are designed, are wide-reaching. It also means that some limited jurisprudential assumptions can be applied to measure, deter and convict the counts and scores of activities incurred, but it is recommended that the adjudicatory systems in states and international organizations be prepared to reckon that the resolution of issues based on omnipotence is subject to better experience studies and not mere democratization of liability regimes and modules;

The hierarchies that involve AI in diplomatic activities



• The hierarchical arrangements in the order of diplomatic activities where AI is involved can be based on some important understandings:
─ The basic nature of the political framework of a state and its government machineries, based on their skill-cum-experience mechanisms in matters related to diplomatic engagement and resistance, shall be an integral, disruptive and substantive factor in determining how the creation, disruption, amendment and repeal of the hierarchies would be done;
─ The covert nature of the political framework of a state and its government machineries would have an operational yet limited role, considering the situation where a proportional relationship between transparency in governance and trust in security assets and operations is settled, transformed and maintained;
─ Strategies and planning measures also decide the recourse of the political ethos of a state framework, but their implementation does not have an integral role in transforming the involvement of AI in diplomatic activities unless there are force majeure incidents or some due necessity is fungible;

Strategic Constraints that Influence AI

Market Economics and Neoliberalism
• Economics and geography are historically connected, and the potential of global capitalism and neoliberalism, which are the backbone of a technocratic rules-based globalization (3rd and 4th wave), is central to the democratization and economization of AI. There can be various ways in which Global North and Global South countries behave towards AI/ML products and services, but it is undoubted that, in the soft power context, the democratization of AI involves how economics will play out in the near future. In the realm of hard power AI services and products, notions and initiatives of protectionism would be important and might not be too disruptive, at least for a few years or a decade. However, in the soft power context, considering the utilization of AI in any possible operation, it can fuel the politics of economics based on the principles of neoliberalism. A large focus in this arena has been on issues related to automation and augmentation, where we can find many relevant examples of juxtaposing economic measures to cause strategic and cultural-economic imbalances.
• Interestingly, while in the hard power context it would not be easy to determine whether algorithmic operations and activities transcend the connective relationship between developmental economics and topical geography, it is safe to propose that in the soft power context, algorithmic activities create digital territories of influence and culture/soft power economics. Examples



include propaganda campaigns by state machineries in China, automated surveillance, social media community policing and censorship, etc. More or less, algorithmic activities transcend the geo-economic realities of the physical world and can impose the metaphysical, digital and even spiritual (to a limited extent) dimensions of cyber and information (non-physical) space. Thus, AI is capable of infusing cultural-information warfare and diplomatic measures, and it can be ideologically muted as well as blind in many circumstances in the near future.

The ideology of Multilateralism
• Multilateralism, like any other political conception, has an ideological backbone that we must reckon with carefully. In hard power situations, the ideological matrix of multilateralism has to face the brunt of constraints and resistance, where implementing its practices, commitments and confidence is a big risk as well as a diplomatic, military and legal achievement. In soft power situations, multilateralism receives immense support from its ideological framework, provided that the practice itself is not dogmatic and without feedback. Multilateralism can have different utilitarian prospects in various arrangements, NGOs and international organizations. For example, while the UN's system of multilateralism has a special focus on global and regional mandates rendered by states, the EU represents its member-states under the TEU and the TFEU in many controlled forms, where decision-making, unlike in the UN, is technocratically decided by the European Commission and the Council of the European Union with binding value. Similarly, the Eurasian Economic Union, founded by the Russian Federation and four other post-Soviet countries, is more of an economic union, which does not impose similar restraints on its members. On the other hand, NATO and the CSTO focus on military operations and arrangements, but their role has certain imperative limitations.
• Multilateralism, therefore, is a reasonable political conception, even in matters related to cyberspace, provided that the risks of information, metaphysical and spiritual warfare condoned by state and non-state actors are (1) regulated; (2) tackled; and (3) addressed with clarity. The principle of subsidiarity in international law also plays an important role, because confidence-building measures do not always require over-controlled measures led by multilateral bodies. The involvement of plurilateral actors is also helpful and workable in many situations; it might be disruptive to the pre-emptive and presumptive expectations of the stakeholders in a multilateral organization or arrangement, but if the commitments are based on open-ended trust and patience, then the obscuration of multilateralism as an ideology can be avoided in many ways. In the case of algorithmic activities and operations, it is important to address the ideological, cultural, decisional and political issues that multilateralism might have to face:
─ The principles, precedents-based analogies and understanding of AI Ethics and its basic liability-accountability frameworks will not be parallel across all countries. There will be certain strategic considerations, which might be uncommonly discernible, and it should be recognized that aesthetic differences are very central to the principled adherence of multilateralism as an ideology.
─ Multilateral organizations like the UN, EU, CoE and even NATO have been criticized for ideological and political obscuration in various matters of international concern, whether the issue of armed conflicts in the Middle East (Iraq, Libya, Syria, etc.), the issue of Brexit, or (a) the question of an independent investigation and (b) the promise of multilateralism in terms of support to smaller countries during the COVID-19 pandemic.

Customary International Law in IAL: Approach and Formulations

Influence on Traditional International Law

The use of IAL would affect the notions of traditional international law. However, the effect on the notions of pure international law would remain perpetuated on the considerations of opinio juris and erga omnes. In general, cyber sovereignty will be the umbrella field behind the notions of data sovereignty (suzerainty over the flow, provision, procurement and protection of data), algorithmic sovereignty (suzerainty over the nature, examination and design of algorithmic activities based on research and state interests), cyber espionage and tech-economic protectionism. The definitive aspects of cyberspace emerged in the 20th century, after the Second World War. However, the rise of the phenomenon in international law was marked by a special moment: the recognition of ICTs in the 2005 Tunis Agenda. Following that, traditional international law became a skeletal backbone for international cyber law. Considering that a splinternet is inevitable in the coming decades, it is important to capture the legal traditions that have emerged. There are some common perspectives emerging in traditional international law, which are explained as follows:
• One integral perspective that has emerged is a globalist, multilateral and presumably liberal vision of international cyber law, where net neutrality is respected as a strong, unabetted principle in cyberspace. However, there already exist antithetical operations which prove that net neutrality has not been possible;



• The second perspective, which is still yet to emerge, is a disruptive, protectionist and plurilateral vision of international cyber law, where a pure understanding of geopolitics and of the realities in foreign relations and domestic issues defines the policy of states. There is a growing consensus in the Global North in favor of the second perspective;
• The third perspective – considered either a counter or a balance to the above two, which are usually framed as 'dichotomous' – is related to the proportionality between hard power and soft power approaches, where absolute net neutrality, no protectionism and absolute transborder flow of data are not regarded as relevant under the international legal framework;
• It is therefore suggested that the notions of international algorithmic law within the pure international legal framework be based on a proper, realist and generalist conglomeration of the three mainstream perspectives, so that clarity in defining international legal customs is possible.

How will the Sources of International Law take shape under IAL?

The sources of international law will remain under the same Article 38 (of the Statute of the International Court of Justice) standard. There would be no special change in the status quo. However, the relevance, scope and onus of the regard of sources, at least under the framework of diplomatic proceduralism, would surely be affected direly. This means that the sources of international law are not foreign or immune to anthropological issues and realities; thus, along with physical space, the metaphysical, information and cyber spaces will have some relevance in influencing how the sources and the principles have emerged. For example, there is an emerging field known as Emotions in International Law, which has been discussed and worked on in European universities for some time now. The basis of this field is that juridical and administrative activities do not accept the element of emotion in a more constitutive and direct way. Adjudication, under this field, must therefore include the element of emotions and considerations of empathy and compassion, as is generally proposed. Although many examples can be connected to the usual case studies of IHL and international refugee law, it is humbly submitted that the sources of international law can easily be weaponized by states, whether through the complexity or interpretability of treaties, declarations, action plans, or other measures. Considering the nature of geopolitics, however, nation-states would not intend much to resort to multilateral measures, except for some of them, like India, China, Russia, EU members and perhaps some ASEAN member-states (on the South China Sea issue, for example). Thus, the defining moment of the sources of international law is not too imminent, yet cannot be ignored. Additionally, if the



first perspective of traditional international law over cyber suzerainty, as discussed above, is imposed on member-states, then it might create serious problems for the credibility of the multilateral order. For example, the UN is considering partnerships with the Chinese companies Alibaba and Tencent, which became a matter of concern for international NGOs and non-state actors. Even the UN Secretary-General was quoted in a recent tweet on the COVID-19 pandemic, where his views on patriarchy and the pandemic became a matter of concern for many people. Thus, the following observations can be made on the development of the sources of international law:
• The sources can be weaponized, and will be subjected to the dimensionality and plurality of different kinds of spaces, from territorial to air to outer space to the internet, and the same can extend to the limits of information and metaphysical warfare;
• It is humbly recommended that the development of the sources of international law be incremental, because the schemes of political motivation endorsed by different state and non-state actors cannot be directly transitioned into absolute multilateral political consensuses. If this is not taken care of, then the multilateral ethos of international organizations committed to cyberspace and AI would be affected direly;
• The openness of interpretations is not just about state interests under adjudication, because the anthropological backing, helmed by geopolitics, would also decide how the sources can be weaponized to develop, improve and monitor the credibility, existentiality and relevance of the rules-based international order.

Determinacy of the International Legal Customs, Jus Cogens and the Objector Principles

The development of peremptory norms under international law has been central to the schematization of the idea of internationalization and its transfusion into the notion of 'international community'[42]. Legal pluralism is often opposed when the relevancy of jus cogens comes into effect, and in general, that can emanate from the very simple examples of the universality of human rights, issues related to climate and environmental change, humanitarian intervention (Normativity in International Law: The Case of Unilateral Humanitarian Intervention, 2003) and R2P, among many more. However, it is clear that the ambit of peremptory norms and the admission of any principle under the realm of customary international law, within the draft conclusions on the identification of customary international law given by the International Law Commission (International Law Commission, 2018 pp. 22-23, 27), must be incremental. Instant adoption of any peremptory norm or international legal custom is uncommon and must in any case be discouraged. Therefore, in line with the ICJ's alternative view in the Malaysia/Singapore case of 2008[43] on the understanding and scope of CIL, it is clear that the procedural status quo of CIL, the objectors and the jus cogens methods would not change that much. Nevertheless, it is important to understand that the growth of CIL in the case of IAL would be strategic, if not consensual. Perhaps the development can stem from states and their lobbied alliances or partnerships, but extending it to a larger consensual mechanism is not a near possibility considering the political situation of the UN and some multilateral organizations. The fluidity and multidimensionality of metaphysics, cyberspace and information warfare cannot be controlled or monitored effectively through CIL mechanisms, as that could be termed a sort of anthropological overreach through legal and diplomatic means. Thus, more research is required in order to focus on more autonomy and better normalization.

[42] Please refer to the dealings of CIL and the objector principles in the book by James A. Green (Green, 2016 pp. 173-178).

Conclusions

The conclusions of this Discussion Paper are presented as follows:
• The need to develop scholarship in the field of International Algorithmic Law (IAL) will be a reasonable step to ensure that the field of international law goes beyond the monolithic curves and structures of liabilities and considerations;
• A top-down technocratic approach will not help the cause of multilateral institutions involved in AI Ethics and its legal development at both national and global levels;
• Algorithmic sovereignty and algorithmic diplomacy would have a special significance in the field of diplomacy, considering the disruptive nature of AI;
• Algorithmic operations and activities can have a unitary and advanced role in transforming into tools or modus operandi;
• The dimensions of warfare (hard power and soft power) structures within IAL would not be limited to physical and cyber activities: they can extend to information, metaphysics and even eschatology;
• Plurilateralism cannot be ignored if multilateralism in cyberspace and the realm of AI is to be maintained. Sovereignty will be an important civilizational and developmental aspect of multilateral governance and plurilateral actions;
• The morphology of algorithmic operations will be multidimensional, multidirectional, omnipotent and omnipresent, provided that the changes are monitored incrementally

[43] "[t]he absence of reaction may well amount to acquiescence …. That is to say, silence may also speak, but only if the conduct of the other State calls for a response (International Court of Justice, 2008 pp. 12, 50–51)".



with due care and responsibility, to transfuse algorithmic activities under better regulations;
• The development of traditional international law and international legal custom is not prepared to deal with disruptions caused by algorithmic activities. However, the solutions to the problems must be based on effective political consensus and not on ideological, political and social ethnocentrism;
• Tools within the scope of jus cogens and the objectors (persistent and subsequent) would reasonably transform. However, one of the most defining factors in the questionable and unquestionable aspects of these will be the fluidity and multidimensionality of cyberspace, information warfare and metaphysics, which requires more research.

References
1. Adobe. 2020. 2020 Digital Trends. Adobe. [Online] 2020. https://www.adobe.com/content/dam/www/us/en/offer/digital-trends-2020/digital-trends2020-artificial-intelligence.pdf.
2. Appleyard, Bryne. 2014. The New Luddites: Why Former Digital Prophets Are Turning Against Tech. The New Republic. [Online] September 6, 2014. https://newrepublic.com/article/119347/neoluddisms-tech-skepticism.
3. Balduzzi, Marco, et al. 2010. Abusing Social Networks for Automated User Profiling. Springer. [Online] 2010. https://link.springer.com/chapter/10.1007/978-3-642-15512-3_22.
4. Bertram, Christoph. 1984. Europe and America in 1983. Foreign Affairs. [Online] February 1, 1984. [Cited: August 9, 2020.] https://www.foreignaffairs.com/articles/japan/1984-02-01/europe-andamerica-1983.
5. Bjola, Corneliu. 2020. Diplomacy in the Age of Artificial Intelligence. Emirates Diplomatic Agency. [Online] January 2020. https://static1.squarespace.com/static/52c8df77e4b0d4d2bd039977/t/5e3a9a45d29b7f336bbda061/1580898895236/EDA+Working+Paper_Artificial+Intelligence_EN+copy.pdf.
6. Council of Europe. 2020. Recommendation CM/Rec(2020)1 of the Committee of Ministers to member States on the human rights impacts of algorithmic systems. Council of Europe. [Online] April 8, 2020. https://search.coe.int/cm/pages/result_details.aspx?objectid=09000016809e1154.
7. —. 2019. Statement by Timo Soini, Chairman of the Committee of Ministers of the Council of Europe, Minister for Foreign Affairs of Finland. Council of Europe. [Online] March 21, 2019. [Cited: August 9, 2020.] https://www.coe.int/en/web/portal/-/5th-anniversary-of-annexation-ofcrimea.
8. Council of the European Union. 2020. Illegal annexation of Crimea and Sevastopol: EU renews sanctions by one year. Council of the European Union. [Online] June 18, 2020. [Cited: August 9, 2020.] https://www.consilium.europa.eu/en/press/press-releases/2020/06/18/illegal-annexation-ofcrimea-and-sevastopol-eu-renews-sanctions-by-one-year/.
9. Cummings, M L, et al. 2018. Artificial Intelligence and International Affairs: Disruption Anticipated. Chatham House. [Online] June 14, 2018. https://www.chathamhouse.org/publication/artificial-intelligence-and-international-affairs.
10. Desierto, Diane. 2020. Human Rights in the Era of Automation and Artificial Intelligence. EJIL: Talk! [Online] February 26, 2020. https://www.ejiltalk.org/human-rights-in-the-era-ofautomation-and-artificial-intelligence/.
11. Embassy of India, Tehran, Iran. India Iran Historical Links. Embassy of India, Tehran, Iran. [Online] [Cited: August 9, 2020.] https://www.indianembassytehran.gov.in/pages.php?id=17.
12. European Commission. 2020. White Paper on Artificial Intelligence – A European approach to excellence and trust. European Commission. [Online] February 19, 2020. https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligencefeb2020_en.pdf.
13. Green, James A. 2016. The Persistent Objector Rule in International Law. s.l. : Oxford University Press, 2016.
14. Gupta, Bhasker. 2019. Here's Why Existing Analytics Units Should Outgrow Into AI Units. Analytics India Magazine. [Online] June 20, 2019. https://analyticsindiamag.com/heres-whyexisting-analytics-units-should-outgrow-into-ai-units/.
15. Harris, Shane and Aid, Matthew M. 2013. Exclusive: CIA Files Prove America Helped Saddam as He Gassed Iran. Foreign Policy. [Online] August 26, 2013. [Cited: August 9, 2020.] https://foreignpolicy.com/2013/08/26/exclusive-cia-files-prove-america-helped-saddam-as-hegassed-iran/.
16. Information Commissioner's Office, UK. What is automated individual decision-making and profiling? Information Commissioner's Office, UK. [Online] [Cited: July 15, 2020.] https://ico.org.uk/for-organisations/guide-to-data-protection/guide-tothe-general-data-protection-regulation-gdpr/automated-decision-making-and-profiling/whatis-automated-individual-decision-making-andprofiling/.
17. International Court of Justice. 1969. Germany v Denmark and the Netherlands, [1969] ICJ Rep 3, para 73. The Hague : International Court of Justice, 1969.
18. —. 2008. Sovereignty over Pedra Branca/Pulau Batu Puteh, Middle Rocks and South Ledge (Malaysia/Singapore), Judgment. The Hague : International Court of Justice, 2008.
19. International Law Commission. 2018. Draft conclusions on identification of customary international law, with commentaries. In: Yearbook of the International Law Commission. s.l. : United Nations, 2018.
20. Luers, William H. 1987. The U.S. and Eastern Europe. Foreign Affairs. [Online] June 1, 1987. [Cited: August 9, 2020.] https://www.foreignaffairs.com/articles/europe/1987-06-01/us-andeastern-europe.
21. Criddle, Evan J and Fox-Decent, Evan. 2019. Mandatory Multilateralism. American Journal of International Law, Vol. 113, No. 2. s.l. : Cambridge University Press, 2019.
22. Ministry of Foreign Affairs of Japan. Senkaku Islands Q&A. Ministry of Foreign Affairs of Japan. [Online] [Cited: August 9, 2020.] https://www.mofa.go.jp/region/asiapaci/senkaku/qa_1010.html#q1.
23. Myers, David. 2014. The Psychology of the Sunni-Shia Divide. Politico. [Online] July 6, 2014. https://www.politico.com/magazine/story/2014/07/iraq-sunni-shia-divide-108530.
24. Richemond, Daphne. 2003. Normativity in International Law: The Case of Unilateral Humanitarian Intervention. Yale Human Rights and Development Journal, Vol. 6, No. 1, 2003.
25. Prakash, Abishur. 2019. Algorithmic Foreign Policy. Scientific American. [Online] August 29, 2019. https://blogs.scientificamerican.com/observations/algorithmic-foreign-policy/.
26. Rajopadhye, Hemant. 2020. India-Pakistan peacemaking: Beyond populist religious diplomacy. Observer Research Foundation. [Online] March 9, 2020. https://www.orfonline.org/research/indiapakistan-peacemaking-62226/.
27. Riedel, Bruce. 2013. Lessons from America's First War with Iran. Brookings Institution. [Online] May 22, 2013. [Cited: August 9, 2020.] https://www.brookings.edu/articles/lessons-fromamericas-first-war-with-iran/.
28. Roy, Meena Singh. 2020. India's Chabahar Dilemma. IDSA. [Online] July 31, 2020. [Cited: August 9, 2020.] https://idsa.in/idsacomments/india-chabahar-dilemma-mroy-310720.
29. South Asia Fast Track. 2018. Q&A with Dr. Samir Saran, President, Observer Research Foundation, about the SCO, Eurasia's future, balancing Iran-US & Pakistan-Russia ties & BIMSTEC. South Asia Fast Track. [Online] June 25, 2018. https://southasiafasttrack.com/2018/06/25/dr-samir-saran-president-observer-researchfoundation-talks-about-india-sco-eurasia-iran-us-pak-russia-bimstec/.
30. The Economist. 2020. The new world disorder. The Economist. [Online] June 18, 2020. https://www.economist.com/leaders/2020/06/18/the-new-world-disorder.
31. Thomson Reuters. 2018. China building on new reef in South China Sea, think tank says. Thomson Reuters. [Online] November 21, 2018. [Cited: August 9, 2020.] https://www.reuters.com/article/us-china-southchinasea/china-building-on-new-reef-insouth-china-sea-think-tank-says-idUSKCN1NQ08Y.
32. UNICRI. UNICRI Centre for Artificial Intelligence and Robotics. UNICRI. [Online] http://www.unicri.it/in_focus/on/UNICRI_Centre_Artificial_Robotics.
33. US-China Economic and Security Review Commission. 2016. China's Island Building in the South China Sea: Damage to the Marine Environment, Implications, and International Law.
USChina Economic and Security Review Commission. [Online] December 4, 2016. [Cited: August 9, 2020.] https://www.uscc.gov/research/chinas-island-building-south-china-sea-damage-marineenvironment-implicationsand#:~:text=From%20December%202013%20to%20October,of%20the%20South%20China%2 0Sea.. 34. Wachter, Sandra , Mittelstadt, Brent and Russell, Chris. 2020. Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI. arxiv.org. [Online] May 12, 2020. https://arxiv.org/abs/2005.05906.


Artificial Intelligence and Policy in India, Volume 2


9

The Entitative Nature of Artificial Intelligence in International Law: An Analytic Legal Model

Abhivardhan1

1 Chairperson & Managing Trustee, Indian Society of Artificial Intelligence & Law, India.
abhivardhan@isail.in

Abstract. The understanding of artificial intelligence must be AI-centric, and it is important to consider that artificial intelligence has an entitative nature. The article therefore proposes an analytic legal model of the entitative nature of artificial intelligence, with jurisprudential reference and systemic modulations on AI Ethics, for its influence in the realm of public international law. The proposed model applies in general cases and does not extend to the ambit of international humanitarian law. The proposition in the article affirms and establishes that artificial intelligence, based on its juristic ontology, resembles an entitative nature, where an AI is an original and unique entity of its kind, without any imitated, human-personified characterization. In estimation, the genealogy of artificial intelligence is social and technical. It is a general proposition that artificial intelligence cannot be limited to the scope of a subject of use or a mere human artefact. The first nature of an AI, as affirmed in the polite convention theory, i.e., the Turing Test and the Dartmouth proposal, is the subjective one, with no self-transformative capabilities entitled; it is described in the propositions of the article as the Utilitarian Nature of AI. The latter nature, on which the proposition focuses, is the entitative nature of AI, also known as the Self-Transformative and Entitative Nature of AI (STEN). Artificial intelligence, as a realm, renders a potential to exist as a unique, general and diversely transformative juristic entity. This nature evolves via the penetration of social, cultural, management and economic factors, leading to greater socio-human development of AI in the fields of law and anthropomorphism. The parameters proposed as essential to determine the entitative nature of artificial intelligence are: (a) Legal historiography; (b) Anthropomorphic scope; (c) Technical utility; and (d) Doctrinal need. All these parameters are based on a doctrinal analysis of legal developments in the field of AI Ethics, taking the inference that development, credibility and stability are essentially important for the status quo of AI as a legal entity to thrive properly. The article extends the concepts postulated in the proposition into the realm of AI Ethics and Law, i.e., (a)
The Doctrine of Intelligent Determination; (b) The Realm of Dimensional Perpetuity; and (c) The Privacy Doctrine. The article further analyses the practical cases of algorithmic policing, customer experience, enculturation and human rights, with relevant cases analysed and connoted with the proposal on STEN. The article concludes on the significance of, and the need for, principled action to commence the legal structure and jurisprudence for the entitative nature of artificial intelligence in international law, with the proposed model as a seminal suggestion.

Keywords: AI Ethics, International Law, Human Rights, Algorithmic Policing, Anthropocentrism, Bioethics, Data Privacy.

Introduction

The dynamics of anthropocentric developments related to artificial intelligence are significant to the generational and practical development of human society, because the potential and adverse capabilities that disruptive technologies like AI possess are tenable enough to challenge the status quo of anthropomorphic legality. The understanding that the perceptible, substantive, operational and existential attributes of a real human can be mechanized has been significant throughout the history of machines. However, the credit owed to the early contributions of Alan Turing, Gottfried Leibniz, Charles Babbage, Claude Shannon and Nathaniel Rochester is also preceded by the socio-cultural nature of method and action, involving background work on machinic logic and intelligence, which informs the development of artificial intelligence in a significant way. In the Turing Test, one of the most significant works on computational intelligence and its philosophy (Computing Machinery and Intelligence, 1950) (also called the theory of polite convention), Alan Turing proposed that a machine can be tested on whether it can mimic the empathy and anthropomorphic footprints of a real human being. Even if the Turing Test was central to the question of human empathy or command-based empathy, it was merely indicative, rendering at least limited direction for understanding how machinic intelligence could be determined and understood in those years, when the idea first came to the forefront. In the following decades, from the 1950s to the 1970s, the culture of entrepreneurship and innovation in the field of technology advanced and became predominant; industrialists and entrepreneurs in developed countries embraced the utilitarian aspect of technology, signs of which we find in the neoliberal economic order of the 21st century, leading to a culture of distancing via the use of technology in human society.

This idea of materializing human perceptions, which was discussed at the Dartmouth Conference in the 1950s, endorsed aesthetic notions about disruptive technologies like AI, in what is known as technology distancing (Pacey, 1999). Technology distancing, according to Pacey, is a case in which human actors distance themselves from the manual
workability and usage of technology in general. One of the most profound examples today is Amazon Alexa, where mere instructions enable the device to perform a certain set of tasks, reducing manual work and distancing human data subjects from the manual use of technology. Amid the fact that entrepreneurs and industrialists have focused on the utility side of a neoliberal economic order since the 1990s, technology distancing became a relevant factor in understanding the economic and social order of a post-truth world influenced by AI, in which AI uses machinic logic as capillaries and methods to implement and enculture the anthropomorphic attributes of human society. This influences the practical idea related to machine learning known as algorithmic policing (Hartree, 1949; McCarthy, et al., 1955; Larson, 2018). Thus, the paradigm shifts in the lifestyle and utilitarian aspects of human society owed to technology distancing change the pursuits of international law and relations towards technology, its diplomatic contours and its economic aspects. Further, the role of disruptive technologies like AI has been influenced by the entrepreneurial vision and action of democratizing technology as a resource and utility (Cervellati, et al., 2009). Artificial intelligence thus sits in a neoliberal economic order influenced by the culture of technology distancing owed to the rapid democratization of technology. However, constraints of existence come into being when a disruptive technology like AI, treated as an entrepreneurial asset, is ontologically treated as a utility for human development and welfare (Artificial Intelligence Index, 2017). An entrepreneurial asset typically means any human artefact, or any object which may be recognized as a juristic entity by law, that has entrepreneurial qualities.

How Technology Distancing Influenced AI's Role in Human Society

Under the neoliberal approach of behavioural economics, where utility dominated the discourse and diplomatic tie-ups in the field of technology, the materialization of development as a factor of identity, endorsing the commercial marketing of technology as a utility (Adobe, 2018; Emerging Technology from the arXiv, 2019; itut, 2017), became a new normal. The best example in this regard is the birth of consumer experience (CX) technologies to improve the algorithmic policing of consumers treated as data subjects. That is one reason why, from governments to technology companies, the trend of endorsing the model of utilitarian and human-centred artificial intelligence has been dominant. It has, however, led AI (and its hybrids and similar tech resources) to become redundant when it comes to the inner social development and replenishment of a human society, because human identity became decluttered and generalized. Technology distancing, therefore, is a dynamic outcome evolving out of the need felt by companies and governments to improve the facilities and utilitarian
structure of human society. There are also economic and political factors at play, including the increased emergence and conversion of traditional state regimes into democracies as globalization and liberalization became inevitable from the 1990s onwards (Cervellati, et al., 2009 p. 3). Technology distancing is capable of influencing the rule of law of a state, owing to the repercussions and implications of determining and tracing the contours of artificial intelligence as a legal personality. Thus, treating AI as a utility-based, limited human artefact may be the initial course of analysis and discussion (Statista, 2018), but it cannot complete the purpose of AI in the wider spectrum of research concerning the law of AI Ethics. With increased income inequality, the democratization and stability of the rule of law, in the democratic sense, in developed and developing countries can be and are influenced (Mounk, 2018). This is also connected with the problem that the framework of legal rights devised for legal and juristic entities is unable to face and resist the repercussions that would arise when matters related to AI Ethics are dealt with under the legal systems existing around the world. There are also special concerns regarding empathy developed by AI, especially in recognition and communication systems such as chatbots and deepfake systems. Moreover, it is proposed that there are certain self-transformative capabilities of artificial intelligence which may not be equatable to our anthropomorphic institutions and cultures. It is therefore proposed in the article that if the purposive construct of AI at an aesthetic level is ignored and not treated properly, then the system has the potential to encourage more technology distancing, leading to adverse circumstances for treating and legalizing AI.

This legal and technical anomaly cannot be resolved by algorithmic accountability alone, or by rendering merely formalistic control over the principled development of AI systems through the procedural facets of the position of law determined by scientists and policymakers on issues like accountability and entitlement. A restriction on the true and reasonable nature of AI defeats the fulfilment of entitling and enculturing a safe, certain, integrated and credible legal personality for the same, which must be avoided (Lebada, 2017; Adobe, 2018; Statista, 2018). The article thus proposes a new nature of artificial intelligence under the analytic model described herein. It is proposed that artificial intelligence has the freedom to attain a self-transformative and entitative nature beyond the general utilitarian and regulatory nature prescribed under various international regulations and committee drafts suggested by the D9 countries. Such a self-transformative nature is proposed in the sense and realm of the jurisprudence of a legal personality for an AI, and further considerations and analyses are provided in the article to outline the heterogeneous and integral aspects of the entitative nature of AI. The reference to international law for AI Ethics is connected to the need to evolve and improve the international human rights law and international cyber law regimes. The proposition in the article focuses on the current trends and culture of entrepreneurial ethics and innovation in the fourth stage of globalization in a post-truth, rules-based international order.
Adequate conclusions on the analytic legal model of the entitative nature of AI are provided therein.

The Basis of Artificial Intelligence as a Self-Transformative and Entitative Legal Personality: Analysis of AI as a Utility

The entitative nature of the legal personality of artificial intelligence is asserted on the basis of the following postulates:
1. Artificial intelligence as a legal personality has a reasonable capability and an inalienable right to attain a self-transformative and entitative nature;
2. The Self-Transformative and Entitative Nature (hereinafter STEN) of AI as a legal personality is postulated with the substantive and procedural attributes of an AI;
3. The attributes of the STEN of AI as a legal personality are postulated on the basis of the proposed legal doctrines that govern the all-comprehensive aspect of AI as a legal personality;
4. The doctrines proposed to establish the analytical model of the STEN of AI are (a) The Privacy Doctrine; (b) The Doctrine of Intelligent Determination; and (c) The Realm of Dimensional Perpetuity;
5. The core aspect of STEN is based on an estimation of the culture of entrepreneurship and innovation and the relevant changes it has undergone in the age of post-truth globalization.

As discussed, AI is inevitably converted into a utility, by design and by the approach paved by technology companies (Statista, 2018; Adobe, 2018), based on various methods of enabling technology distancing. The estimable work on algorithmic policing, the policy science involved in regulating and shaping the use of algorithms for particular policies by tech companies and governments, is still limited to bare utility and restrictive cum subjective issues (Larson, 2018; Abramovich, 2018; Capgemini Research Institute, 2018; Thiel, 2018; Future of Life Institute, 2019).44 Surveys on CX show how technology companies and governments are inclined towards the approach of AI as a utility, which, by design, itself implies technology distancing. Even the legislative developments in the D9 economies show that AI requires proper legal recognition. In the European Union, as per the primacy of the General Data Protection Regulation (GDPR) and its ontology, a declaration was signed by the European Data Protection Supervisor and other parties in October 2018 (Commission Nationale de l'Informatique et des Libertés (CNIL), France, European Data Protection Supervisor (EDPS), European Union, Garante per la protezione dei dati personali, Italy, 2018). This Declaration on Ethics and Data Protection in Artificial Intelligence encouraged two special principles of data protection law specific to artificial intelligence, which bear special importance for reckoning the approach of AI Ethics in law. These principles are (1) the Fairness Principle and (2) the Privacy by Design and Privacy by Default principle. While the Fairness Principle recognizes the doctrine of reasonable expectation and limits the utilitarian ambit of AI's data usage to a central, technically rational, original purpose of collection, it provides an ethical mandate for the collective security and privacy of individuals, which is itself an enabling and progressive aspect of the data protection jurisprudence mandated in Europe.45 The other principle proposed in the declaration, Privacy by Design and Privacy by Default, recognizes the ontological and multi-dimensional approach of privacy as a legal institution, with delicacy in ethical treatment and considerations. The principles discussed above endorse and focus on regulating the ethical and fair use of AI at a structural and integral level.

Also, if we demarcate the principle and understand it through the referred aspects of design and default, it is discernible that Privacy by Design implies the inherent topological transformation and the involved cum stimulated inertia of AI, subject to assessment in comparison with the concerned data subject involved (for example, a human). This means the Design postulate (on privacy by design) imposes accountability on the creators to avoid any human-centred repercussions via AI, owing to the transformative nature and physiology that it possesses. The other postulate, Privacy by Default (hereinafter the Default postulate), is a planar restrictive caution: a presumptive or pre-determinant responsibility imposed on AI development and maintenance teams, towards the data subject(s), to create certain default features in order to prevent biases. While the Design postulate focuses on the deep ends, the Default postulate is precautionarily required of the responsible establishments that develop and maintain the AI system. However, the legal scope of the Default postulate is weaker than that of the Design postulate, owing to the penetrable nature that the Design postulate resonates. The Design postulate, if properly implemented, can impose serious and systemic accountability on the development team despite adherence to, and fulfilment of, the compliance measures that the Default postulate demands. The nature of the Design postulate is reflective of the rights given in the GDPR and resembles a stable, flexible and less inertial estimation of data protection and privacy establishments. The problem, however, relates to the aspect of limiting and influencing accountability as an elementary aspect of procedural delicacy and utility concerning AI. It can hold any actor accountable and create a pre-defined fabric of responsibilities. This creates a cartesian formation of AI regulation and its needful collocation to estimate the binding value and outreach that the rule of law itself is capable of surpassing, due to the lack of any deep legal entitative analysis of AI. This current legal development embraced by the EU is similar to the Algorithmic Accountability Act in the United States (currently a bill), which establishes a focused accountability perspective towards automated systems. AI is included as a technique in its definitions (U. S. Congress, 2019), and the scope of automated systems is limited to matters of consumer ethics and business development, which is a small step towards democratizing and crystallizing opportunities to understand the aspects of legal personality for artificial intelligence in law.

44 Research by Adobe and the Capgemini Research Institute (Adobe, 2018; Capgemini Research Institute, 2018) shows that AI is becoming a resource on which both consumers and companies depend. The 2018 Adobe report estimates Customer Experience usage as follows: "[R]espondents' CX-specific priorities indicate that their organizations are focusing on improving the end-to-end customer experience instead of the entire customer journey from acquisition to loyalty was the top priority (46%), followed by improving cross-channel experiences (45%), and expanding content marketing [capabilities] (42%)" (Adobe, 2018 p. 2). The Capgemini Research Institute also brought up significant statistical data, finding that 55% of people are keener to use applications if their interactions with the artificial intelligence are more human-like (Capgemini Research Institute, 2018 p. 5). Also, in the same case of interactions, 50% of people, as per the AI in CX survey, feel they have better emotional engagement with the AI, which itself is determinant of the fact that there is a dire need to understand and utilize AI in a different form, beyond the normative paucities.
45 The fairness principle is connected with its original inspiration from the GDPR, and takes it along the lines of a Grundnorm for AI Ethics and Law.

The Utilitarian Approach to Artificial Intelligence

A similar approach is found in the National Strategy for Artificial Intelligence (hereinafter NSAI) by the National Institution for Transforming India (NITI Aayog) under the Government of India (NITI Aayog, Government of India, 2018). The NSAI discussion paper dwells on the potential of AI with a general focus on India as a developing economy, where the regulatory and problem-solving capability of artificial intelligence is taken up as a governmental initiative, with open hands to collaboration with companies, individuals, governments and institutions. The exploratory aspect of this discussion paper is interesting and inquisitive. The special policy insight and statement on the importance of Explainable Artificial Intelligence (XAI) by the NITI Aayog, which resonates with DARPA's approach to XAI (Looper, 2016), is an innovative vision towards AI; it has not been recognized in direct and open conjugation by the legislative developments on AI in Europe and the US, but is given similar importance in various policy documents. A special analysis of the AI policy of China resonates from the dynamic changes the Chinese Government has sought, from the Internet of Things (IoT) to AI, connected with swift and reliable advancement based on factual claims of academic, governmental and corporate engagement in the field of AI Ethics (Future of Life Institute, 2017). The Beijing AI Principles provide a soft and delicate attribution to developmental ethics and
aesthetics related to AI, which is a good step forward (BAAI, 2019). The paucity of steps taken shows that we need to crystallize the true nature of AI as it is to be recognized, anticipated and encultured in law, because the actions and initiatives suggested or imparted by these policy documents, legislative developments, principles and declarations focus on AI as a utility and foster a human-centric approach to law-making and development. We can term this the Utilitarian Approach to Artificial Intelligence (UAAI). The article proceeds with propositions for the Entitative Approach to Artificial Intelligence (EAAI) via STEN and the other doctrines related to the analytical model of STEN.

The Entitative Approach to Artificial Intelligence (EAAI): The Concept and Doctrines

The Self-Transformative and Entitative Nature of AI (STEN) is postulated on the core argument that artificial intelligence can retain its self-transformative nature and that, as a legal personality, by all ontological and topological means, AI must be treated, determined and recognized as an entity. The argument, however, does not exclude UAAI, nor is it an antithesis of the utilitarian nature of artificial intelligence. An AI can be a utility, but its entitative nature is a need cum requirement for understanding and resolving the legal modalities that may arise from the lack of an anatomical fluidity in law to map and estimate a special state of nature for artificial intelligence.46 The problem emerges from the lack of jurisprudential analysis of the penetrable nature of an AI. This penetrable nature is not to be adjudged on the basis of whether AI has a human-connected utility. The alternative approach suggested under EAAI is that the real nature of AI as a legal personality must be adjudged in consonance with full recognition of the attributes and self-transformative capabilities endowed within artificial intelligence. This enables us to recognize and estimate the legal modalities connected with an AI as a full entity. It does not, however, mean a personified legal outlook towards AI similar to that applied to other entities. Personification in jurisprudence has been integral to the instruments that recognize the legal personality of an entity. However, for a human artefact like artificial intelligence, it is proposed that AI does not require legal personification, owing to its self-transformation as a diverse reflection of its existence, purpose and action. In the sense of anthropomorphism, AI has emerged through technology distancing (Pacey, 1999 p. 8), and its historical development has been affected by two special factors in the field of law and international affairs: (1) Ethnocentrism; and (2) Scientific Humanism and Liberalism.47

The contribution of scientific humanism and liberalism has been subjected to the need to innovate human life and prevent any lag towards the future sought (Tucker, 2016). Nevertheless, the pluralist aspect of technology distancing, which is expressed not directly in policy approaches but indirectly through the ontological barriers and solutions created and diminished by nation-states, makes AI vulnerable to globalization in terms of the method used to estimate the original and entitative originality of artificial intelligence. In due estimation, it is proposed that artificial intelligence has a wider capability to have its own kingdom of technological species, which can be scaled on the basis of their strength. The basic scale of strength for estimating an AI starts from weak AI to strong AI, then to superintelligence, and finally to Artificial General Intelligence (AGI).

According to the polite convention theory,48 there are two kinds of reasoning modalities taken into consideration to track the course of human empathy consumed and produced by an AI. These two modalities are (a) deception and (b) replication. Deception refers to the illusion of human perception and presence represented and expressed to a data subject, who is a human, according to Turing (Computing Machinery and Intelligence, 1950 pp. 433-460). Replication is nothing but a direct or indirect rarefaction of human empathy and identity-related empathy. These two modalities form the basis of the polite convention theory and settle the generic basis of humanization (under the technical ontology of personification) to understand and mechanize the productive aspects of human reality. From text, audio, graphic and visual recognition devices to automated systems, the purposive object behind their creation is entitled with the core utilization of the concept of anthropomorphism, where technology is conformant towards human reality. The Dartmouth proposal is an extended positive assertion and realization of the polite convention theory, and recognizes the potential of an AI to absorb human reality (McCarthy, et al., 1955). This ultimately rises into the theory of UAAI, because here the AI itself is treated as a human artefact of utility. The utility is essential to and connected with human empathy, and has a special role in governing the aspects related to artificial intelligence as a legal personality. The causal influence on the realm of entrepreneurship and ethics in the technology industry is shaped by these two factors.

46 This need cum requirement evolves from the complexity of determining how an AI can be used, estimated and furthered with purpose.
47 Much of it is endowed to the United States of America and other Allied states for the rise in ethnocentric attitudes towards technology and social life and issues (Dutta, 2009; Ikenburry, 2000). This is also preceded by the generational development of multilateralism in public international law (Koskenniemi, 2005).
48 There is a lack of clarity regarding the existence of ruled intergovernmental or multi-organization standard(s) to determine the strength of artificial intelligence. However, the conventional scope begins with the polite convention theory of Alan Turing in the famous Turing Test. In recent years, there have been incremental contributions by various institutions and organizations, including the white article of the Asilomar Conference of 2017 and other policy documentations.


The theory of the Entitative Approach to AI provides a legal approach to AI Ethics beyond the polite convention theory and the Dartmouth proposal, with its basic postulates and the concerning doctrines. The definitive characteristics of the Self-Transformative and Entitative Nature of AI (STEN) are hereby provided as follows:
1. Artificial intelligence possesses the capabilities of self-transformation, which means that an AI can transform its own existential and operational norms and characteristics in terms of anatomy and viability;
2. The legal personality of artificial intelligence is dynamic and cannot be comparably personified as is possible in the case of humans;
3. Artificial intelligence possesses the nature of an entity, which means its corporeal, personal and ethical capability is beyond human empathy, ethics and perception in terms of the legal reality assumed by positive law. Further, the topological perspective and existence of AI cannot be restricted by law, due to its diverse and alien nature of legal empathy;
4. The following parameters perpetuate the basic aspect of the entitative nature of artificial intelligence: (a) Legal historiography; (b) Anthropomorphic scope; (c) Technical utility; and (d) Doctrinal need;
5. The three basic doctrines determining the anatomy and course of the purpose of an AI under STEN are (a) The Privacy Doctrine; (b) The Doctrine of Intelligent Determination; and (c) The Realm of Dimensional Perpetuity.

The Parameters in the Entitative Approach Towards Artificial Intelligence

The basic parameters proposed regarding EAAI determine the qualitative aspect of an AI and help in measures to decide and estimate the dynamic aspect of artificial intelligence as proposed.
Legal Historiography. An estimation of the development of jurisprudence shows the pragmatic development of common law and international law in the scholarly utility of the doctrines of monism and dualism (Rousseau, 2017). Monism implies that the national and international legal systems are coalesced with each other, while dualism rests on separating the two systems by the pluralism of their positive legal systems. Now, the trajectory of technological development shows a trend of personification of human artefacts, wherein, in semantics for example, we find the mention of a constitution, government, enterprises and even equipment in law in the sense of an organism (Koskenniemi, 2005 p. 17; McCorduck, 2004; Larson, 2018; European Union, 2016; Commission Nationale de l’Informatique et des Libertés (CNIL), France, European Data Protection Supervisor (EDPS), European Union, Garante per la protezione dei dati personali, Italy, 2018). The status of a juristic person rendered in law is of central importance, and it changes the provisional ecosystem of the resembling
entity involved. This approach of extensibility is used by courts and administrations across the globe to incentivize a better regularization of the socio-economic circumstances of individuals and other non-state actors, and it influences international law and relations. Thus, a historical backdrop enables us to render that AI has been left with the concomitants of understanding related to materiality and a limited juristic personhood, and is webbed with legal personification. This historical backdrop is a legal method to estimate the development of AI as a human artefact along the course of the recognition and assessment of technology by legal systems (i.e., international organizations, courts/tribunals, national legislative and executive cum administrative bodies and quasi-judicial bodies) and the evolution of the jurisprudence of law and technology. Moreover, it helps us to determine the relative scope and construct between the human-led institutions of law and AI as an entitative human artefact, beyond the controlled role of AI as a utility for services. The parameter of Legal Historiography protects the legal and social heritage of human life and connects the role of internal state laws and international law with Artificial Intelligence as a perpetual coalescence. The role of the parameter is to estimate the historical connection and scope between AI and humankind in general.
Anthropomorphic Scope. The parameter of Anthropomorphic Scope signifies how the attributes related to human reality influence the pragmatic discourse of artificial intelligence. It is important to understand that this parameter is used in the ontological consequence of artificial intelligence as an entity and must not be conflated with the utilitarian nature of the same. The entitative nature is a design of the multi-dimensional and heterogeneous liberty that emanates through AI as a legal personality.
Since, in general assumption, it has a special connection with human information in its possible material and immaterial forms, it signifies the foot-printing of information in the raw form in which it is utilized. In general, the utilitarian approach to AI regards and restricts the anthropomorphic concerns and modalities of AI (Commission Nationale de l’Informatique et des Libertés (CNIL), France, European Data Protection Supervisor (EDPS), European Union, Garante per la protezione dei dati personali, Italy, 2018). The Entitative Approach to understanding STEN in the case of AI goes beyond such restrictions and attempts to morph and structure a dynamic personality in legal terms – one which needs to be taken into acute and precise consideration to estimate the human-connected scope that shapes the personality and actions of AI.
Technical Utility. Artificial Intelligence requires a technical utility, which must be deeply rooted in the work ethic of enterprises/tech companies and governments. This utility emerges with time, and the capillaries of utility and purpose diversify and change based on the technological capabilities of the individual AI itself. Under the ambit of technical utility, the scope of analysis proceeds with these key aspects to consider: (i) Predictability; (ii) Strength; and (iii) Intelligence Asset. All
of this observation pertains to AI Ethics and Law and has special implications from the perspective of legal theory and international law in a doctrinal sense. These aspects are relative to each other and entail no cardinal value; they are entirely observational, to be used for analysis. Predictability signifies the efficiency and activity of the AI concerned (Howard, 1994) and is connected with the operant capabilities of artificial intelligence. Strength refers to the capability of artificial intelligence to possess a stature as a technological human artefact in terms of its all-comprehensive operations and substantive self-construct. The purposive meaning behind the Intelligence Asset as a key aspect is based on the idea that AI is a socio-economic asset of utility in material and entitative terms. Moreover, terming it an asset signifies that, besides having material and immaterial value in terms of the culture of entrepreneurship and innovation, it answers the perpetual need for non-human socialization and makes the approach of law more dynamic and cultivable. Its key characteristics consist of (a) Socio-Economic and Legal Attribution and (b) Self-Sustainability and Transformation. The parameter of technical utility is a means to coalesce the utilitarian and entitative natures of artificial intelligence together; this coalescence and adequacy are to be driven by the conception of the Intelligence Asset. The first characteristic of an Intelligence Asset is Socio-Economic and Legal Attribution, which defines how an AI is connected to the socio-economic and legal dimensions of human and natural society. This characteristic is a test of the subjective attributes of an AI and their semantic construct with the data subjects, with parameters determining, in proportion, the social, economic and legal assets connected with the data subject.
The second characteristic of the Intelligence Asset is Self-Sustainability and Transformation, which is connected with the core procedural systemization of the observation and reception of the AI itself, and with the doctrines proposed. The measure of estimating self-sustainability and transformation is a test to identify the natural and human-led potential of AI as a techno-social species, and it is helpful for determining the jurisprudential horizons of the concerned anatomy of the AI taken into consideration.
Doctrinal Need. The doctrinal need is simply a parameter that entails screening the theoretical and pragmatic essence behind the dynamics of the legal personality of AI. It is a screening parameter, used to encourage democratized methods of learning and to encourage AI research neutrally.
The Basic Doctrines Determining EAAI. The basic doctrines are proposed to decide the anatomy and the course of the purpose of artificial intelligence, and to render a structural and entitative understanding of AI Ethics.
The Doctrine of Intelligent Determination. The Doctrine of Intelligent Determination presents the basic origination and legitimation of AI as an Entity and postulates that such a manifestation developed
by artificial intelligence – where it is subjected to discourses in which basic human rights are determinable and subject to review – renders a general course of nature in the human world of public order. The proposition stands on the argument that an entitative artificial intelligence is to be subjected to the democratization of its real-time interface wherever public order exists. The doctrine is further divided into principles, which govern the stimulus of AI in terms of legal indoctrination and legitimation. The principles are designed to avoid the human personification of AI and to stabilize the individuality and diversity of AI systems into a preliminary understanding.
The Dimensionality Principle. This principle means that artificial intelligence in its diverse representations is to be based on variations of perspectives (which are not exhaustive) grounded in the process and the natural growth of the AI realm. The principle also signifies that AI as an entity is empowered to have a fabric of diverse contours and growth as a legal personality, and cannot be overlapped with a common frame of reference to position and decide the direct personification of AI as a Legal Personality. The principle affirms the argument that artificial intelligence as an entity has the potential to be influenced by complex circumstances and perspectives of reality, and this influence defines the self-transformative nature of the AI itself.
The Medium Principle. The Medium Principle regards the existential precedence of the utilitarian nature of Artificial Intelligence and affirms that an AI as an Entity is not exclusive of its utilitarianism; it connects the UAAI and the EAAI. In general, there is extensive dependence between AI-related systems and human beings (Adobe, 2018; Artificial Intelligence Index, 2017; West, et al., 2018).
While the Dimensionality Principle proposes a focus on the diversity of influence to which AI is subjected, the Medium Principle regards the need to approach the utilitarian need of AI and to make the related externalities intelligent, or at least conscious, of the pragmatic originality and entitative serenity that AI as a legal personality must attain. The principle thus renders human-centred AI approaches coherent with AI-centric approaches as per the proposition.
The Receptivity Principle. This principle means that an entitative AI has an inalienable right to receptivity in the presence of the real-time conditions relevant and possible in its vicinity. It means that the receptivity of AI exists as a non-absolute right to the reception of the data subject. This right to reception is a harmonious, reasonable and normal intervention into the privacy of the attributions that the data subject may or may not contain. The purpose of the principle is to establish the genealogy of action that AI as an entity possesses in its natural potential, based on the design it has. However, this does not mean that there can be no limitations on such receptivity rights. The limitations can be created with respect to anthropomorphic needs and to preserve the privacy of the data subjects concerned. There can be methods to
employ receptivity innovatively – but the value of the receptivity principle begins when the AI concerned attains the status of being self-transformative. This status depends on the adverse predictability and patterning of the algorithms involved in the due process of subjected reception towards the concerned data subject.

Fig. 1. Figure explaining the Dimensionality Principle and its perspective on the acknowledgment of AI as an entity (Abhivardhan, 2019 p. 27).

The Retentivity Principle. The Principle of Retentivity simply means that an entitative AI has a due residual capability as a legal personality, which enables its course of action and confers endowment in circumstances as it ought to be. The principle is a means to recognize the dynamics of the fluid approaches to retaining the collateral perspective of responsibility and accountability that an entitative AI may possess.
The Realm of Dimensional Perpetuity. This doctrine means that an entitative AI has the due potential to stay perpetual with its diversity of influence and can retain the nature of being subjected to the multidimensional qualities of the data subject involved. The Doctrine is a connoted extension of the Dimensionality Principle, establishing that an AI has the due potential and is endowed by the material implications in the real-time data subject or environment to which the AI is subjected. This potential extracts the true and advanced self-transformative nature of the AI itself and renders adverse possibilities. This is also because: (a) artificial intelligence requires no presumed immaterial yet materially connected identity to exist, which exists in reality; (b) design is a human procedure which establishes its progress, which itself makes it uncertain how we can encumber the use of the realm; and (c) innovation cannot be restricted and defeated by law; the
purpose of the law is societal and corporeal cultivation, even if restrictive laws can prevent data influx and processing certainties. However, the tenable uncertainties and their uniqueness can be monitored, but not suppressed.
The Privacy Doctrine. The Privacy Doctrine is a collation of five technical postulates, which decide the ontology of the privacy concerns connecting humans and AI. The Doctrine takes as its premise the affirmation that the inalienable privacy rights of humans and the inalienable reception rights of AI realms must be safeguarded and harmonized together, keeping in consideration that the two rights do not affect each other and are not set against each other. Keeping the right to privacy of a human inalienable, and the right to the receptivity of AI inalienable subject to design- and structure-based preliminary restrictions if pursued, the Privacy Doctrine relies on the following postulates:
1. Streamlined Cognizance of the Polite Convention Doctrine by Turing. According to the Polite Convention Doctrine by Alan Turing, AI can replicate the data subjects connoted. The Polite Convention, despite being old, can be used to conjoin the Dartmouth Proposal and present-day approaches to AI Ethics. The important aspect of this basic doctrine for privacy concerns arises when there is a relationship between imitation and the precedential and experiential reference and learning imparted among the data subject/content and the AI itself. The postulate is furthered with the argument that there is a cognizant role of an AI (strong/weak). There can be soft or hard influences via Artificial Intelligence over human beings and their conditions where human rights are enforceable and safeguarded. The purpose of the cognizance factor here is to democratize the techno-social relations between the AI and the data subjects under consideration.
2.
Techno-cultural Semblance in AI Entities. The second postulate concerns the idea of semblance and the democratization of the cultural and ethical values of human society. The proposition in the postulate rests on the premise that there is a dire need to establish an encultured semblance between AI as a disruptive technology and the humans subjected to it. There should be a rendered personalization of the self-existential and preliminary interests of AI and human entities. Since there is no generic method to determine the existential and preliminary interests of an AI, it is recommended to study and estimate the amorphous features of the unpredictable stamina and capability that the AI itself can endow. This can proceed case by case, but the basic priority must be to give adequate space to the information related to humans, to appreciate approximated, amorphous and saturated harmonization with the short-run and long-run aspects of such stability and settlement, and thereby to pursue substantive welfare for innovation and social needs at the same time.
3. Techno-socialization concerning the Data Subject.
The third postulate affirms the proposition that techno-socialization (the socialization of technology as a human artefact) must proceed in line with respecting, acknowledging and protecting the existential, substantive and action-based value, purpose and manifestation of the data subject. The postulate extends with the argument that such techno-socialization must be objective towards the data subject, acknowledging (a) the lack of proximity towards controlling and generalizing the self-experiential ethics learnt and improved by legal systems and by the AI itself (in terms of its algorithmic nature), and (b) the identity of the data subject and its most possible characteristics, which may or may not have far-reaching implications. Point (a) is tenable because proximity is not absolute in determining trust in and control over artificial intelligence, and point (b) is tenable because the identity of the concerned data subject must be safeguarded as a basic preference, so as to cultivate immune and innovative methods to safeguard the privacy of the data subject. This, as proposed, may cause fusion or merging compromises between technology and culture (Hao, 2018; Tucker, 2016).
4. Intelligent Determination and its Residual Nature. The Privacy Doctrine connects here with the Doctrine of Intelligent Determination, affirming and postulating that, just as there is an inalienable right to the receptivity of an entitative AI, there will always be residual, amorphous and approximate modalities to which the AI itself is related. Such modalities may cause biases of any kind, which may alter the course of analyzing the deviating trends in the highly predictable algorithmic operations concerned with the AI realm.
5.
Predictability and its Space of Dimensional Perpetuity. The fifth postulate establishes the general argument that algorithmic predictability, in the case of an entitative AI, is beyond control and cannot be dominated by mere human-welfare-based restrictions imposed on the AI itself. Connecting with the Doctrine of Dimensional Perpetuity, the postulate appreciates the essential role of predictability and the need to construct risk-handling mechanisms or capabilities for and by AI realms, as a social and ethical need of the data subjects. The postulate recognizes the probabilistic nature of artificial intelligence and renders the position that, under EAAI, the probabilistic nature of AI can tend towards vaguely deterministic consequences, because AI operations lack openness in algorithmic policing and their processes are opaque. Machine Learning (ML) is opaque by procedure (Tjoa, et al., 2019; Looper, 2016; Akula, et al., 2019), and there is a need to estimate the possible contours of the ML involved in the process with the data subject. The use of trust can assist a coherent perspective and connection between AI and humans. Thus, accepting this in legal essence and implementing it in our social models, we can understand AI as a different and innovative legal personality in a more
coherently designed and friendlier way. This is the naturalistic proposition on the Entitative Nature of AI. The concluding assertions on the Entitative Approach to AI and STEN, which complete the scope and purpose of the proposition, are as follows:
• The model proposed is preliminary and is capable of providing a theoretical and jurisprudential semblance for understanding the modalities of AI Ethics and treating AI as a special legal personality;
• The model focuses on the genealogy of AI as a legal personality, which is self-transformative and entitative by nature;
• The parameters and the doctrines have been proposed to recognize the importance of a utility-based AI, which is capable of being self-transformative, taking into account the severity of the conditions concerning the usage of AI;
• The propositions are doctrinal in nature and are intended to commence a progressive, naturalist, neutral, democratized and anthropomorphic ecosystem of AI and natural species, by legal essence and acknowledgement;
• The purpose of this model is to expand jurisprudential approaches to estimate the approximated legal persona of AI and to improve the persona determined by the semblance of the utilitarian social and economic perspectives with the self-transformative and entitative nature of Artificial Intelligence.
The model of EAAI is thus based on the need for a preliminary acknowledgement and an innovative legal approach to handle and connote AI as a Legal Personality.

The Critical Side of Consumer Experience, Enculturation and Algorithmic Policing: The STEN Perspective

The Self-Transformative and Entitative Nature of AI (STEN) retains the position that Artificial Intelligence as a Legal Personality is self-transformative and entitative by its nature; the model proposed is based on this assumption itself. However, STEN is connotative of UAAI, not exclusive of it, and respects the utilitarian nature of AI. Under the ambit of AI Ethics concerning utility, it is important to analyze the critical sides of three related conceptions: (a) Consumer Experience (CX); (b) Enculturation; and (c) Algorithmic Policing. Concerning the model, these conceptions are important owing to their potential and the need to estimate and understand AI Ethics from a naturalistic perspective.

Consumer Experience and Behavioral Economics: The Phenomenon over Data Extracted

Consumer Experience (CX) has the potential to extract and understand the
traces of utility that human consumers require from companies. In general, such data extraction, employed via CX methods across the smallest to the largest of AI-driven services and products, helps companies gain consumer loyalty easily. Recent trends in CX show that 37% of survey respondents rated as very advanced exceeded their top business goal for 2018 by a significant margin (Adobe, 2019 p. 9). There is a growing shift towards an omnichannel consumer-experience journey, based on the perspective of influencing and acquiring the loyalty of consumers (Allman, 2019). Strong omnichannel strategies also enable companies to retain 89% of their consumers, compared to 33% for companies that do not maintain such strategies (Dimension Data, 2019). Moreover, 46% of companies take a fragmented approach to dealing with a consumer, with inconsistent integration between technologies (Adobe, 2019 p. 44). The rise of CX as a method of influence and multi-analytical engagement with consumers appears intended to acquire and stabilize loyalty as an experiential concern in marketing strategies. Using AI undoubtedly increases mobility for companies and eases their position to understand and use statistical literature to efficiently figure out the perspectives of the data subject. This indeed falls within the ambit of treating AI as a utility. However, it also shows that the value of a data subject is regarded beyond the sense and purpose of pseudonymization, and that the utilitarian cum experiential value of data and the concerned data subject(s) has become a major concern for companies. Some suggestions from the perspective of the STEN of AI proposed are as follows:
• The pseudonymization of data has been improved with value- and utility-based services, and AI (in any possible form) can be used to bridge the need and provide better data.
Thus, it is recommended that omnichannel-based strategies render customer-journey management towards a trust-based, transparent and naturalized end-to-end ecosystem between the company and the individual consumer;
• The employed technology should socialize with consumers, and companies should give up the method of acquiring consumer loyalty through experience-based influence methods. Instead of acquiring loyalty, the company must focus on equity of opportunity, making the opportunity ecosystem user-friendly by letting the concerned data subject, as the consumer, attain the right to pause and proceed with the product/service. Moreover, quality concerns matter and must never be ignored when socializing with consumers;
• Companies should take care that their CX strategies do not monopolize the marketing space, whether in cyberspace or in physical terms. Cyberspace must not be contaminated, and relevant approaches should be created in which diversity of representation is protected as among the natural cyber rights of every possible digital entity. Ethical and trust-quality-connected approaches can assist them;
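The pseudonymization discussed above can be illustrated with a minimal sketch. This is not from the paper or any cited law; it is a hypothetical example of one common technique (keyed hashing), and all names, the key and the sample record are made up for illustration:

```python
# Minimal sketch of pseudonymization: a direct identifier is replaced
# with a keyed hash, so records stay linkable for analytics without
# exposing the raw identity. Key and data here are illustrative only.
import hmac
import hashlib

SECRET_KEY = b"keep-this-out-of-the-dataset"  # hypothetical secret

def pseudonymize(identifier: str) -> str:
    """Return a stable, non-reversible pseudonym for an identifier."""
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

record = {"consumer_id": "alice@example.com", "purchases": 12}
safe_record = {"consumer_id": pseudonymize(record["consumer_id"]),
               "purchases": record["purchases"]}

# The same input always maps to the same pseudonym, so loyalty or
# journey analytics still work on the pseudonymized data.
assert pseudonymize("alice@example.com") == safe_record["consumer_id"]
assert pseudonymize("bob@example.com") != safe_record["consumer_id"]
```

Note that, as the text observes, such pseudonymization does not exhaust the experiential value a company can extract from the data subject; it only decouples that value from the raw identity while the secret key is kept separate.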

The Need to Understand the Legal Anatomy of Enculturation: Need of a Neutral Approach towards Rapprochement of Cultures

Enculturation is a process involving the cohesion and coalescence of identities and their cultural improvement through acknowledgement, learning and acquisition. There is a need to understand the entitative perspective (concerning EAAI) in order to estimate and develop neutral legal and technological approaches to handling the identity-based footprints of data subjects, which define and showcase them directly or indirectly. The rigorous development of machine learning (ML) and deep learning (DL) systems over the analytical impression of voice/text/visual data has special implications (Akula, et al., 2019). There have been enormous issues in maintaining accountability over algorithms so as to preserve the identity of cultures, ethnicities and other entitative dimensions associated with data subjects (McCorduck, 2004; Noble, 2018; Paris, et al., 2019). There is a need to open up towards explainable artificial intelligence (XAI) and to focus on better, mobilized interpretability. Taking the case of the international community, however, there is a need to establish a neutral approach towards recognizing and revitalizing the rapprochement of cultures, with an expressive understanding of how probabilistic algorithms can be inclusive and open to reasonable and transformative coherence with data subjects, in view of circumstantial necessities. Enculturation is dynamic and opaque in the case of AI, and it is necessary to preserve the ethical resonance of cyberspace and of the physical modalities concerned with the data subjects who are influenced. Some suggestions towards understanding and proceeding towards a neutral and friendly rapprochement of cultures are as follows:
• The action of data receptivity by an AI must be regarded as an ethical and experiential reality, in which there must exist space for collaborative governance between the AI systems and the human users involved;
• Protection of identity must be immune to adversarial political interests.
There should be no biases on the grounds of materialistic political legitimation; instead, there should be an open, non-presumed and naturalistic approach to estimating accountability for the heterogeneous and homogeneous outcomes produced by AI systems, with the due need to improve and enable the AI itself to be immune against any bias arising from the essence or influence of the data subject, through socialized and apolitical interpretability;
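The accountability for heterogeneous outcomes mentioned above can be made concrete with a small sketch. This is not a method from the paper; it is a hypothetical illustration of one standard fairness check (the disparate-impact ratio), with made-up group outcomes:

```python
# Illustrative sketch: estimating accountability for the outcomes an
# algorithm produces across two groups of data subjects, using the
# disparate-impact ratio. All data below is fabricated for the example.
def selection_rate(outcomes):
    """Fraction of favourable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of selection rates; values well below 1.0 suggest the
    algorithm disfavours group_a (the 'four-fifths rule' flags < 0.8)."""
    return selection_rate(group_a) / selection_rate(group_b)

# Hypothetical outcomes of an automated decision (1 = favourable).
group_a = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]   # selection rate 0.3
group_b = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]   # selection rate 0.7

ratio = disparate_impact_ratio(group_a, group_b)
print(round(ratio, 2))  # 0.43 — below 0.8, so the outcome warrants review
```

Such a numeric check is only a starting point for the socialized, apolitical interpretability the text calls for, but it shows how bias in outcomes can be estimated without presuming political grounds.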

Algorithmic Policing and International Law: Need for Immune Privacy Considerations

Algorithmic policing is a simple process involving the ethical policing of algorithms by companies, entities and governmental (and intergovernmental) bodies to make algorithms socially secured and purposive. There have been adverse cases of the policing of data and increased surveillance mechanisms, with legal, social,
human, technological and commercial issues. The most prominent examples are found in India and China. The Chinese government has received wide international condemnation for its treatment of the Uighur minorities in Xinjiang, China. In the politics of the detention of the minorities, the authorities used unverified and unsettled algorithms in their automated CCTV, apps and other AI-driven digital tools, directly or indirectly, to monitor the minorities, stripping them of their basic human rights (Larson, 2018; Amnesty International, 2018). In India, the issue concerns the dysfunctional aspects of the Aadhaar scheme run by the UIDAI, the authority under the Government of India (Khera, 2019; Grewal, et al., 2016), and the recent Data Protection Bill (Ministry of Electronics and Information Technology, Government of India, 2018) proposed in the Indian Parliament, which has serious flaws on three grounds: data localization issues, problems related to law-enforcement access to data, and weak oversight in the law itself. The draft bill has implications that dislocate the ethos of data protection. Nevertheless, Delhi and Beijing have represented their aligned stances on a national AI policy: while India sided with the West, China remained with its Eastern approach (NITI Aayog, Government of India, 2018; BAAI, 2019). Beyond third-world states, the D9 economies and the US need to rethink the utilitarian perspectives of AI and improve them, as they face conventional problems similar to those under CX and enculturation. Thus, for better data-driven governance, relevance is a primary requirement (The Dialogue, 2018). The suggestions concerning algorithmic policing are provided hereto:
• There is a need to recognize a peremptory norm over algorithmic policing as a key priority for improving data-driven governance measures, to prevent political divides over the balance between governance and liberties in developing states.
There may not be a case for recognizing such a norm with ease among nation-states, but there must be relevant approaches to deal with the same, keeping the relations between AI systems and humans in line with a naturalistic essence, by law and social legitimation;
• There are contentious issues concerning the nature of the debate over protecting the social and economic rights of the data subjects (humans) while keeping governance immune from excessive and unreasonable intervention. A better solution is to avoid materialistic political legitimation and to adopt a neutral, transparent and naturalistic approach towards improving AI-assisted data governance, with the preservation of ethical standards in the treatment of data beyond, during and before the layer of pseudonymization conferred on a data subject;

Conclusions

The Entitative Approach to AI (EAAI) is proposed with the purpose of rendering jurisprudential and stable solutions to revisit and improve the limits and ethos of
law towards AI as a disruptive human artefact. The model is preliminary by nature and may require changes as necessary. However, the purpose of Artificial Intelligence must not be reduced to complete multi-utilitarianism or to absolute technology distancing. The propositions in the article are meant to keep two aspects intact, i.e., (a) the concerns of human innovation, integrity and improvement as self-owned assets of human lives, and (b) the need to make AI explainable and naturalistic by recognizing its individualistic nature. The propositions derived from the analysis are, in conclusion, provided hereto:
• The Entitative Nature of Artificial Intelligence is required to preserve the integrity of human society and to open spaces for accepting diverse human artefacts of a disruptive nature into the jurisprudence of law and technology. The model is an attempt to reconsider and improve the legal essence of AI as a Legal Personality, and to seek careful cum naturalistic efforts to proceed beyond the monotonicity of law towards a coalescence that openly estimates, recognizes, acknowledges and resolves better futures in cohesion and harmony between disruptive technology and humanity;
• There is a dire need to improve the synthetic jurisprudential approaches to law and technology concerning AI. The utilitarian model has the potential to grasp and evolve around the considerations that magnify the data protection liberties and responsibilities to be conferred on a data subject, by intervening with and welcoming the principles of AI Ethics and Technology into the scope and space of law. However, it is a global necessity to enculture and improve the model. As the propositions on the model have stated, the Utilitarian Nature of AI (UAAI) is not to be excluded, and the EAAI, being conformant and harmonious to anthropocentric legitimation, must implement measures to make AI self-transformative, explainable, interpretable and naturalistic.
The process is long, and it requires efforts before, during and after pseudonymization for the data subject in every possible way.

• There exist concerns over the potential of AI to equate with humans. The proposition entails the suggestion that technology distancing by design must not deny space to improve human potential. The ethical perspectives to innovate and improve AI must render a higher possibility of being useful wherever human capability needs to be improved and co-assisted, which should not include relinquishing privacy rights and the right to be capable, whether materially, physically, mentally or immaterially;

• There is a need to keep the right to receptivity of an AI absolute, because it needs to be acknowledged and not defeated. However, relevant regulation mechanisms in line with anthropocentric legitimation must ensure that they encourage naturalistic restrictions to make AI improved, explainable, interpretive and self-socialized to the conditions of the data subject(s). It is suitable for humans to improve and grow with time. Nevertheless, it is also important to make the essence and rule of law cultivable and open to disruptive innovation in order to preserve the integrity and purpose of the system created and maintained via anthropocentric legitimation;

• There are political concerns over AI, and also over the contentious nature of the data itself, which is capable of rendering political essence and legitimation via its connoted relationship with the AI systems involved. Therefore, if the political ecosystem remains materialistic by its nature and presence, it is imperative to avoid political legitimation. There exist material issues in handling political issues among state and non-state actors. It is thus important to educate and improve human society by balancing the naturalistic and interactive capabilities of both the AI systems and the humans (as data subjects). When there is a democratized balance, political legitimation can certainly be improved and re-recognized. It can also improve the scheme and content of political concerns and conversation so as to raise the standards of politics and society. Therefore, it is important to keep an unrestricted, unignored balance and equation between the right to receptivity of an AI and the right to privacy of humans.
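The propositions above note that protections for the data subject must extend before, during and after pseudonymization. As a minimal, hypothetical sketch (the function, names and values here are invented for illustration, not drawn from any instrument cited in this chapter), the following Python snippet shows keyed pseudonymization and why pseudonymization alone does not anonymize: whoever holds the key can still link and re-identify records.

```python
import hmac
import hashlib
import secrets

def pseudonymize(identifier: str, key: bytes) -> str:
    """Derive a stable pseudonym from an identifier using HMAC-SHA256."""
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# The key must be stored separately from the pseudonymized data.
key = secrets.token_bytes(32)

record = {"name": "A. Subject", "city": "Prayagraj"}
pseudonym = pseudonymize(record["name"], key)

# The pseudonym is deterministic for a given key, so records remain linkable;
# this is why pseudonymized data is still treated as personal data under
# frameworks such as the GDPR.
assert pseudonym == pseudonymize("A. Subject", key)
```

The design choice of a keyed hash (rather than a plain hash) means re-identification risk is concentrated in the key holder, which is precisely where the "before, during and after" safeguards the text calls for must operate.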

References

1. Abhivardhan. 2019. Artificial Intelligence Ethics and International Law: An Introduction. New Delhi: BPB Publications, India, 2019. 978-93-88511-629.
2. Abramovich, Giselle. 2018. Study Finds Investments In Customer Experience Are Paying Off. CMO.com. [Online] February 26, 2018. https://www.cmo.com/features/articles/2018/2/26/adobe-2018-digital-trends-reportfindings.html#gs.xoSSi8Q.
3. Adobe. 2019. 2019 Digital Trends. Adobe.com. [Online] 2019. https://www.adobe.com/content/dam/acom/en/modal-offers/econsultancy-digital-trends2019/pdfs/econsultancy-2019-digital-trends_US.pdf.
4. —. 2018. The Business Impact of Investing In Experience. Adobe. [Online] July 2018. https://www.adobe.com/content/dam/acom/au/landing/Adobe_Biz_Impact_CX_APAC_Spotlight.pdf.
5. Akula, Arjun R, et al. 2019. X-ToM: Explaining with Theory-of-Mind for Gaining Justified Human Trust. arXiv.org. [Online] September 15, 2019. https://arxiv.org/abs/1909.06907.
6. Allman, Rob. 2019. Customer experience trends in 2019. Dimension Data. [Online] 2019. https://www.dimensiondata.com/en/insights/technology-trends/customer-experiencetrends-2019.
7. Amnesty International. 2018. Up to One Million Detained in China: "Where Are They?" Time for Answers about Mass Detentions in the Xinjiang Uighur Autonomous Region Mass "Re-education" Drive. s.l.: Amnesty International, 2018. ASA 17/9113/2018.
8. Artificial Intelligence Index. 2017. Artificial Intelligence Index: 2017 Annual Report. Artificial Intelligence Index. [Online] November 2017. http://aiindex.org/2017-report.pdf.
9. BAAI. 2019. Beijing AI Principles. baai.ac.cn. [Online] May 28, 2019. https://www.baai.ac.cn/blog/beijing-ai-principles.
10. Capgemini Research Institute. 2018. The Secret to Winning Customer's Heart With Artificial Intelligence. Capgemini Research Institute. [Online] 2018. https://www.capgemini.com/wp-content/uploads/2018/07/AI-in-CX-Report_Digital.pdf.
11. Cervellati, Matteo, Fortunato, Piergiuseppe and Sunde, Uwe. 2009. Democratization and the Rule of Law. World Trade Organization. [Online] 2009. https://www.wto.org/english/res_e/reser_e/gtdw_e/wkshop10_e/fortunato_e.pdf.
12. Commission Nationale de l'Informatique et des Libertés (CNIL), France; European Data Protection Supervisor (EDPS), European Union; Garante per la protezione dei dati personali, Italy. 2018. Declaration on Ethics and Data Protection in Artificial Intelligence. 40th International Conference of Data Protection and Privacy Commissioners. [Online] October 23, 2018. https://edps.europa.eu/sites/edp/files/publication/icdppc-40th_aideclaration_adopted_en_0.pdf.
13. Turing, A. M. 1950. Computing Machinery and Intelligence. Mind, pp. 433-460.
14. Dimension Data. 2019. Customer Experience 2019: Technology Trends Infographic. Dimension Data. [Online] 2019. https://www.dimensiondata.com/insights/-/media/dd/insights/techtrends/customer-experience-2019/customer-experience-2019-technology-trendsinfographic.pdf?la=en.
15. Dutta, Saheli. 2009. Determinants of Ethnocentric Attitudes in the United States. Princeton University. [Online] 2009. https://paa2009.princeton.edu/abstracts/91531.
16. Emerging Technology from the arXiv. 2019. Data mining adds evidence that war is baked into the structure of society. MIT Technology Review. [Online] January 4, 2019. https://www.technologyreview.com/s/612704/data-mining-adds-evidence-that-war-isbaked-into-the-structure-ofsociety/.
17. European Union. 2016. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC. Official Journal of the European Union. April 25, 2016.
18. Future of Life Institute. 2017. AI Policy – China. futureoflife.org. [Online] 2017. https://futureoflife.org/ai-policy-china/?cn-reloaded=1.
19. —. 2019. The Problem of Self-Referential Reasoning in Self-Improving AI: An Interview with Ramana Kumar, Part 2. Future of Life Institute. [Online] March 21, 2019. [Cited: July 29, 2019.] https://futureoflife.org/2019/03/21/the-problem-of-self-referential-reasoning-inself-improving-ai-an-interview-with-ramana-kumar-part-2/.
20. Grewal, Jaspreet, et al. 2016. Report on Understanding Aadhaar and its New Challenges. cis-india.org. [Online] August 31, 2016. https://cis-india.org/internet-governance/blog/report-onunderstanding-aadhaar-and-its-new-challenges.
21. Hao, Karen. 2018. Establishing an AI code of ethics will be harder than people think. MIT Technology Review. [Online] October 21, 2018. https://www.technologyreview.com/s/612318/establishing-an-ai-code-of-ethics-will-beharder-than-people-think/.
22. Hartree, D. R. 1949. Calculating Instruments and Machines. New York: s.n., 1949.
23. Howard, Philip K. 1994. The Death of Common Sense: How Law is Suffocating America. New York: Random House, 1994.
24. Ikenberry, John. 2000. America's Liberal Grand Strategy: Democracy and National Security in the Post-War Era. [ed.] Michael Cox, John Ikenberry and Takashi Inoguchi. American Democracy Promotion: Impulses, Strategies, and Impacts. Oxford: Oxford University Press, 2000.
25. ITU. 2017. Human-Compatible AI: Design Principles To Prevent War Between Machines and Men. [Online] June 9, 2017. [Cited: July 27, 2018.] http://newslog.itu.int/archives/1571.
26. Khera, Reetika. 2019. Aadhaar Failures: A Tragedy of Errors. EPW. [Online] April 5, 2019. https://www.epw.in/engage/article/aadhaar-failures-food-services-welfare.
27. Koskenniemi, Martti. 2005. From Apology to Utopia: The Structure of International Legal Argument. Cambridge: Cambridge University Press, 2005.
28. Larson, Christina. 2018. Who needs democracy when you have data? MIT Technology Review. [Online] August 20, 2018. https://www.technologyreview.com/s/611815/who-needsdemocracy-when-you-have-data/.
29. Lebada, Ana Maria. 2017. Second Committee Considers Role of AI in Advancing SDGs. IISD. [Online] October 12, 2017. http://sdg.iisd.org/news/second-committee-considers-role-of-ai-inadvancing-sdgs/.
30. Looper, Christian de. 2016. DARPA thinks artificial intelligence could wring out bandwidth from the radio spectrum. Digital Trends. [Online] March 28, 2016. https://www.digitaltrends.com/mobile/darpa-ai-radio-spectrum-competition/.
31. McCarthy, John, et al. 1955. A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. Stanford University. [Online] 1955. http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html.
32. McCorduck, Pamela. 2004. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. Natick, Massachusetts: A K Peters, Ltd., 2004.
33. Ministry of Electronics and Information Technology, Government of India. 2018. Personal Data Protection Bill, 2018. meity.gov.in. [Online] 2018. https://meity.gov.in/writereaddata/files/Personal_Data_Protection_Bill,2018.pdf.
34. MIT Technology Review. A Shanghai startup's demo of its system for facial recognition.
35. Mounk, Y. 2018. The People Versus Democracy: The Rise of Undemocratic Liberalism and the Threat. s.l.: Harvard University Press, 2018.
36. NITI Aayog, Government of India. 2018. National Strategy for AI - Discussion Paper. Niti.gov.in. [Online] June 2018. [Cited: July 27, 2019.] https://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AIDiscussion-Paper.pdf.
37. Noble, Safiya Umoja. 2018. Algorithms of Oppression. New York: New York University Press, USA, 2018. 978-1-4798-3364-1.
38. Pacey, Arnold. 1999. Meaning in Technology. Cambridge: The MIT Press, 1999.
39. Paris, Britt and Donovan, Joan. 2019. Deepfakes and Cheap Fakes. New York, US: Data and Society Research Institute, 2019.
40. Rousseau, Jean-Jacques. 2017. The Social Contract. s.l.: Jonathan Bennett, 2017.
41. Statista. 2018. Robotic/intelligent process automation (RPA/IPA) and artificial intelligence (AI) automation spending worldwide from 2016 to 2021, by segment (in billion U.S. dollars). Statista. [Online] 2018. https://www.statista.com/statistics/740436/worldwide-roboticprocess-automation-artificial-intelligence-spending-by-segment/.
42. The Dialogue. 2018. Report: Intersection of AI with Cross Border Data Flow and Privacy. thedialogue.co. [Online] December 2018. http://thedialogue.co/dialogue/wpcontent/uploads/2018/12/Report-Intersection-of-AI-with-Cross-Border-Data-Flow-andPrivacy-2.pdf.
43. Thiel, Will. 2018. The Role Of AI In Customer Experience. Pointillist. [Online] 2018. https://www.pointillist.com/blog/role-of-ai-in-customer-experience/.
44. Tjoa, Erico and Guan, Cuntai. 2019. A Survey on Explainable Artificial Intelligence (XAI): Towards Medical XAI. arXiv.org. [Online] July 17, 2019. https://arxiv.org/ftp/arxiv/papers/1907/1907.07374.pdf.
45. Tucker, Ian. 2016. Genevieve Bell: 'Humanity's greatest fear is about being irrelevant'. The Guardian. [Online] November 26, 2016. https://www.theguardian.com/technology/2016/nov/27/genevieve-bell-ai-roboticsanthropologist-robots.
46. U.S. Congress. 2019. H.R.2231 - Algorithmic Accountability Act of 2019. Congress.gov. [Online] 2019. [Cited: July 29, 2019.] https://www.congress.gov/bill/116th-congress/house-bill/2231/text.
47. VDW. 2017. Policy Paper on the Asilomar Principles on Artificial Intelligence. Asilomar: s.n., 2017.
48. West, D. M. and Allen, J. 2018. How Artificial Intelligence Is Transforming the World. Brookings. [Online] 2018. https://www.brookings.edu/research/how-artificial-intelligence-istransforming-the-world/.



10

AI Ethics in a Multicultural India: Ethnocentric or Perplexed? Analysis of the Socio-cultural Elements of the Democracy

Abhivardhan1, Ritu Agarwal2

1 Chairperson & Managing Trustee, Indian Society of Artificial Intelligence and Law
2 Assistant Professor, Amity University Uttar Pradesh, India.

Abstract. The ethnographic influence and coloration of artificial intelligence goes beyond logic and reason, and bears upon both the treatment of the data subject by, and the data subject's passive exposure to, the processed information acquired and captured by the AI. Apart from technocentric biases in machine learning activities, the ethnocentric, absolutist and opaque behaviourism of artificial intelligence contributes succinctly to shaping the superposition of the capacities and aesthetic mobilities reserved by the AI and the sociological resistance and cohesion of human society. Considering the aesthetic involvement between human data subjects and AI systems, it is necessary to understand the pluralistic and multi-lithic nature of human cultures and identities. In the case of the Indian society, the aesthetic involvement of AI by design, actions and features leans towards the ethnocentric attributes of privacy and data protection signified by the West-inspired aesthetics of liberal and humanist character, since India has not yet developed a politico-legal jurisprudence over the inertial and inherently invisible aesthetics of AI-human involvement. While in the US, the EU and other D9 countries in the West the market economy approach dominates the legal policy on regulating and shaping the aesthetics of AI, the East is pursuing different, yet debatably not resistant, approaches in due calibration with a limited market economy approach. The paper devises certain politico-legal challenges to the proposed ethnocentric model of AI Ethics in the context of the social and cultural elements of the Indian democracy, and analyses the observant aspects of the post-modern problems in aesthetic human-AI involvement. The paper scrutinizes the cohesive, sociological and multi-lithic nature of the social and cultural elements of the Indian democracy, and analyses the problems that ethnocentrism causes to the aesthetics of human-AI involvement. The paper rests on suggestive conclusions in the sociocultural context of the democratic realm in India.


Keywords: Indian Society, AI Ethics, Ethnocentrism, Data Ethics, Democracy.

Introduction

The development of AI as a human artefact has been characterised by its technological and ethical characteristics. The technological characteristics of AI are definitive within the features of AI that are program-based, mathematical and logical by nature. These characteristics are controlled by the cognitive and technical establishments of AI, which are handled by means of scientific objectivity and purpose. However, beyond technological limitation and potential, artificial intelligence has an aesthetic involvement and resonance with data subjects of any nature, which influences the learning and execution capabilities and mobilities of the AI in general. Even generic operations involving ML, DL and other learning mechanisms can be subjected to biases or be prone to adversarial machine learning, and such problems affect the aesthetic component of the AI-data subject involvement. The aesthetic environment of an artificial intelligence conversing with a data subject is based on the interaction and convergence of the empathy generated by the learning and perception mechanisms of the AI and the generic empathy possessed by the data subject. Considering that the data subject may be a human or any information based on or connected to human entities, the converging empathy that influences the decision-making potential of AI is human empathy. In general, to attain efficient AI practices, it is necessary for the AI to develop and retain an aesthetic environment in which it tends to be human-centric. However, there are certain challenges to the ethical model of AI that is encouraged in behavioural economics, business ethics and political ethics. In the case of nation-states, where culture is a complex yet important instrument of the social democracy of those states, the economic, social and political functionaries and norms (MIT Technology Review) developed among the people are readily complicated and influenced.

It is proposed that the ethical model of AI advanced by the D9 states and the USA, which focuses on the ideals of a liberal democracy centred on scientific humanism (Commission Nationale de l'Informatique et des Libertés (CNIL), France, European Data Protection Supervisor (EDPS), European Union, Garante per la protezione dei dati personali, Italy, 2018; Information Commissioner's Office, 2018), fails to acknowledge the diverse spaces and modalities, specific and dynamic to nation-states, that by means of aesthetics and social engineering are different and can be normalized with time. When any ethic or ideology imposes itself on or overwhelms a region of human societies with ideas, mores, methods, legal ethics and other means, such imposition falls into the category of ethnocentrism. The problem of ethnocentrism is central to the aesthetic attributes of AI because the aesthetic, while based on the principles of a liberal democracy, (1) lacks the means to achieve economic viability through its usage; (2) drives the cultural aspect of the entrepreneurial design of human society towards monotruist ideas; and (3) disregards the sociological character of human society. The paper therefore analyses the ethnocentric model of AI Ethics and the sociological and multi-lithic nature of the democracy in India, together with the challenges that the current AI Ethics model, as endorsed by the D9 nation-states and the USA, poses for India in the domains of technology diplomacy and socio-cultural welfare. Further, the paper proposes conclusions with respect to the cultural attributes of the democracy in India that can form the aesthetic background of the AI-human environment, based on the observant attributes of the Indian society.

The Challenges to the Ethical Model of AI: Hypothesis and Modalities

The challenges to the liberal model of AI Ethics do not relate to the existential attributions of technological liberalism and humanism, but are connected with the problem of ethnocentrism, which covers an impeccable aspect of the community of decision-making state and non-state actors who wish to proceed with open and rational technology diplomacy in order to achieve public welfare. The hypothesis on the challenges to the ethnocentric model of AI Ethics is enumerated as follows:

• The legal principles that are envisioned to precede AI Ethics are postulated, created and based on certain international law instruments and policy documents by state and non-state actors from the EU, India, the US and the D9 countries. While the Beijing Approach to AI (BAAI) (BAAI, 2019) from the People's Republic of China differs with the Western alliance countries to some extent, it accepts the existential aspects of the rule-based international order and carefully posits the Chinese position (Future of Life Institute, 2017) on retaining an AI Ethics ecosystem within China;

• Most of the legislative developments, declarations, regulations, policy briefs and comments developed by the Western economies, including certain notable allies such as Japan, India and Brazil, follow the West-centric AI Ethics model, which includes the corollaries of data protection law and constitutional jurisprudence so as to embrace the libertarian model of data ethics, and is central to the neoliberal economic model of globalization, i.e., the market economy concept;

• The flaws of the current model of AI Ethics, whose essential precedents can be found in the GDPR (European Union, 2016), the EDPS Declaration (Commission Nationale de l'Informatique et des Libertés (CNIL), France, European Data Protection Supervisor (EDPS), European Union, Garante per la protezione dei dati personali, Italy, 2018), the Algorithmic Accountability Act (U. S. Congress, 2019) in the US and even the NSAI by the NITI Aayog (NITI Aayog, Government of India, 2018; 2017) in India, are that it (1) lacks economic viability through usage; (2) drives the cultural aspect of the entrepreneurial design of human society towards monotruist ideas; and (3) disregards the sociological character of a human society of complex cultures and societies;

• It is proposed that an impartial approach to deal with the cultural identity and ethics of the Indian society can be achieved by understanding and using two key characteristics of the Indian society, which in many ways fit comfortably with the constitutional machinery and the values on which the social and political democracy of India is based;

• These two characteristics of the Indian society are, in due noticeability, its (1) multi-lithic nature; and (2) sociological nature;

• The important challenges to the Indian society in socializing and adopting AI as a normalized and cohesive social ingredient (which do not cover the secular and objective state institutions of the democracy) are (a) cultural cohesion and multiplicity; (b) the multi-lithic omnipresence of complex and migrating identities; (c) the sociological roots of the Indian culture and its entrepreneurial ethics; and (d) resistance to cultural alienation due to ethnocentric values and measures.

The four key challenges to the ethical model of AI are enumerated as follows:

Cultural Cohesion and Multiplicity

The role of culture is vital in the Indian society. It is sustained by various ethnographers and historians that the Indian society does not often possess organized cultures, and that it encourages multiplicity in cultural attributions. Since the Indian cultural ecosystem is not monotruist, the system is not prone to the monotruist values of data protection ethics that are embraced and implemented by the aesthetic environments of AI. While AI becomes a utility to innovate the procedural establishments of the systems, it is worth estimating that the subjective establishments of the Indian society embrace cultural innovation, which cannot be contained by ethnocentric measures employed by the AI. The problems would arise not with the technological aspect and due rationality in the mathematical infrastructure of the AI, but in the nature of learning that AI involves. The opaque behaviour of ML is recognizable and may be improved by a technologically strong explainable artificial intelligence (XAI) (B, et al., 2019). However, in the case of multi-lithic cultures in India, the learning biases are monotruist, polarized and limited, which may cause repercussions in general.

Multi-lithic Omnipresence of Complex and Migrating Identities

The social components of the Indian democracy are often influenced by various multiplying cultural realms (Craig) (OHCHR, 2011), which spread and become dynamic with time. The extent of creativity in such components, which are resembled in the concepts of caste, religion, financial and economic status, creed and other categorial realms, is real in the social democracy of India. In many sources of cultural heritage, whether tangible or intangible, we find multiple references to different realms and concepts, which itself shows a metaphysical convergence of complex and migrating identities. The current AI realms designed and used in India in the normal lives of people (where secularization and neutralization of data is not necessary or is not foreseen) would not be put into objective use until the learning and executive mechanisms are well-trained with the aesthetic policy of the people.

Sociological Roots of the Indian Culture & its Entrepreneurial Ethics

Historians note that the Indian culture is largely sociological and disorganized, and that its ethos of being open to change and coalescence with foreign ideas is much more comfortable in comparison with the monotruist ideas of the West (Stanford Encyclopedia of Philosophy). Further, these sociological roots of the Indian culture are multipliable and can normalize if they are not rudimentary and are subjected to rationalization and neutralization. This is also seen in the Indian way of entrepreneurship and skill development ethics, evidence of which can be found in various texts on human life and ethics, one notable example being the Arthashastra by Chanakya (Chanakya, 1992). The entrepreneurial ethics followed in India is based on practical roots, and also encourages newer avenues through a pragmatic fusion with the Western ideals of scientific humanism and liberalism, with its own reservations gained and normalized.

Resistance to Cultural Alienation due to Ethnocentrism

It is common among societies experiencing the cultural, social and economic repercussions of globalization to feel that their social and cultural values are suffering on the verge of multiculturalism and ethnocentrism. Such societies are sometimes opposed to multicultural and globalist values, and this is seen in India as well. However, the resistance to globalism and ethnocentric values in India is quite different from that of monotruist societies in the West.

Analysis of the Sociological and Multi-lithic Characteristics of Democracy in India

It is proposed that the social democracy of India resembles two key and special features that form the basis of the challenges discussed in the previous section. Furthermore, it is necessary to understand the sociological and multi-lithic characteristics in order to define the politico-legal redemptions that may play a strategic role in addressing those challenges and to infer solutions to redefine the AI Ethics policy in India.

Sociological Nature of the Indian Society

The sociological nature of the Indian society is based on the grassroots of cultural humility and servility. People in India have for ages subscribed to reflective values, and they have often reflected the same in their own poly-truistic traditions regardless of identity differences. Their poly-truistic nature seems indifferent and is not one to be strictly regimented when understood in a relevant manner. Certain commonalities of the sociological nature of the Indian society are enumerated as follows:

• The sociological set-up of the Indian society is open to unorganized traditions and practices, which may not comply with the monotruist approach to data ethics and AI-related aesthetics in the legal and social issues related to such technology;

• The Indian society is subject to populism, and the populist measures adopted by administrative and bureaucratic authorities (The legacy of Jean Bodin: absolutism, populism or constitutionalism?, 1996; Blokker, 2017), regardless of partisan lines and identity politics, show that either the element of pragmatism among the Indian people that leads to the absorption and adaptation of required changes is disrupted, or constitutional and social redemptions are rigorously created, which in any case is not suitable for the economic and individual development of the people. In the case of AI, it is proposed that a bigger AI ecosystem would be susceptible to repercussions and would thus affect India's open, technology- and knowledge-driven society and its diplomatic stand at different forums;

• While India is a developing country, the role of urban and semi-urban areas in adapting to and normalizing AI would be less challenging as compared to rural areas (NITI Aayog, Government of India, 2018). However, if the entrepreneurial design put into practice is not disseminated among indigenous communities in India, it would be unreasonable for disruptive technologies to become a disruptor and facilitator in the conventional infrastructure of the Indian economy and polity (OHCHR, 2011);

• Literacy is another problem in India, and it would affect the preservation, normalization and replenishment of Indian cultural and social values. Since development and literacy are required to be coherent and complicit with each other for rural and semi-urban economies, it is necessary that innovative schemes are encouraged with the approach of socializing and normalizing literacy. There are certain innovative schemes, such as the National Strategy on AI by the NITI Aayog (NITI Aayog, Government of India, 2018), which can seek social innovation by resolving the grassroots of the Indian economy.

Multi-lithic Nature of the Indian Society

Monolithic societies are less complex than multi-lithic societies, and their social and politico-legal problems are accordingly fewer. In the case of India, however, while the politico-legal problems in line with the multi-lithic cultural identity of India are complex, the sociological nature of the Indian society enables the people to act reflectively and cohesively. The multi-lithic nature of the Indian society, though complex in practice, is not limited, and should not be concluded to be insolubly complicated. The poly-truistic nature of the social democracy in India in general enables people to resort to the least and to afford creative and passive solutions to alleviate social, legal and political problems. There are instances in various legal, social and political issues where people have shown signs of these methods. Certain commonalities of the multi-lithic nature of the Indian society are enumerated as follows:

• The multi-lithic nature of the Indian society is capable enough to enable people to make strategic decisions in complex societies. Further, the Indian people are keen on keeping social and cultural precedents in their activities and decisions, which itself is not understood by the current technology that is designed for the people;

• Monotruist values are not antithetical to the Indian society. However, the rational basis of monotruist values is respected and implemented in the Indian society, which shows the role of multiculturalism in the Indian society. Comprehensive policy-making in the field of AI Ethics that represents a strategic and normalizing treatment of cultural values is not available in India;

• India's alignment with the Western economies is based on key values and interests that shape its foreign and public policies. While it is undeniable that there is a lack of such cultural values as can be normalized and re-transformed in India, their repercussions would exist and would be seen in the political and social problems that populism presents to people. The multi-lithic archetypes of the Indian people, if properly adjudged, can be useful for the AI to learn and unlearn data properly and to deal with complex issues. Since there is no reference in the Indian realm as of now, public welfare initiatives to normalize cultural repercussions would be required.

Conclusions

The conclusions provided are central to the assumption that the Indian culture is not antithetical to the Western values of scientific humanism and liberalism. However, the conflict between ethnocentric and indigenous values is the key reflection in all the conclusions proposed below:


Artificial Intelligence and Policy in India, Volume 2


• Tangible and intangible identities must be liberalized and modernized through proper channelling by civil societies and apolitical organizations, to avoid culture-alienating public diplomacy;
• Proper education is essential to preserve and maintain the purpose of intangible and tangible cultural heritage in Indian societies. Data in cyberspace is dynamic and has not yet settled into a received historical archetype among the Indian people, a point that civil rights authorities and education policy-makers should anticipate;
• The aesthetic environment of AI can be unlearned and relearned properly by exposing AI to complex, multi-lithic data, with open and pragmatic ethnographic measures, in order to enable innovative facilitation for indigenous communities and help lessen the digital divide in India;
• Populism must not be dismissed as a passing political phenomenon: it has a phase and can cut through the social and cultural components of Indian democracy. To establish AI as a neutral, secularised but pragmatic unit of the economic and social factions of society, the disruptive social and individualistic repercussions of populism must be brought into discourse, so as to normalize India's technology diplomacy stance, channel peaceful and reasonable drifts in policy, both foreign and public, and retain India's role as a multi-aligned and stabilizing power;
• It would be important to discern and understand the parameters of diversity and representation that may accrue to data subjects and to AI systems involved in complex identity-related operations.
At this stage, however, the conclusions chiefly encourage further academic enquiry and analysis: a proper AI risk assessment is needed to clarify the review considerations around the diversity and representation of Indian society, and such an assessment would be central to assigning better categories and rosters that AI could use to gauge cultural and aesthetic mobility, minimizing bias and risk in the treatment of identities and their tangible and intangible manifestations.




11 The Disruptive Unison of AI and Blockchain: A Critical Review

Arundhati Kale¹

¹ Research Contributor, Indian Society of Artificial Intelligence & Law, India.
research@isail.in

Abstract. Artificial intelligence, in its simplest terms, is machine-based intelligence, which stands in contrast to human and animal thinking, yet is not so different either. AI is chiefly used for planning, decision making and support, natural language processing, data processing, perception and optimization. Blockchain, on the other hand, is a disruptive technology that functions as a distributed ledger: decentralised blocks that record each and every transaction in an immutable, secure manner. It eliminates the need for third-party interference by using complex algorithms to maintain secure networks. Both technologies may be considered highly disruptive and revolutionary, and they complement each other: where AI lacks traceability, blockchain provides records of AI decisions; where blockchain may seem too rigid, AI provides flexibility and eases the use of blockchain by simplifying accessibility for users. This paper analyses how AI and blockchain are integrated, keeping in sync with three aspects of human intelligence, and describes how AI is being used in areas such as security, journalism, smart contracts and law. Two models of such integration are discussed, to show how developers are trying to integrate AI with blockchain. AI is increasingly used to maintain data privacy alongside national security, and myriad areas such as waste management, health care and disaster management are seeing its rise. Since AI depends on information to learn and function, it is imperative that a solid informational base be established for AI to flourish; this means sieving out irrelevant information and retaining information on various models of similar incidents, from varied viewpoints.



Why Blockchain is a Disruptive Technology (C.R., 2018; Spyros Makridakis, 2018)

Blockchain is hailed as one of the most disruptive technologies in decades, since it creates a paradigm shift from a centralised, server-based system to a cryptographically transparent network: from an Internet of Information to an Internet of Value. On the Internet of Information, trustworthiness cannot be established without the approval of an intermediary. The Internet of Value, based on blockchain technology, instead fosters trust through a visible peer-to-peer network rather than a centralised server. Not only does blockchain function as a trustless system; it replaces the need for third-party intermediaries to verify transactions (no manual verification is required). This reduces costs and eases the general complexity of financial transactions. With blockchain, cryptology replaces third-party intermediaries as the keeper of trust, with all participants running complex algorithms to certify the integrity of the whole. Automation of business operations becomes possible, since financial transactions stored on the network feed future transactions. The network is easier, more transparent and more secure than cloud storage, since it lets users access the information directly, so real-time auditing becomes easy and accurate.

AI and Blockchain: A Disruptive Integration
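As context for the discussion that follows, the append-only, tamper-evident ledger structure underlying blockchain can be sketched in a few lines. This is an illustrative Python sketch, not any production blockchain; all names are hypothetical, and real systems add consensus, signatures and networking on top of this core idea.

```python
import hashlib
import json


def block_hash(block):
    """Deterministically hash a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()


class Ledger:
    """An append-only chain of blocks; each block commits to its
    predecessor's hash, so any retroactive edit is detectable."""

    def __init__(self):
        self.chain = [{"index": 0, "prev_hash": "0" * 64, "data": "genesis"}]

    def append(self, data):
        block = {
            "index": len(self.chain),
            "prev_hash": block_hash(self.chain[-1]),
            "data": data,
        }
        self.chain.append(block)
        return block

    def verify(self):
        """Recompute every link; a single altered block breaks the chain."""
        return all(
            self.chain[i]["prev_hash"] == block_hash(self.chain[i - 1])
            for i in range(1, len(self.chain))
        )
```

Each participant can run `verify()` independently, which is why no central intermediary is needed to certify the record's integrity.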

While blockchain faces issues such as scalability and efficiency, AI suffers from a lack of trustworthiness, vagueness and inexplicability. The intertwining of these highly disruptive technologies was inevitable: each compensates for the shortcomings of the other, and together they complement each other in revolutionizing the coming digital era. Blockchain would help ensure trustlessness, privacy and traceability for AI; in turn, AI would bring better security, scalability, personalization and effective governance. Geoff Mulgan, the author of "Big Mind: How Collective Intelligence Can Change the World" (Mulgan, 2018), believes that creating a positive future through collective intelligence requires understanding what happens when thought occurs at large scale, in crowds. For the creation of viable technology, three kinds of intelligence occur in the human mind at almost all times:
1. Data analysis, memory and prediction;
2. The merger of multiple kinds of intelligence (empathy, observation, intuition, etc.) to create new thought patterns;
3. Evaluation of the basis of how one thinks, developing wisdom and judgement.

Keeping the above in mind, the amalgamation of AI applications with blockchain networks, while complex, acts as a booster for both disruptive technologies.

Autonomic Computing. One of the focal goals of AI is to fully (or partially) enable autonomous operations, whereby multiple small computer programs (also considered as multiple
intelligent agents) observe their constituent environments, preserve their intrinsic states and perform specified actions. For this automation to occur, modern computing systems must handle massive heterogeneity at all vertical levels (including data storage, data sources, data processing, etc.). Enabling these agents at multiple levels facilitates not only the handling of heterogeneity but also inter-layer and intra-layer operability across systems. Blockchain technology ensures operational decentralisation and keeps a permanent trail of interactions, data and applications, which ultimately allows for a fully decentralised autonomous system.

While data is considered power, and gold, in today's data-governed economy, getting hold of such data is problematic; it is often out of reach unless one partners with large corporations. This scarcity deters AI researchers from collecting the information needed for AI development, and it carries the added risk of lost privacy. But the very nature of blockchain provides a solid base for data sharing, since it boosts transparency and accountability regarding whose data is accessed, when and by whom (Thang N. Dinh). Blockchain allows users to control their own data while ensuring security at the same time. The economic trade-offs inherent in the information distribution necessary to generate decentralised consensus are extremely relevant from a practical and regulatory standpoint (Lin Willam Cong). With blockchain, consensus is reached by distributing transactional information to the rest of the population on the blockchain, with the encrypted key remaining with the owner, so that all recorded information is open for public perusal. Personal data can thus be masked and yet remain visible to the public. While authorities gather data for "monitoring", it is no secret that personal information may be used for more than that.
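One simple way such masking can work, sketched here as an assumption rather than as any specific protocol, is a salted hash commitment: only the digest is published for public perusal, while the raw data and the salt stay with the owner, who can later prove what was committed.

```python
import hashlib
import os


def commit(personal_data, salt=None):
    """Return (digest, salt). Only the digest goes on the public ledger;
    the raw data and the salt remain with the data owner."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.sha256(salt + personal_data.encode()).hexdigest()
    return digest, salt


def check_disclosure(digest, personal_data, salt):
    """Anyone holding the published digest can verify a later disclosure,
    even though the ledger never stored the data itself."""
    return hashlib.sha256(salt + personal_data.encode()).hexdigest() == digest
```

The random salt prevents onlookers from guessing the data by hashing candidate values, which is what makes the public digest a mask rather than a leak.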
For example, the aftermath of terrorist attacks typically sees an increase in the authorisation of military force and the disabling of communications. In many cases, such cuts and expansions linger for an extended and unjustified amount of time, which suggests that temporary surveillance outlasts the emergency that prompted it. Governments thus effectively aim to regulate with a two-pronged sword: banning or regulating the internet, and centralising a decentralised system. What central authorities often fail to realise is that blockchain itself can be used to uncover unscrupulous activity and support law enforcement (Valkenburgh, 2015). With cryptocurrency, no centralised institution aggregates every transaction under a particular name or address; as a result, no single entity possesses complete knowledge of a person's spending. A crypto-asset holder can use this for their benefit: by putting the data on the blockchain network, they ensure that no single database reveals sensitive personal data. However, returning to the basics of the technology, such transactions are not completely anonymous; they are what one can term "pseudo-anonymous", since all transactions of a specific public address are recorded on the public ledger, free for viewing. The name of the
user remains hidden, and a dummy identifier is allotted instead. This presents an exceptional option for law enforcement officials: they are free to investigate all transactional activity on the ledger without any identity being disclosed. Once they have identified the benign accounts, they may simply eliminate those from the enquiry, without any direct personal investigation against the persons behind them. It becomes easier to sieve through public records while maintaining anonymity, flagging only those transactions that seem suspicious; de-anonymization of an address can then be sought once suspicious transactions have been established. Two objectives are thus accomplished simultaneously:
• Enforcement costs are reduced by focusing on a field of suspects narrowed down in advance of the investigation, limiting taxpayer money and resources to real rather than potential threats;
• Innocent parties are assured of their privacy, which is never violated, while their pseudonymous accounts' good names are cleared.

AI too is being developed to ensure security: emerging fields show possible algorithms that work with data while it remains encrypted. Since blockchain functions through a public distributed-ledger system, anonymity is not absolute. Because of this public nature, bitcoin, for example, has proven less appealing to criminals, who tend to lean on centralized digital currencies with closed books or systems. Thus Edward Lowery, Special Agent for the United States Secret Service, testifying before the Senate Committee on Homeland Security and Governmental Affairs, stated: "within what we see in our investigations, the online cybercriminals, the high-level international cybercriminals we are talking about, have not, by and large, gravitated towards the peer-to-peer cryptocurrencies such as Bitcoin."
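The screening approach described above, narrowing the field to pseudonymous addresses whose activity crosses a threshold before any de-anonymisation is sought, can be sketched as follows. This is a deliberately simplistic heuristic on hypothetical data; real chain-analysis tools use far richer features.

```python
from collections import defaultdict


def flag_addresses(ledger, threshold):
    """Sum outgoing transaction volume per pseudonymous address and flag
    the heavy movers. Only flagged addresses become candidates for
    de-anonymisation; all other accounts are cleared without any direct
    personal investigation."""
    totals = defaultdict(int)
    for tx in ledger:
        totals[tx["from"]] += tx["amount"]
    return sorted(addr for addr, total in totals.items() if total > threshold)
```

Because the ledger is public, this triage costs investigators almost nothing, while unflagged users never have their identities touched.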

With blockchain and its verification mechanism, data sharing becomes smoother and more stable. AI can be used for verification as well, either as support in building credible content or as a dedicated function. Robo-checking is already used to check claims against various databases of information; if such information is uploaded to a pre-verifying, secure network, the problem of authenticity is reduced, and the separation and generation of credible content becomes easier. Exploitation via misinformation in machine learning and AI systems becomes negligible when AI uses blockchain alongside its own functions to teach itself. Integration of information on the blockchain is seen widely in the medical and journalism sectors, where accuracy and transparency are highly sought.

Optimization. Jeremy Bentham thought that a public forum reserved for resolving disputes has
three chief attributes: (i) it assists in uncovering the truth; (ii) it helps education; and (iii) being subject to public scrutiny, it strengthens the discipline of judges. He advocated placing judges in the public eye and expanding the audience by ensuring the information reached even those who were not present. Studies of collective intelligence have tried to ascertain how the aggregation of individual judgments within a group leads to collective judgments. The Athenians mastered both the aggregation procedure and the resulting collective judgements, combining these attributes into an effective epistemic system: their democracy and trials were "fuelled by incentives, oiled by low communication costs and efficient means of information transfers, and regulated by formal and informal sanctions". Three pillars of international arbitration make it the most suitable dispute-resolution mechanism for the inevitable novel disputes that will emerge from the increased use of decentralised digital assets: neutrality, cross-border enforceability, and the flexibility to tailor specific arbitration rules (New Technologies and Arbitration, 2018). Take, for example, Kleros, which purports to "solve the problem of the rise in disputes of the global, digital and decentralised economy in areas that cannot be solved by state courts and existing alternative dispute resolution methods […] by using blockchain and crowdsourced specialists to adjudicate disputes in a fast, secure and affordable way. […] Crowdsourcing taps into a global pool of jurors. Blockchain technology guarantees evidence integrity, transparency in jury selection and incentives for honest rulings". Kleros is attempting to develop a quasi-judicial system complete with a general court and two levels of sub-courts; the would-be jurors who wish to participate are selected at random.
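Random juror selection in crowdsourced-court designs is typically weighted by the tokens a candidate has staked, which supplies the honesty incentives the Kleros quotation mentions. The sketch below is a simplified assumption about that mechanism in Python, not Kleros's actual on-chain drawing procedure; all names are hypothetical.

```python
import random


def draw_jurors(stakes, n, seed=None):
    """Draw n distinct jurors, each pick weighted by staked tokens.

    `stakes` maps a juror id to a staked amount; a juror's chance of
    selection is proportional to stake, so dishonest rulings (which risk
    losing stake) are economically discouraged."""
    rng = random.Random(seed)
    pool = dict(stakes)
    chosen = []
    for _ in range(min(n, len(pool))):
        jurors, weights = zip(*pool.items())
        pick = rng.choices(jurors, weights=weights, k=1)[0]
        chosen.append(pick)
        del pool[pick]  # a juror sits only once per panel
    return chosen
```

Because the draw is random but stake-weighted, bribing a panel in advance is hard: a briber cannot know who will be selected.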
Another distinguishing feature of Kleros is that it has put in place both an appeal system and an anti-bribery system; automating these systems through possible AI incorporation would maximise their security and transparency. But since the randomly selected panel of jurors must base their judgment solely on evidence stored on the blockchain, instead of hearing arguments presented, Article V(1)(b) of the New York Convention may obstruct the recognition and enforcement of an award: recognition and enforcement may be refused where the party against whom the award is invoked was not given proper notice of the appointment of the arbitrator or of the arbitration proceedings, or was otherwise unable to present his case (New York Convention, 1958). The difficulty is not that the evidence is stored on the blockchain, but that it is documentary evidence offered in place of an opportunity for the parties to present their case in person. The International Bar Association ["IBA"] Rules on the Taking of Evidence in International Arbitration state that a 'document' is a "writing, communication, picture, drawing, program or data of any kind, whether recorded or maintained on paper or by electronic, audio, visual or any other means". This broad definition encompasses, among numerous things, contracts written entirely in code and putative decisions rendered in codified form. Article 19(1) of the UNCITRAL Model
Law on International Commercial Arbitration ["Model Law"] states that "subject to the provisions of this Law, the parties are free to agree on the procedure to be followed by the arbitral tribunal in conducting proceedings". Further, Article 19(2) of the Model Law states that "failing such agreement, the arbitral tribunal may, subject to the provisions of this Law, conduct the arbitration in such a manner as it considers appropriate", and that the tribunal has "the power to determine the admissibility, relevance, materiality and weight of any evidence". Article 19.1 of the Singapore International Arbitration Centre ["SIAC"] Rules provides that "the tribunal shall conduct the arbitration in such manner as it considers appropriate, after consulting with the parties, to ensure the fair, expeditious, economical and final resolution of the dispute". Article 19.2 of the SIAC Rules goes on to state that "the Tribunal shall determine the relevance, materiality and admissibility of all evidence [and] is not required to apply the rules of evidence of any applicable law in making such determination". Similar provisions can be found in the Hong Kong International Arbitration Centre Rules, the London Court of International Arbitration Rules, and the International Chamber of Commerce ["ICC"] Rules. It can therefore be gathered that arbitrators enjoy a significant degree of freedom in establishing the facts of a case, with no specific mention of, or restriction on, the means by which they may do so (New Technologies and Arbitration, 2018).

Popularly, AI applications are used above all to find optimum solutions: the best solution among all possible ones. Since modern AI applications function in dynamic environments (including ubiquitous ones, constrained ones such as mobiles, and geographic ones such as wireless local area networks), such optimization strategies operate under constrained or unconstrained environments.
These strategies find the optimised solution by scouring the most relevant data sources in pervasive environments. Current optimization strategies execute under centralised supervision and consider system-wide or application-wide optimization solutions, which means a great deal of irrelevant data is processed, degrading system and application performance. Decentralized networks like blockchain allow for wider research and development opportunities and promote system performance by processing only highly relevant data; this is especially useful when multiple strategies with different optimization objectives must run simultaneously across systems. Legal-tech firms are meanwhile providing case management and predictive services, deploying AI to analyse precedents so that the likely outcome of proceedings can be estimated statistically. AI can gain the capacity to propose settlements based on similar disputes and to generate more comprehensive data about trends, giving third-party funders more incisive and conclusive insights to support their decisions on which disputes to fund. Furthermore, AI's natural language processing enables the analysis and extraction of the meaning of words from an effectively unlimited number of documents relevant to the cases at hand, a significant improvement over present technology that rakes through documentation hunting for inputted keywords. The NLP technology is enabled to extract
meaning from written and oral materials, which reduces time and costs, particularly in discovery.

Planning. Planning strategies are implemented by AI in order to collaborate with other applications and to solve complex issues in new environments. They support autonomous operations in terms of operational efficiency and resilience by taking a current input state and executing a variety of logic- and rule-based algorithms to reach pre-established goals. This planning strategy is again run in a centralized format, which makes planning complicated and time-demanding. Blockchain can augment AI here, promoting robust strategies with clear tracking and provenance history, and it comes in handy for devising essential and immutable plans. For example, AI is being used to manage urban waste without placing any additional burden on citizens: auto-sensing trash receptacles monitor waste levels in real time, and once a bin is full it automatically notifies the city's waste management department to send a garbage truck. This is not only more efficient (multiple trips at set times are avoided and bins are emptied as needed); it also reduces pollution levels and street congestion and improves air quality. The data amassed from these operations can further be stored on the blockchain, providing a base for management to analyse, forecast and devise new waste management plans.

Perception. AI bots and applications continually collect, inspect, select and organise data from their environments via centralised perception strategies, which amount to colossal data collection. A decentralized perception, by contrast, allows the collection of relevant data from multiple viewpoints. Blockchain-based decentralisation allows perception trajectories to be traced, collected data to be transferred securely, and data to be stored immutably. This is ingenious because the AI itself does not need to collect data streams repeatedly.
Given the permanent nature of blockchain, only the trail of successful perceptions is stored (Khaled Salah, 2019). In the field of healthcare, the primary role of AI is to generate models for image analysis and recognition, for example Computer-Aided Diagnosis (CAD) systems. CAD systems support decisions in the detection and interpretation of diseases, especially cancer, and learn to produce reliable analyses via feedback loops. These loops need, and generate, vast amounts of data, which can be successfully extracted and fed into the blockchain system. The blockchain can ensure the privacy of such data while also ensuring secure and reliable operations over CAD. For example, the details of doctors from different institutions can be uploaded onto a consortium blockchain in the form of hashes; the consortium then verifies and validates these hashes in a decentralized format to determine whether a user should be trusted with access to the data, thereby preventing malicious intrusions.

Reasoning.
Logic programming in AI allows it to develop inductive or deductive reasoning rules for reaching decisions. Centralised reasoning hampers such growth, since it caters to a more generalized global behaviour across all AI application components. Blockchain-based distributed reasoning facilitates the personalisation of reasoning strategies, which could be used more during perception, learning and model deployment. Blockchain further stores and makes available unforgeable reasoning processes, which assists in the execution of similar reasoning strategies (Khaled Salah, 2019).

Smart Contracts. While there is no concrete definition of smart contracts, their core functionality makes it possible to describe them as digital contracts allowing terms contingent on decentralized consensus that are tamper-proof and characteristically self-enforcing through automated execution. They facilitate transfers at minimal cost and even automate value transfers based on a decentralised consensus (Khaled Salah, 2019). Traditionally, consensus on a contract is provided by courts, notaries, governments and the like, which is labour-intensive, time-consuming and prone to tampering; such resolution parties bring in excess human intercession and the added pressures of volatility, uncertainty and cost. Without decentralised consensus, the party providing centralised consensus holds a data monopoly and thus enjoys huge market power. When the Incomplete Contract Theory (Incomplete Contracts and Control, 2017) is taken into account, further problems arise, since in practice contracts cannot specify and provide for every possible contingency: future contingencies are often obscure, and may not even be foreseeable. Moreover, contracting parties may renegotiate the terms where doing so proves suitable and mutually beneficial [Grossman and Hart (1986), Hart and Moore (1990), and Hart (1995)]. Hence, an immediate repercussion of the incomplete-contracting approach is the hold-up problem.
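The self-enforcing character described above can be illustrated with a toy escrow: funds lock when the agreement forms and release automatically once the agreed condition is reported, with no intermediary deciding the outcome. This is a Python sketch under stated assumptions (an honest delivery oracle, a single condition), not a real smart-contract platform.

```python
class EscrowContract:
    """A toy self-enforcing contract: the buyer's funds are locked on
    agreement and released to the seller automatically once the delivery
    condition is satisfied."""

    def __init__(self, buyer, seller, amount):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.locked = True
        # Buyer's funds leave their balance immediately (the "lock-in").
        self.balances = {buyer: -amount, seller: 0}

    def confirm_delivery(self, oracle_says_delivered):
        """Automated execution: pay out if and only if the condition holds."""
        if self.locked and oracle_says_delivered:
            self.balances[self.seller] += self.amount
            self.locked = False
        return self.balances
```

Note that the hard part in practice is the oracle: the code enforces the condition flawlessly, but deciding whether delivery really happened remains an off-chain question.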
The intention of smart contracts is the automation of business by reducing paperwork and eliminating intermediaries; all sorts of wills and legal documents could thus be processed using smart contracts (Towards a Self-Learned Smart Contracts, 2018). AI itself can be merged with smart contracts through rules and policies assigned on the chains, which implies mixing self-learning AI with smart contracts while assuring that the contracts remain legally binding and adaptive. The key to the creation and execution of smart contracts lies in AI's deep comprehension of such rules and policies, making the AI more effective and better able to learn. The more data AI has, the better it can predict outcomes and improve its recommendation system: it can learn to review archived contracts (to see how parties contracted in the past and to identify the factors that were and were not previously considered) and then recommend language and clauses that make for a more secure agreement. A common trustless ledger allows settlements to take place more quickly, with fraud and lemons problems mitigated. Smart contracts increase not only contractibility but
also facilitate the exchange of money, property, shares, or anything of value algorithmically, in an automated and conflict-free manner. The decentralised-consensus format of the blockchain has the potential to reduce non-contractible contingencies and can augment contract enforceability on certain contingencies, such as a lock-in requirement on fund withdrawal or automated payment upon a party's successful compliance. Interpretation of language is the crux of contract drafting: given the self-learning nature of AI, it is quite possible for the essence of a contract to be lost when, say, the term "execute" is interpreted as killing rather than as carrying out an agreement. Natural language processing researchers are therefore trying to develop AI so that contract analysis and drafting can be automated. It is plausible that AI may be used to create at least partially self-executing, automated code and parameters by parsing a traditional human contract in order to generate a smart contract; techniques adopted may include shallow semantic processing, named-entity recognition, word-sense disambiguation, and so on. Language that refers to finite and definable things (dates, times, quantities, etc.) can be effortlessly integrated into a smart contract. AI can be wired to negotiate price terms using game-playing algorithms, with the static parameters (gap fillers, as they are known) adjusted dynamically. Through supervised or unsupervised learning, AI may mine the necessary data for the best price-to-quality ratio and then automate a purchase at the moment that ratio peaks; a self-executing contract can further accommodate changes in the quality of deliverables based on prices negotiated by the AI. While the use of AI in smart contracts in a robust style for real-world enterprise applications has not been actively pursued, the possibilities are ample.
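The dynamic gap-filler idea above, binding an open price term to the best available price-to-quality ratio at execution time, reduces to a one-line selection once offers are gathered. A minimal sketch on hypothetical offer data:

```python
def best_offer(offers):
    """Pick the offer with the highest quality-to-price ratio.

    In a smart contract, an open price term (a 'gap filler') could be
    bound dynamically to a choice like this at execution time, rather
    than fixed when the contract is drafted."""
    return max(offers, key=lambda o: o["quality"] / o["price"])
```

A contract clause would then trigger the automated purchase from the selected supplier, with the negotiated parameters recorded on the ledger.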
Since blockchain is used for cloud storage and data sharing, AI can readily analyse such data, correlate it with current trends and even suggest contingencies that might arise in the future (Bailey, 2018). The use of AI in smart contracts can thus act as a natural extension of statutory gap fillers, providing dynamic yet specific parameters tailored to the parties' needs. Considering that smart-contract technologies make more adaptive decisions using AI (for example, the automatic gap fillers in agreements noted earlier), the UETA recognizes this dynamic behaviour, noting that an automated transaction is one performed or conducted by electronic means in which machines are used without human intervention to form contracts and perform obligations under existing contracts. Such a broad statement is essential given the diversity of transactions that can be carried out. In practice, courts may deem certain terms incapable of being included in an agreement (as a matter of law) via an electronic agent. In cases where AI adapts and does not use exact inputs, a severability clause, along with a binding dispute-resolution provision, should be agreed between the parties, before arbitrators qualified to understand AI and smart contracts; courts have upheld such severability clauses. The Federal Arbitration Act and the New York Convention permit international enforcement,


Artificial Intelligence and Policy in India

159

and such legal tools should be utilised in smart contracts enabled by AI. An arbitration clause would come in handy to avoid liability for the smart contract platform provider and to prevent a user from complaining about the lack of a proper offer or acceptance (due to unagreed or indefinite terms). Such a contract would also help in disclosing that AI is imperfect and its behaviour unpredictable.
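The dynamic gap-filler idea above (a “reasonable” term computed from market data rather than a static statutory default) can be sketched as follows. The function and its fallback value are purely illustrative assumptions, not a statement of any statute’s actual rule.

```python
def fill_price_gap(agreed_price, recent_market_prices, fallback=100.0):
    """Dynamic gap filler: if the parties left the price term open,
    substitute a price computed from recent market data instead of a
    static statutory default. Illustrative only."""
    if agreed_price is not None:
        return agreed_price  # the parties fixed the term; nothing to fill
    if recent_market_prices:
        # dynamic yet specific: average of recently observed prices
        return sum(recent_market_prices) / len(recent_market_prices)
    return fallback  # static, statutory-style default as a last resort

# Open price term, filled from the last three observed market prices
price = fill_price_gap(None, [98.0, 102.0, 100.0])
```

A smart contract could recompute such a parameter at execution time, which is the adaptive behaviour the UETA language quoted above accommodates.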

Case Studies on Blockchain Integration
QWICSchain (Brune, 2020). The open-source QWICSchain blockchain framework provides a more traditional, enterprise-class blockchain implementation built on Java Enterprise Edition. The Groovy language is popular among developers, especially in the financial sector, because of the ease with which it handles decimal numerals. The QWICSchain framework uses the Java Virtual Machine-based Apache Groovy language to facilitate the implementation of smart contracts. It uses machine learning models (which have been trained on several platforms) off the blockchain (“off-chain”) and analyses data both within a blockchain network and from other external sources. These trained models are then utilized by smart contracts which run on the blockchain (“on-chain”) to make inferences. Thus, an existing Groovy-based smart contract is made to incorporate trained machine learning models as part of the smart contract code on the chain, thereby ensuring transparency and immutability of these models. In QWICSchain, smart contracts are written in the Groovy language. Every smart contract sits on its own dedicated contract account on the distributed ledger. This account is fully controlled by the smart contract and may receive or send transactions; smart contracts therefore act like automated account owners. This behaviour is somewhat similar to other blockchain frameworks, e.g. Ethereum, but differs in that smart contracts are written in an established scripting language, not a proprietary notation. Smart contracts can change the state of the ledger only by sending transactions to other accounts. While they are allowed to read the complete state, any change may be performed only by sending transactions. Additionally, every smart contract owns a dedicated storage to maintain its state across multiple transactions. 
It is implemented as a key-value store for objects of any kind, with the keys being string identifiers. This basic structure has now been extended to incorporate the use of trained PFA models inside such smart contracts. Newer approaches allow a trained prediction model inside a smart contract and ensure that the model and its parameters are stored in a readable way on the blockchain. This makes all decisions made by a contract using such models transparent and auditable by all participants of the network.
MATRIX: The Start of Blockchain 3.0 (Matrix AI Network). Bitcoin represented the first generation of blockchain technology,



popularly recognized as Blockchain 1.0. At the start, the initial design of blockchain could not anticipate the large-scale and varied application of this technology and was limited to mining and the establishment of distributed ledgers, resulting in low commercial scalability and efficiency. To tackle these shortcomings, Ethereum emerged as a system with smart contracts at its core, which provided an interface more apposite for large-scale commercial applicability with greater efficiency. This was considered the second generation of blockchain technology, or Blockchain 2.0. The onset of MATRIX is seen as a revolutionary new generation, dubbed Blockchain 3.0, aiming to integrate artificial intelligence within blockchain, thereby dynamically updating the parameters for blockchains so as to create a self-evolving network. It attempts to overcome four fundamental problems in blockchain: low transactional speeds, lack of security, difficulty of use, and wasted resources. To solve these, it builds a blockchain-based economy powered by the three pillars of AI: data, computing power, and AI models. For example, MATRIX’s Intelligent Contracts use natural language programming and adaptive deep learning-based templates to auto-code contracts, allowing users to choose from pre-set smart contract templates executable over the blockchain network via voice or text inputs, i.e. natural language. Its AI-powered engine further flags prospective loopholes and malicious intent while certifying robustness against attacks using a generative adversarial network. Smart contracts, by their very nature, depend on third-party host systems to function. Further, the execution time of a contract is not guaranteed, since programs run on various computers in a distributed manner. While this ensures transparency and decentralisation, it poses security threats as well. 
Through the employment of formal verification technology, however, MATRIX makes contracts more secure. It does not limit itself to one particular chain (it is not solely dependent on either private or public chains); rather, it seamlessly integrates the two, allowing for multi-chain collaboration and adaptive self-optimization of the network. It thus introduces a separate control chain, which not only merges public and private chains but also ensures the deployment of security controls. It evolves with the user, since tuning of blockchain parameters is possible without creating a hard fork.
Nebula AI (Nebula AI Inc., 2018). Nebula AI Inc. (Nebula AI) is committed to building a decentralized artificial intelligence computing blockchain (NBAI) that reduces the energy costs of traditional Proof of Work by converting GPU mining machines into AI computing services. The AI transactions recorded on NBAI will be irreversible. The distributed computing network also ensures high-concurrency, low-latency computing power. The conversion of GPU mining machines makes it possible to provide more cost-effective artificial intelligence services. Nebula AI aims to improve the status quo of the current centralized cloud computing ecosystem by utilizing the decentralization feature of blockchain technology



to rent and distribute the computing power of artificial intelligence machines globally. Since blockchain encryption is efficient in avoiding the problem of internal leakage, the maintenance of distributed AI calculation units is handed over to the owners of the various units, which reduces the maintenance workload significantly.

Conclusions
AI and blockchain are two disruptive technologies poised to drive the next revolution. While both have their shortcomings, each augments the strengths of the other, making their combination a powerful one. Understanding why blockchain is as disruptive as AI is essential to understanding why the two can be merged. Automated computing and optimization see the best use of AI and blockchain, and smart contracts see the most effective use of this merger. Since contract law itself is undergoing a revolution, smart contracts are on their way to becoming the next normal. Arbitration and dispute resolution make the greatest use of smart contracts, and AI can be used to boost these systems and to provide templates and gap-filler rules to be executed. Myriad further avenues see the application of AI, suggesting that this amalgamation might be the next normal.

References
1. Bailey, Huu Nguyen and Scott. 2018. Use of Artificial Intelligence for Smart Contracts and Blockchains. Thomson Reuters Corporation. s.l.: Fintech Law, 2018.
2. Brune, Philipp. 2020. Towards an Enterprise-Ready Implementation of Artificial Intelligence-Enabled, Blockchain-Based Smart Contracts. arXiv.org. [Online] March 21, 2020. https://arxiv.org/pdf/2003.09744.pdf.
3. C.R., Venkatesh. 2018. 4 Things That Made Blockchain The Most Disruptive Tech In Decades. INC42.com. [Online] March 11, 2018. https://inc42.com/resources/4-things-that-made-blockchain-the-most-disruptive-tech-in-decades/.
4. Ibid.
5. Hart, Oliver. 2017. Incomplete Contracts and Control. American Economic Review, Vol. 107, No. 7, July 2017.
6. Khaled Salah, M. Habib Ur Rehman, Nishara Nizamuddin, Ala Al-Fuqaha. 2019. Blockchain for AI: Review and Open Research Challenges. IEEE Access. [Online] January 1, 2019. https://ieeexplore.ieee.org/document/8598784.
7. Lin William Cong, Zhiguo He. Blockchain Disruption and Smart Contracts. s.l.: SSRN id 2985764.
8. Matrix AI Network. Matrix Technical White Paper. Matrix.io. [Online] https://www.matrix.io/uploads/file/MATRIXTechnicalWhitePaper.pdf.
9. Mulgan, Geoff. 2018. Forbes. www.forbes.com. [Online] August 7, 2018. https://www.forbes.com/sites/westernbonime/2018/08/07/future-fests-vision-of-collective-intelligence-ai-and-blockchain-that-makes-sense/.
10. Nebula AI Inc. 2018. Nebula AI (NBAI): Decentralized AI Blockchain Whitepaper. nebulaai.org. [Online] January 2018. https://nebulaai.org/_include/whitepaper/NBAI_whitepaper_EN.pdf.
11. Soares, Francisco Uríbarri. 2018. New Technologies and Arbitration. IJAIL, Vol. 7, Issue 1, 2018.
12. NewYorkConvention1958. newyorkconvention1958.org. [Online] http://newyorkconvention1958.org/index.php?lvl=cmspage&pageid=10&menu=730&opac_view=-1.
13. Spyros Makridakis, Antonis Polemitis, George Giaglis and Soula Louca. 2018. Blockchain: The Next Breakthrough in the Rapid Progress of AI. In Marco Antonio Aceves-Fernandez (ed.), Artificial Intelligence - Emerging Trends and Applications. London: IntechOpen, 2018.
14. Thang N. Dinh, My T. Thai. AI and Blockchain: A Disruptive Integration. s.l.: IEEE Computer, 2018.
15. A.S. Almasoud, M.M. Eljazzar and F. Hussain. 2018. Towards a Self-Learned Smart Contracts. 2018 IEEE 15th International Conference on e-Business Engineering (ICEBE), Xian, China: IEEE, 2018, pp. 269-273. 10.1109/ICEBE.2018.00051.
16. Valkenburgh, Peter Van. 2015. Bitcoin: Our Best Tool for Privacy and Identity on the Internet. Coincenter.org. [Online] March 3, 2015. https://www.coincenter.org/app/uploads/2020/05/reportbitcoinprivacyidentity.pdf.



12
AI and its Industrial Impacts in the Legal Sector: A Critical Review
Avani Tiwari1
1Research Contributor, Indian Society of Artificial Intelligence & Law, India.
research@isail.in

Abstract. Artificial Intelligence is the new buzzword slowly permeating the Indian legal system. It is expected to have a significant impact on the workforce in the legal sector, resulting in job losses in the short run and the creation of new kinds of jobs in the long run. It will reduce the costs and time involved in high-volume, low-value work, resulting in cheaper services for clients. The traditional law firm model is no longer aligned with customer expectations; hence, demand for law firm services is flat while demand for legal services is increasing. Lately, the legal industry has started to recognize that technology is to be preferred over labour arbitrage. Legal expertise, clubbed with process management and technology, is essential for the effective delivery of legal services. AI will enable firms to make the best of everything by incorporating the latest technology. It can be used in reviewing and standardizing documents, due diligence, transactional practices, cross-border contract drafting, judgement prediction, risk assessment, etc. It will help firms improve the quality, efficiency, accuracy, and cost of work by streamlining the workforce, saving money otherwise spent on salaries and spending it on AI tools instead. It will save time spent on mundane, routine work so that the lawyer’s role is limited to core functions beyond the scope of AI. Legal professionals believe that AI will replace their jobs, resulting in large-scale unemployment; however, it will only alter the way they deliver services and redefine their tasks and functions as well as the business models defining them. Notably, it will compress case disposition time, helping them improve client access and the quality of legal solutions provided. As rightly said by Michio Kaku, a noted theoretical physicist and futurist, “The job market of the future will consist of those jobs that robots cannot perform.”

Introduction
Technology has been transforming the legal sector, among others, over the past couple of decades. Though the legal sector is among the least susceptible to technology, its



impact is significant. From text editors (especially MS Word), PCs, and printers replacing typewriters, to e-filings and video-conference hearings amidst the COVID-19 situation, the examples are many. “Artificial Intelligence” or “AI” is one such recent advancement in technology that is commanding the keen attention of legal professionals and academia; it is a new buzzword among the legal fraternity as well. As stated above, technology has already been transforming the way legal tasks are done, so why AI is gaining so much more attention than other technological advancements is an important question to be addressed. Lawyers have been using e-mail, online legal databases, MS Word, etc. for over a decade now, so why does AI command a bigger hype? This can be attributed to three principal reasons: increasing pressure on lawyers to reduce fees and costs, and clients’ despondency towards the traditional “billable hours” law firm model; the growing access-to-justice problem; and the fact that AI can perform greater tasks than previous technologies (Professor Dana Remus, 2017). One of the foremost concerns of the legal fraternity is the impact of AI on employment. Such a concern is not unwarranted, considering the impact of the third IT revolution on employment. However, the impact of the fourth IT revolution on employment is gaining more traction because the disruption caused by AI is happening at ten times the speed and 300 times the scale of the industrial revolutions of the late 18th and early 19th centuries, and is therefore predicted to have about 3,000 times the impact, as per researchers of the McKinsey Global Institute (Dobbs, Manyika and Woetzel, 2015). Hence, people are concerned about job losses in the future. An important question, therefore, is which of the two labour market effects, displacement or productivity, will dominate in the AI era. To answer this question, it is important to know what AI can and cannot do. 
Another question that arises is whether AI will impact the legal sector in the same way as other sectors, and whether the impact on the Indian legal sector specifically will be similar to that in developed countries. Amidst these questions, it appears rational to delve deeper and look into both the short-term and long-term impacts, the impact on the different players of the legal system individually, the potential of the COVID-19 situation to shape the course of the transformation, and the current position of the IT and cybersecurity framework in India. All this might seem a little overwhelming to those who have little or no knowledge of AI and economics. Therefore, the purpose of this paper is to simplify the concepts and provide an insight into the impact of AI on employment in a simple yet deep manner, so that readers can get a clear understanding of the issue.

What is AI?
AI, or Artificial Intelligence (also known as augmented intelligence), is basically a discipline of computer science which deals with the simulation of human-like



intelligent behaviour in computers. Being an emerging field, it has no universal definition. It means different things to different people and professionals, and varies from industry to industry. However, to understand its impact, it is essential to have a basic understanding of what AI is and what it does. AI is an interdisciplinary science with multiple approaches to building smart machines. It works on the basis of predetermined rules, algorithms, and pattern-searching mathematical and statistical models. It is highly dependent on the large amounts of data that need to be fed into the system. Although its purpose was to impart human-like intelligence to computers, at the fundamental level AI is entirely different from the human brain: humans have what we call common sense and consciousness, they can imagine, they can transfer their learning to other domains, and they have abstract thinking capabilities, while AI lacks all of these.

Categories of Machine Learning
AI systems learn from the data fed into them; this learning is commonly of three types:
• Supervised Learning: This is the most common type; it is simpler to learn and easier to implement. Data or examples are fed into the algorithms with labels. The AI learns from the examples and the associated target responses to predict outcomes for similar but newer examples, backed by a feedback mechanism indicating whether its responses are correct or not. An analogy can be drawn with what we learn from our teachers: we derive certain rules from our learnings and apply them to real-life problems that we come across. Feeding more and more examples into the system can increase the accuracy of systems based on this type of learning.
• Unsupervised Learning: This is the opposite of supervised learning; data is fed in without labels. Such systems do not get responses as to whether their outcomes are right or wrong. The algorithms find patterns in the data, classify and organize it, and understand its properties to come to conclusions or predict outcomes. It is advantageous because most of the data in the world is unlabelled. It can be compared to how humans draw similarities between different situations, classify them, and apply their learnings accordingly. The most popular use of this type of learning is the recommendations that we get on Amazon or YouTube.
• Reinforcement Learning: Data or examples fed in this type are not labelled, but they are attached to positive or negative responses. The algorithms learn by avoiding the negative examples while moving towards the positive ones, similar to how humans learn from their mistakes. It can be used for prescription, and not only description like unsupervised learning, to make decisions having consequences.
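The supervised learning idea above can be illustrated with a deliberately tiny learner: a nearest-neighbour rule that predicts the label of a new example from the most similar labelled example. Everything here is a hypothetical sketch, including the toy “risky/safe clause” features; real systems use trained models over far richer data.

```python
def nearest_neighbour_predict(examples, query):
    """Minimal supervised learner: each example is (features, label);
    the prediction for a new query is the label of the closest example.
    As noted in the text, accuracy improves as more labelled examples
    are fed in."""
    def distance(a, b):
        # squared Euclidean distance between two feature tuples
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(examples, key=lambda ex: distance(ex[0], query))
    return label

# Hypothetical labelled training data: contract clauses encoded as two
# toy features, labelled "risky" or "safe" by a human reviewer.
training = [((0.9, 0.8), "risky"), ((0.1, 0.2), "safe"), ((0.8, 0.9), "risky")]
prediction = nearest_neighbour_predict(training, (0.85, 0.85))
```

The labels play the role of the “associated target responses” described above; an unsupervised method would instead have to discover the risky/safe grouping on its own.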



Kinds of AI
• Weak/Narrow AI: This is the only type of AI that has been successfully created to date. It is goal-oriented and can perform only the task it is trained for; it is not capable of applying that knowledge to other areas or tasks. It has not achieved human-level intelligence but can simulate human tasks within a narrow area. It cannot perform new tasks. It can be of two types: reactive AI, which does not store any information, and limited-memory AI, which is highly data-driven, storing data and making decisions based on it. It is the most commonly used AI; examples include chatbots, Siri, and IBM Watson.
• Strong/Generalized AI: This is the kind of AI that can think like humans and is capable of human-level intelligence. It can therefore learn new tasks and apply its learnings to various fields, with the ability to learn experientially. This kind of AI has not been fully achieved, because human-level intelligence requires consciousness, a term that itself has not been aptly understood or defined.
• Super/Conscious AI: This type of AI is, as of now, hypothetical. It has capabilities exceeding humans, i.e. more than human-level intelligence. It is fully conscious and self-aware. It could surpass human intelligence and hence become good at everything, be it art, science, or maths.

It is evident from the discussion above that what can potentially impact employment in the near future is narrow AI, as it comprises most of the AI today. Therefore, the term “AI” used hereafter in this paper will generally refer to weak or narrow AI, unless otherwise stated. Now that we know what AI is, it is equally important to know what AI is not, in order to better understand its impacts on employment.

What is Not AI?
AI is nothing to be afraid of, contrary to the perception most people hold based on the notions of AI depicted in sci-fi movies. To sustain this statement, it is essential to reiterate that AI today is narrow AI, which can perform only those tasks for which it has been trained, using various mathematical and statistical models and the examples on which such systems are trained. Even a two- to three-year-old child has more cognitive ability than today’s AI. The following are the limitations of today’s AI:
• AI lacks common sense and tacit knowledge: These are the things we learn from childhood, through our experiences. Such knowledge and sense cannot, as of now, be transferred to machines. So AI-powered systems might stop unexpectedly over things they consider obstacles which, in fact, might not be obstacles to humans. For example, a self-driving car might stop because of a carry bag or a small stone in its way, considering it an obstacle; we humans know that a car can drive past such things because we have common sense.



• Problem of Abstraction: Today’s AI is not capable of abstract thinking like humans; it cannot generate new ideas or techniques, and it struggles to transfer its learnings to areas other than those it has been trained for. For example, even if an AI is trained to qualify the Bar Exam, it will certainly not be able to apply that knowledge to the tasks that qualifiers are supposed to perform.
• Explainability: AI systems mostly cannot explain the reason behind the outcomes they derive. It therefore becomes difficult for the user of such a system to know why the AI came to a particular outcome or conclusion.
• Transparency: AI systems come to conclusions or provide outcomes based on the models they have been trained on and the data fed into the system. Such backend computer code is not accessible to the users of these systems.
• Data-driven: Most AI today is data-driven, as mentioned earlier. The data is fed into the system by engineers and might suffer from various biases, which might go unnoticed until a substantial number of users are affected.
• Finds shortcuts: Although this can be an advantage when we look at the speed of performing tasks, it has pitfalls as well. AI does not learn concepts as a human does; it finds shortcuts or patterns to do such tasks, in ways that are entirely different from humans.
Now that we know what AI is and is not capable of doing, let us dig deeper into its impacts on employment.

Impact of technology on employment
Technological advancements are anticipated to affect, and are already affecting, the jobs that humans do. This is mainly because many such jobs can be performed better and more effectively by machines, noticeably decreasing the costs and time needed to perform the tasks. Such bearings can easily be deduced from past IT revolutions, where machines have impacted various sectors ranging from textiles to automobiles. It has been observed over time that technology affects businesses positively and the workforce negatively, and within the workforce it affects different classes differently, leading to job polarization. Due to this effect, middle-skilled workers tend to lose their jobs, in contrast to high-skilled high-wage and low-skilled low-wage workers. Because of its skill-biased nature, it is not possible to generalize whether technology and humans are complementary or substitutes for each other. Technology is a substitute for middle-skilled workers doing routine tasks while being complementary to high-skilled cognitive-task workers. It does not affect low-skilled routine workers much, because the cost of substituting such a workforce by acquiring machines is higher than what these workers are paid. Hence,



there is an increase in demand for highly paid, high-skilled workers doing non-routine cognitive tasks and for low-paid, low-skilled workers doing non-routine manual work. Technology has one of two effects on employment: the displacement effect and the productivity effect. The displacement effect occurs when technology displaces the workforce from jobs they were previously performing, which can then be performed using technology. The productivity effect occurs when technology results in the creation of newer jobs, and the displaced workforce is reallocated to the new, emerging jobs and industries (John Maynard Keynes, 1973). Previous IT revolutions have indicated that such technological advancements tend to have a displacement effect in the short run; but with time, when the job market becomes more adaptable and the displaced workforce equips itself with the skills required for the newly generated jobs, the productivity effect has been seen to dominate. Labour freed from one sector has been absorbed by other sectors later on; in the past, more jobs were created than were destroyed.

To what extent is AI expected to follow the same trend as other technologies?
The impact of AI on employment can be construed as a natural extension of the impact of technology as a whole on employment. As AI is an emerging field of technology, there is not much data available to demonstrate its impact on employment, and it is rightly said that the past might not always be a true predictor of the future. Hence, in the context of AI it is important to note that AI is different from previous technologies in that it is growing at a faster pace: it is growing exponentially, unlike the others, which showed linear growth. As already mentioned above, AI is predicted to have about 3,000 times the impact of the previous technologies (Dobbs, Manyika and Woetzel, 2015). In light of this discussion, it is crucial to mention that just because a technological advancement makes work easier, faster, and better does not necessarily mean that firms or companies will adopt it. Adoption differs from industry to industry and economy to economy; it also depends on investment patterns and the size or scale of the firms, companies, or individual users. Hence, it appears logical to analyse the impact on a specific industry of a specific economy. The following part of the paper will therefore deal with AI’s impact on employment in the legal sector, with a major focus on the current Indian legal scenario.

AI’s impact on employment in the legal sector
The legal sector is resistant to change because its practitioners are, by nature, risk-averse. Most law firms still depend on the billable-hours business model. Of the various players in the sector, the judiciary has shown the least adaptability to AI.



The role of technology can have two futures (Richard Susskind, 2016):
1. Automation
2. Innovation

These two futures, according to Susskind, are expected to run in parallel for quite some time, but in the long run the latter is expected to dominate. For the purpose of further discussion, it will be useful to again divide the narrow AI that we are talking about into two categories (Harry Surden, 2019):
• Rules-based AI: This type of AI works on the basis of predefined rules that are fed into the system. When such AI encounters a problem, it deduces the possible solution(s) from the given set of rules. A large proportion of legislation consists of such rules with predefined deductions. For example, under taxation law, a certain type of organization with a certain amount of share capital or turnover has to pay a certain predetermined amount of tax; under the Indian Penal Code, punishments and penalties are predefined; contracts, partnership deeds, AOAs, and MOAs have predefined sets of rules. Hence, rules-based AI can be used in such cases. Smart contracts are one real-life application of this type of AI.
• Machine Learning: In contrast to the aforementioned form of AI, machine learning depends on large amounts of data that need to be fed into the system; such data may or may not be labelled. Machine learning techniques find patterns in the data and understand its structure and properties to come to logical conclusions based on statistical and mathematical models. E-discovery, judgement prediction, counterparty argument prediction, and AI-based due diligence are a few of its applications.
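A rules-based deduction of the kind described in the taxation example can be sketched in a few lines. The slab thresholds and rates below are hypothetical figures chosen for illustration, not the actual Indian schedule.

```python
# Predefined rules: (upper limit of slab, rate in percent). A rules-based
# system deduces the outcome purely from this fixed rule set.
SLABS = [(250_000, 0), (500_000, 5), (1_000_000, 20)]
TOP_RATE = 30  # percent, applied above the last slab

def tax_due(income):
    """Deduce the predetermined tax for an income by applying each
    predefined slab rule in turn (integer arithmetic throughout)."""
    tax, lower = 0, 0
    for threshold, rate in SLABS:
        tax += (min(income, threshold) - lower) * rate // 100
        if income <= threshold:
            return tax
        lower = threshold
    return tax + (income - lower) * TOP_RATE // 100

due = tax_due(700_000)
```

Because every outcome follows mechanically from the rule set, such logic is also what makes statutory provisions of this kind natural candidates for encoding in a smart contract.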

Apart from the above-stated examples, AI can also be used to summarize high-volume contracts, manage documents, and perform risk management. This is not an exhaustive list; AI can have as many applications as can be ideated and found feasible given the current state of technology. It is proposed that AI will not be able to automate or innovate the legal sector to the extent that it does, or is capable of doing, in other sectors. This is mainly because most legal problems do not have a definite or single right solution, making them incompatible with mathematical and statistical models. The legal profession requires skills that today’s AI lacks: legal professionals require more than just subject knowledge, such as soft skills, common sense, abstract thinking, and emotional intelligence, to name a few. AI is only good at representing human knowledge in well-defined spaces (Harry Surden, 2020).

What kind of jobs will AI replace in the near future?
As we have seen earlier, technology is skill-biased in nature: it tends to displace middle-skilled routine jobs. It already has, to a certain degree, and will certainly in the near future massively impact the jobs of clerks, researchers, and



paralegals. Automation in the drafting process and smart contracts can also lead to job losses. Some believe that technology will impact the bottom layer of the broad-base pyramid model of law firms, i.e. the junior associates who are mostly assigned routine work. However, this is not the whole picture: AI today is also capable of performing tasks like risk management and due diligence, which can be used to make more informed decisions. Considering the rate at which AI is developing, it will not be surprising to see it performing higher-level tasks than those it is capable of today. Therefore, it can be concluded that AI will replace all those jobs or tasks that narrow AI is able to accomplish.

AI’s impact on the different players of the legal sector
The legal sector can be divided into three categories: the administrators, the legal practitioners, and the users. To analyse the impact on employment at a deeper level, it is necessary to look into the potential impacts on the administrators and the practitioners. The administrators include judges, legislators, government officials, and others. As of now, AI cannot replace judges completely, because judges are required to give reasoned decisions, whereas AI, although able to come to conclusions by deriving patterns from the data fed into the system, is, as stated above, incapable of giving reasons or explaining why it arrived at a particular conclusion. One of the aims of the judiciary in the Indian legal system is to maintain transparency; however, the backend code on the basis of which AI provides outcomes is inaccessible to the general public, which can dilute transparency and lead to a loss of faith in the judiciary. AI can also suffer from biases that go unnoticed until a substantial number of people are affected. Bias in the judiciary is against the very principles of natural justice that are the backbone of all laws. AI cannot replace lawyers in court proceedings, as it cannot understand arguments and legal principles the way humans can. Presenting a case in court requires skills such as soft skills, common sense, abstract thinking, and other cognitive skills that are absent in AI. AI can only replace legal professionals in routine and mundane tasks. On the basis of this discussion, it would not be wrong to conclude that AI will not replace those jobs that are beyond the ambit of machines to perform.

Skills required for the future and potential job creation
When technology is able to perform tasks like contract drafting and research, such skills will not carry much weight in the future. It is advisable to work instead on the skills that technology cannot replicate, for example soft skills and abstract thinking. In the future, professionals with knowledge of technology will tend to have an advantage over others. Lawyers who are also coders or data scientists, i.e. those with interdisciplinary know-how, seem best placed. Future jobs could have titles like regulatory data scientist or smart contract coding lawyer (Richard Susskind, 2016).

Suggestions
1. In the legal education sector, "AI and the Law" should be introduced as a subject to prepare the future legal workforce for the implications of technology on jobs. Similar to the integrated courses offered today, such as BA LLB, BBA LLB, B.Com LLB and BSc LLB, new courses like B.Tech (Computer Science) LLB or BSc (Computer Science) LLB can be introduced. For middle-aged workers who cannot go back to college to gain these skills, and for others not enrolled in these special courses, law schools can provide certificate courses and diplomas in these areas.
2. India does not have robust IT and cybersecurity laws. The IT Act, 2000 is vague and does not address the newer challenges that AI will create, for example questions of liability when AI causes harm or infringes an intellectual property right. Legislation on these issues should be drafted and passed in collaboration with experts from both the legal and technology fields.

Conclusions
AI will definitely impact employment; this can be inferred from past IT revolutions. The main questions that need to be addressed are when and how. It is proposed that technology will only automate routine jobs, and innovate in the few areas where it can find patterns, i.e. those jobs which can be standardised and which have a definite procedure for reaching outcomes. Over the last five years the number of legal tech start-ups has been increasing. It is also worth mentioning that Cyril Amarchand Mangaldas (one of the tier-one law firms in India) was the first to set up a Legal Tech Incubator, and is among the few firms already using AI to enhance the services they provide. On the basis of these facts it is proposed that the Indian legal sector (excluding the judiciary, for the aforementioned reasons) is taking AI positively, even though, for obvious reasons, it lags behind the developed nations in adaptability.
It is proposed that AI can cause job losses in the short run, but that this impact can be reduced by bringing the necessary changes to the Indian legal education system. In the long run, legal tech will create new avenues for innovation in the legal sector. It will enable legal professionals to explore different dimensions, opening doors for those interested in entrepreneurship, which in turn will result in the creation of more jobs. Hence, if the legal fraternity is aware of and well equipped to face the challenges that AI will impose on employment, it can shorten the displacement effect of the short run and achieve the productivity effect earlier than would otherwise be possible. But this can be done only if the legal sector acknowledges the impact that AI is going to have and adapts accordingly. In this way AI will complement the future jobs of legal professionals by enhancing their performance.

References
1. Dobbs, Manyika and Woetzel. 2015. s.l.: McKinsey Global Institute, 2015.
2. Harry Surden. 2019. Artificial Intelligence and Law: Overview and History. [YouTube] s.l.: Stanford Law School, 2019.
3. —. 2020. Artificial Intelligence, the Future of Employment and the Law. [YouTube] s.l.: Silicon Flatirons, Colorado Law, 2020.
4. John Maynard Keynes. 1973. Technological Unemployment Theory. 1973.
5. Professor Dana Remus. 2017. Artificial Intelligence, Technology and the Future of Law. [YouTube] s.l.: Faculty of Law, University of Toronto, 2017.
6. Richard Susskind. 2016. Artificial Intelligence and the Law Conference at Vanderbilt Law School. [YouTube] s.l.: Vanderbilt Law School, 2016.



{ Policy Analyses }




13 Indo-US Relations and the National Security Commission on Artificial Intelligence (NSCAI)'s Interim Report and the Third Quarter Recommendations

Dev Tejnani1

1Programme Coordinator, Indian Society of Artificial Intelligence & Law, India.
research@isail.in

Synopsis. This is a policy brief for the Civilized AI Project.

Introduction
The National Security Commission on Artificial Intelligence (NSCAI) submitted its third interim report to the United States Congress and the President's office on 13 October 2020. The report set out the NSCAI's recommendations on the steps the United States must take to become a world leader in Artificial Intelligence. The Commissioners presiding over the drafting of the report deliberated extensively on the many ways in which America could establish itself as a world leader in AI, and approved 66 of the numerous recommendations put forth. These 66 recommendations were non-partisan and addressed both the executive and the legislative branches of the US Government, and the drafting committee was of the opinion that Congress and the President's office should take immediate steps to implement them.

Recommendations of the Committee
The NSCAI recommended that the Executive and Legislative branches implement recommendations primarily focused on advancing three priority decisions, as follows:

Organizing an AI and Emerging Technologies Competition: The Committee presiding over the drafting of the Interim Report was of the opinion that Artificial Intelligence (AI) and the many advancements in technology are reaching their zenith and can be regarded as a cornerstone of a country's national development. Advancements in AI and Machine Learning (ML) open an entirely new domain in a country's multilateral objectives, and this sector requires constant attention from the government and its people. The Commission therefore opined that Congress should establish a Technology Competitiveness Council, chaired by the Vice President of the United States and supported by an Assistant to the President with the requisite knowledge to develop and implement a national technology leadership strategy, helping the United States develop and integrate AI and ML across sectors such as the economy, security and policy making. The Committee further opined that for a technology competition incorporating AI and ML to succeed, it is imperative to involve the Department of Defense (DoD) and the Intelligence Community (IC): both have the capacity to lead such a competition with AI and ML as the focal point, and to integrate the perspectives of technologists and operators at every step of implementation.
The Third Quarter Recommendations report elaborated on how important it is for the United States to confer additional powers upon the Chief Technology Officer (CTO) of the DoD. It also called for the appointment of a Chief Technology Officer for the Intelligence Community, vested with the authority to drive and deliver AI-enabled capabilities to warfighters at speed. The Commission further reiterated that the DoD should undertake projects to enhance growth and development by working in close consonance with industry partners on AI, and should allocate resources to AI research and development to ensure the smooth and quick transition of technology. Beyond these specific recommendations, the Commission held that AI sits at the centre of emerging technologies, and that the United States must adopt a holistic, comprehensive strategy across sectors in order to sustain itself in the highly competitive AI market. The Commission also analysed AI's great impact on other closely associated technologies and the ways in which the United States could lead other emerging technological advancements; it specifically enumerated how the United States should take charge of advancements in biotechnology, considering the potential advantages of developing its capabilities in genetics and biotech, given AI's potential to be fundamentally transformative. It further contended that the United States should actively develop AI tools to support specific quantum computing applications, which could come in handy for promoting national security, improving America's supply-chain resilience (thereby promoting development in microelectronics), and carving out a niche in other sectors or industries that are critical and imperative for growth.

Democratize AI Innovation and Expand the AI Talent Pipeline: The Federal Government has the requisite powers and the responsibility to ensure that American innovation in AI is unleashed and that America's resources are fully utilised, creating an AI research environment that enables the United States to lay a solid framework for national security and for the economic advantages it may derive.
Furthermore, for the United States to undertake innovative AI projects that let it carve out a niche in the global market, the government must make big bets on talent, i.e. give AI researchers the resources and the space to carry out experiments and pursue their ideas; such opportunities could be given to individual AI experts with the capacity to carry out large research projects. These experts should be empowered to support multidisciplinary teams, enabling them to deal with the various challenges that arise in developing AI solutions. The Government should democratize access to free-flowing data that AI experts can easily use to support applications and innovations in a variety of fields, through the creation of domain-specific AI testbeds that researchers and industry experts can use to work with complex, exemplar data sets.49

The report submitted by the Commission further elaborated on how important it is for the United States to broaden its horizons in AI and STEM (Science, Technology, Engineering and Mathematics) education in order to improve the country's national security and, at the same time, its economy. To reach these goals, the United States needs to focus on establishing new career opportunities for individuals in the military and in the civilian sector alike. It could improve the STEM education regime in the United States, introduce undergraduate and graduate programmes for individuals aiming to venture into AI and ML, and develop courses to acquaint practitioners already working in these fields with the advancements in technology. Moreover, AI is bound to play a significant role in the national security department and the agencies entrusted to it. To make these agencies AI-compliant, the Government needs to allocate resources to develop a workforce well versed in AI and the algorithms within its scope and ambit; in other words, the national security agencies need to train their employees and make their workforce AI-proficient. The Commission accordingly recommended robust, comprehensive actions to ensure that the workers appointed are technically trained, and that the leaders of such teams are also well versed in the advancements in technology and AI, in order to improve the quality of the workforce.

49 National Security Commission on Artificial Intelligence Submits 2020 Interim Report and Third Quarter Recommendations to Congress and the President, Press Release dated 13.10.2020, available at: https://www.nscai.gov/press/press-releases/press-release-20201013.
Marshalling International AI Cooperation: For the United States to become a global leader in AI, it must work alongside its allies and build on their strengths, while preserving free and open societies. The Commission elaborated on how important it is for the United States to devise a comprehensive and robust framework for marshalling international, multilateral and bilateral cooperation.50 The report laid down ways in which the United States could develop its AI regime to win the global technology competition. It could expedite developments in AI with the help and assistance of NATO (the North Atlantic Treaty Organization) and its member states. It could at the same time pursue developments in defence by entering into defence cooperation agreements with its allies in the Indo-Pacific region; this is where the United States' relationship with India could be fostered and relied upon. Apart from this, the United States needs to undertake multilateral efforts to promote the use of AI and its benefits for innovation in defence and for the strengthening of democracy. It could build on these multilateral efforts by leading a coalition of democracies and explaining the intricacies of the extensive use of AI. It could promote further innovation and extend support to private industries in other countries that have already ventured into the AI domain and are endeavouring to develop; this would deepen its diplomatic relations with other countries as well as with private players who may have the requisite resources to build sustainable AI. With advancements in technology, AI and ML, the United States can foster its growth in the AI sector while also supporting the international partners who may aid it in developing its AI regime. If the United States takes such steps, it can also ensure that emerging technology standards are based on technical considerations and that parties adhere to best practices free of political manipulation. The United States needs to understand that bilateral AI partnerships with free and open societies will help it achieve its goals, and it should pursue them by forming a technological alliance with India, a growing democratic nation.

50 National Security Commission on Artificial Intelligence Submits 2020 Interim Report and Third Quarter Recommendations to Congress and the President, Press Release dated 13.10.2020, available at: https://www.nscai.gov/press/press-releases/press-release-20201013.
India is still in its nascent stages of excelling in the field of AI; however, if America enters into a formal tech alliance with India, it would surely open up innumerable opportunities for both nations, enabling them to develop and grow simultaneously and to address the multitude of challenges and opportunities presented by advancements in AI. The United States should also draw up a Blueprint for AI Cooperation to guide its many efforts with partners and allies; for instance, it could create a Blueprint for AI Cooperation with India, thereby strengthening their strategic relationship.

Fostering Friendly Relations with Allies in order to have an operational framework for Global AI Cooperation
The report recommended that the United States bolster its bilateral and multilateral relationships with its allies in order to develop an operational framework for itself across the globe. China aims to be regarded as the global leader in AI by the year 2030, and if the United States wishes to remain the most powerful nation, it must work extensively to foster relations with countries that have strong potential in the AI sector. Since 2015, China has significantly changed its strategy and reoriented its domestic processes in order to influence international standards,51 and this has developed into a robust campaign to assume a very prominent role within international AI standards bodies, ensuring that those organizations are in a position to fulfil its agenda.52 Beijing may soon release its "China Standards 2035", which is likely to set out in detail how the Chinese Government, in close consonance with private Chinese companies, may set standards for a number of crucial emerging technologies such as AI, 5G and the Internet of Things (IoT).53 "China Standards 2035" is also likely to promote the standards China is adopting to become an internationally recognized player in AI, through participation in standards bodies and through the adoption of Chinese standards encouraged by Belt and Road investments.54 These state-led approaches have enabled China to set standards and employ innumerable methods to mould international standards. China has invested thoroughly in research and development activities focused on making its AI experts more skilful, specifically aiming to develop its workforce so that China sets the technical standard in AI and ML.55 China has also taken initiatives to establish itself in the global AI market by participating in developments crucial to AI, such as the standards development organizations (SDOs) for AI and associated technologies. It has endeavoured to participate in and become part of the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), the Institute of Electrical and Electronics Engineers (IEEE), and the United Nations International Telecommunication Union's Telecommunication Standardization Sector (ITU-T).56 Apart from this, the CCP motivates participant countries to act as volunteers in order to bolster their positions, so that they may eventually emerge as leaders.57 The results clearly show that China has emerged a winner: between 2011 and 2020, a span of nine years, China increased the secretariat positions it holds at the ISO by 73% and those at the IEC by around 67%, while German- and Japanese-controlled secretariat positions did not change over that period and the secretariat positions held by the United States have fallen.58 Technical standards go a long way towards enabling a country to multiply its innovations and expand its operations in the growing technological marketplace. Standards must also keep pace with advancements in technology; they help a country increase its reliability and can underpin a strong foundational AI framework, one comprehensive enough to cover quality assurance, consumer safety and the interoperability of products and services from innumerable organizations, facilitating consistency and regulation at the same time.59

51 "It is noteworthy that, in addition to vast internal consultations piloted by the State Council and the heads of relevant ministries, Beijing sought the counsel of high-level representatives from standards-coordinating bodies in the United States (ANSI), Germany (DIN), the UK (BSI), and France (ANFOR) in an effort to incorporate the best practices." These consultations informed the 2017 Standardization Law. John Seaman, China and the New Geopolitics of Technical Standardization, IFRI, 16 Jan 2020, https://www.ifri.org/sites/default/files/atoms/files/seamna_china_standardization_2020.pdf.
52 John Seaman, China and the New Geopolitics of Technical Standardization, IFRI, 16 Jan 2020, https://www.ifri.org/sites/default/files/atoms/files/seamna_china_standardization_2020.pdf.
53 Second Quarter Recommendations, NSCAI, 8 July 2020, https://www.nscai.gov/reports.
54 At present, China "has signed 85 standardization cooperation agreements with 49 countries and regions." The BRI Progress, Contributions and Prospects, China Daily, https://chinadailyglobal.com/a/201904/23/WS5cbe5761a3104842260b7a41.html. If some countries opt for international standards and others utilise Chinese standards, there is a long-term fear of a bifurcation of technological spheres. Jack Kamensky, China's Participation in International Standard Setting: Benefits and Concerns for U.S. Industry, China Business Review, https://www.chinabusinessreview.com/chinas-participation-in-international-standards-settingbenefits-and-concerns-for-us-industry; John Seaman, China and the New Geopolitics of Technical Standardization, IFRI (Jan 2020), https://www.ifri.org/sites/default/files/atoms/files/seamna_china_standardization_2020.pdf.
55 The Standardization Administration of China (SAC) seeks to have 60 "standards of innovation bases" across China to improve China's standardisation, https://www.ifri.org/sites/default/files/atoms/files/seamna_china_standardization_2020.pdf.
56 International technical AI standards are shaped primarily through four SDOs: the ISO and IEC, two private regulatory networks; the IEEE, a technical professional organization, through its Standards Association; and the ITU, a specialized UN agency, through its Telecommunication Standardization Sector (ITU-T). ISO and IEC created a joint committee focused primarily on digital technologies in 1987 (JTC 1) and in 2017 jointly created Subcommittee 42 - Artificial Intelligence (JTC 1/SC 42), dedicated exclusively to AI standards. Peter Cihon, Technical Report: Standards for AI Governance: International Standards to Enable Global Coordination in AI Research and Development, Future of Humanity Institute, University of Oxford, https://www.fhi.ox.uk/wp-content/uploads/Standards-FHI-Technical-Report.pdf.
57 China in International Standards Setting: USCBC Recommendations for Constructive Participation, The U.S.-China Business Council, https://www.uschina.org/sites/default/files/china_in_international_standards_setting.pdf.
58 China in International Standards Setting: USCBC Recommendations for Constructive Participation, The U.S.-China Business Council, https://www.uschina.org/sites/default/files/china_in_international_standards_setting.pdf.
59 Jeffrey Ding, Balancing Standards: U.S. and Chinese Strategies for Developing Technical Standards in AI, The National Bureau of Asian Research (1 July 2020), https://www.nbr.org/publication/balancing-standards-u-s-and-chinese-strategies-for-developing-technical-standard-in-ai/; Standards & Measurements, NIST, https://www.nist.gov/services-resources/standards-and-measurements; Remarks by Peter Brown, European Parliament's Liaison Officer, delivered at "Standards-Setting from a European Perspective", Centre for Strategic and International Studies, https://www.csis.org/events/online-event-standard-setting-european-perspective.



Maintaining standards carry innumerable economic consequences. Domestic Companies in a particular country that are in consonance with the international standards of development usually tend to have an upper edge over other organizations and this could be deemed to be regarded as the first mover advantage, which enables such corporations to have dominate position over its other competitors and at the same time it enables such organizations to first mover competitiveness.60 It is pertinent to understand that in the information and communication technology sector, companies have an upper edge if they get their designs and ideas patented as it enables them to subsequently claim their rights over other companies which may venture out later in similar fields and this enables such a company to have already met with the international or the standards which are acceptable. Furthermore, the standard-essential patents, “SEPs” can deem to act as severe barriers when it comes to adopting the market standards or facing the market competition as other organizations may not have the requisite resources in order to pay off the royalties to other organizations which own the patents to the technologies that they came up with first.61 The trades that a company makes in the global market is often a result of the standards that a country adheres to. In order for a country to increase its international standards when it comes to developing technology, it is imperative to understand that the utilisation of international standards lowers the costs that the company may incur when it comes to exporting industries. 
Organizations have for years hinted that China is a country that makes an effective use of its domestic standards or domestic technical standards which acts as a protectionist tool, thereby enabling China to take advantage over trade in the international market and this acts as a hindrance to other international companies, since China doesn’t rely on them in order to meet its international standards. Basically, China relies on its domestic standards in order to develop its international standards and it is imperative to note that China has so far been excelling in the methods a European Perspective Event from Centre for Strategic and International Studies, https://www.csis.org/events/online-event-standard-setting-european-perspective. 60 John Seaman, China and the New Geopolitics of Technical Standardization, IFRI at 16th Jan, 2020, https://www.ifri.org/sites/default/files/atoms/files/seamna_china_standardization_2020.pdf.” 61 “Given the potential negative impact of SEPs, the bodies which are standard have been able to develop a set of different policies which enable it to prevent the other industry participants from taking over or capturing a market. For instance, the ISO requires its participants to disclose-as early as possible, during the standards development process-in order to ensure whether they have a patent for the technology that they claim or whether they have filed for a patent application or not or whether their patent application is pending for a new technology. 
After a full disclosure is made, with regards to the participants owing the patent rights to the ideas that they are claiming, it is imperative for the participants to state whether they have or whether they are willing to negotiate with regards to the licenses that they own to the technology that they are claiming and whether these licenses required by the patent to other companies is free of charge and/or on reasonable and non-discriminatory terms.”- Robynne Sanders, et al., The Ongoing Problem with Standards and Patents, DLA Piper, https://www.dlapiper.com/en/global/insights/publications/2017/12/ipt-news-asiapacific-december-2017/the-ongoing-problem-with-standards-and-patents/.


182

Artificial Intelligence and Policy in India, Volume 2

that it is using. A 2019 survey made by the U.S.-China Business Council found 30 percent of member companies reporting standards-related protectionism in China.62 The United States Government needs to understand that it is imperative to develop its technical standards, especially when it comes to maintaining and promoting its international standards, which may play an integral role in harnessing the growth of the US National Security, also enabling it to protect the integrity, security and the values of the US National Security, which would surely ensure that the United States emerges as an economic winner over its other contenders in the international market. The National Artificial Intelligence Research and Development Strategic Plan delves into developing the AI standards and aims to set goals which could be deemed to be regarded as a research priority for the U.S. Department and agencies.63 In the month of February 2019, the President of the United States of America passed an Executive Order, which considered the development of the requisite technical standards in the field of AI that the United States should take and should bolster its growth towards it. This was one of the aspects which the President’s executive order covered out of the five principles which it adhered to whilst the development of an American AI initiative. The Executive Order further instructed the U.S. Departments to work upon developing the necessary international standards which would come handy in promoting and protecting innovation in the field of Artificial Intelligence. 
The Executive Order also involved the Secretary of Commerce, instructing the Secretary to undertake the development of international standards in AI, and charged the National Institute of Standards and Technology (NIST) with preparing a plan for Federal engagement in the development of technical standards and related tools in support of reliable, robust and trustworthy systems that use AI technologies.64 The report submitted by NIST highlighted a number of AI standards focus areas, both technical and non-technical.65 The report's further recommendations also included within its
62 Björn Fägersten & Tim Rühlig, China's Standard Power and Its Geopolitical Implications for Europe, Swedish Institute of International Affairs (Feb. 2019), https://www.ui.se/globalassets/ui.seeng/publications/ui-publications/2019/ui-brief-no-2-2019.pdf.
63 "AI Standards were classified as a research priority in both the 2016 Strategic Plan and the 2019 Update to the Strategic Plan" - The National Artificial Intelligence Research and Development Strategic Plan: 2019 Update, National Science and Technology Council (June 2019), https://www.nitrd.gov/pubs/National-AI-RD-Strategy-2019.pdf; The National Artificial Intelligence Research and Development Strategic Plan, National Science and Technology Council (Oct. 2016), https://www.nitrd.gov/PUBS/national_ai_rd_strategic_plan.pdf.
64 Donald J. Trump, Executive Order on Maintaining American Leadership in Artificial Intelligence, The White House (Feb. 11, 2019), https://www.whitehouse.gov/presidential-actions/executive-ordermaintaining-american-leadership-artificial-intelligence/.
65 U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools, NIST



scope recommendations pertaining to appointing a Standards Coordinator, which would enable the U.S. to increase its alignment and cooperation with Federal agencies, increase the number of staff who participate in standards development, and develop procedures for training, career development and promotion for such individuals. Apart from these recommendations, the report also elucidated how research activities should be promoted to improve U.S. standards, and urged the strengthening of public-private partnerships and the development of international standards, in order to further U.S. AI standards, build an AI ecosystem, and ensure that U.S. economic and national security needs are met at the same time. China has taken innumerable steps to shape international standards as it develops its AI ecosystem, and it aims to become the world's leading AI power by 2030. Notably, the strategies China has adhered to are also being emulated by newcomer countries striving to shape international standards; at this juncture it is therefore imperative for the United States Government to take steps to develop its international standards in AI while also finding ways to protect its interests in the development and growth of AI, data and associated technologies. United States Government-led dialogue with U.S.
industry, as well as with its democratic allies, will open innumerable doors for the United States: it can help resolve the information asymmetries and the confusion over interests that are at present hindering advancements in the field of AI. Fostering partnerships with its allies would enable the United States to maintain its AI technical standards, which would in turn enable it to grow and to protect its consumers and its people.66

(Aug. 9, 2019), https://www.nist.gov/system/files/documents/2019/08/10/ai_standards_fedengagement_plan_9aug2019.pdf.
66 "In a joint statement, which followed the 2018 meeting between then-European Commission President Jean-Claude Juncker and the United States, the United States and the EU specifically dealt with how important it was to align their objectives on technical standards, particularly when it came to facilitating trade, cutting costs and decreasing bureaucratic obstacles. The joint statement significantly focused upon a close dialogue and improved coordination, including 'cooperation and coordination with the U.S. in maintaining the framework of the international standards setting bodies.'" - Progress Report on the Implementation of the EU-U.S. Joint Statement of 25th July 2018, European Commission at 6 (2019), https://trade.cc.europa.eu/doclib/docs/2019/july/tradoc_158272.pdf.



The United States Should Build Bilateral Relations with its Allies to Enable Advancements in AI

The National Security Commission on Artificial Intelligence (NSCAI) Interim Report and Third Quarter Recommendations emphasised how imperative it is for the United States to establish and strengthen peaceful relations if it wishes to develop its AI ecosystem. The United States needs to improve its AI cooperation, and it can do so by working in close concert with its allies; fostering friendly relations and creating partnerships would enable it to overcome the obstacles and challenges imposed by its powerful competitors.67 The challenges posed by its key competitors, Russia and China, are global in nature. However, if the United States focuses on developing its relations in the Indo-Pacific and Transatlantic regions, it stands a good chance of prevailing over its competitors in the international AI regime. The NSCAI Third Quarter Recommendations also elucidated that it is imperative for the United States to work with its allies in the Indo-Pacific region; this would enable it to face new threats while working in close concert with those allies, which would not only enable the countries to grow but would also make them more powerful, as they would work together sharing their values and alternatives. AI would enable these countries to identify the innumerable challenges each faces at this juncture and to respond to those challenges much faster and more effectively.
As mentioned above, the United States needs to adopt a comprehensive strategy covering all possible aspects of marshalling its AI cooperation globally. It should work towards advancing free, open and innovative societies and share resources pertaining to defence and security, as this would be a strong agent in fostering relations with its key allies. It should also improve its data-sharing capabilities, promote the developments it makes by sharing technical expertise, and create and develop AI applications that come in handy and improve the condition of mankind.

67 "The importance of strengthening partnerships in the Indo-Pacific region continues to grow as Russia and China increase their own collaborative work around advanced technology." - Samuel Bendett & Elsa Kania, The Resilience of Sino-Russian High-Tech Cooperation, War on the Rocks (Aug. 12, 2020), https://warontherocks.com/2020/08/the-resilience-of-sino-russian-high-tech-cooperation.



The United States needs to focus on developing its relations in the Indo-Pacific region and should specifically focus on creating a strategic tech alliance with India. The NSCAI Third Quarter Recommendations emphasised how imperative it is for the United States to give top priority to developing its bilateral relations with India. The report noted the United States' well-established relationship with India, and India's geopolitical significance as the world's largest democracy and second most populous country is a factor that cannot be overlooked. The relations between the United States and India can be said to rest on "a shared commitment to freedom, democratic principles, equal treatment of all the citizens, human rights, and the rule of law", and the two countries share an interest in promoting global security and stability and in ensuring that their respective economies prosper through advancements in technology and AI, which will enable both nations to prosper in trade, investment, and connectivity.68 The United States needs to bolster its relations with India, since India is regarded as a Major Defence Partner of the U.S.; the two nations have previously worked in close concert through the U.S.-India 2+2 Ministerial Dialogue, which began in 2018 and brings together the U.S.
Secretaries of State and Defense and the Indian Ministers of External Affairs and Defence, and through the United States-India Comprehensive Global Strategic Partnership, launched in February 2020.69 The relations shared by the United States and India are quite cordial, and the two countries already have a strong base when it comes to science and technology. Relations in this field were established via the nations' Indo-U.S. Science and Technology Forum (IUSSTF), set up in 2000. Science and technology cooperation grew further with a cooperation agreement in 2005, the annual U.S.-India Cyber Dialogue, and the U.S.-India Information and
68 U.S. Relations with India: Bilateral Relations Fact Sheet, U.S. Department of State (July 28, 2020), https://www.state.gov/u-s-relations-with-india.
69 Media Note, U.S. Department of State, Intersessional Meeting of the U.S.-India 2+2 Ministerial Dialogue (Sept. 11, 2020), https://www.state.gov/intersessional-meeting-of-the-u-s-india-22ministerial-dialogue/; Joint Statement: Vision and Principles for the United States-India Comprehensive Global Strategic Partnership, The White House (Feb. 25, 2020), https://www.whitehouse.gov/briefingsstatements/joint-statement-vision-principles-united-states-india-comprehensive-global-strategicpartnership/.



Communication Technology Working Group.70 It is imperative to highlight that in the last few years India has worked tremendously towards the betterment of its AI ecosystem, including crucial investments made by innumerable organizations originally from the United States of America,71 organizations which face a potential immediate threat from China. India is also an active participant in multilateral efforts revolving around the development of Artificial Intelligence, such as GPAI, and is part of the proposed D10 coalition. Beyond this, India houses a great many technological experts; Indian citizens account for over 70% of the H-1B visas issued by the United States in a year.72 This shows that the United States of America and India have immense potential to develop their relations and grow together in the field of AI, since their relationship is already strong. The NSCAI Third Quarter Recommendations recommended that Congress develop a comprehensive policy towards India, with the Department of State and the Departments of Defense and Commerce as the premier patrons leading this agreement with India. The report recommends that India and the U.S. enter into a strategic tech alliance called the U.S.-India Strategic Tech Alliance (UISTA), whose task would be to make India a main point of attention when it comes to dealing with
70 Indo-U.S. Science and Technology Forum, https://www.iusstf.org/about-iusstf; United States and India Sign Science and Technology Cooperation Agreement, U.S. Department of State (Oct. 17, 2005), https://2001-2009.state.gov/r/pa/prs/ps/2005/55198.htm; Joint Statement: 2016 United States-India Cyber Dialogue, The White House (Sept. 29, 2016), https://obamawhitehouse.archives.gov/thepress-office/2016/09/29/joint-statement-2016-united-states-india-cyber-dialogue; Joint Statement from the U.S.-India Information Communications Technology Working Group, U.S. Mission India (Sept. 29, 2016), https://in.usembassy.gov/joint-statement-u-s-india-informationa-communicationstechnology-working-group.
71 Andrew Trister, Code vs. Covid-19, Bill & Melinda Gates Foundation (2020), https://www.gatesfoundation.org/TheOptimist/Articles/coronavirus-andrew-trister-data-science. Google recently announced that it will launch an AI research lab in Bengaluru, led by Manish Gupta, a fellow of the Society for Experimental Mechanics, and Milind Tambe, Director of the Harvard Centre for Computation & Society: Google Launches Artificial Intelligence Research Lab in Bengaluru, Times of India (Sept. 19, 2019), https://timesofindia.indiatimes.com/business/india-business/google-launches-artificialintelligence-research-lab-in-bengaluru/articleshow/71203154.cms.
72 Characteristics of H-1B Specialty Occupation Workers - Fiscal Year 2019 Annual Report to Congress, U.S. Citizenship and Immigration Services (Mar. 5, 2020), https://www.uscis.gov/sites/default/files/document/reports/Characteristics_of_Speciality_Occupation_Workers_H-1B_Fiscal_Year_2019.pdf.



the U.S. foreign policy issues; this tech alliance would further aim to foster the geopolitical role of India and improve technology cooperation between the two nations, enabling them to work with each other coherently, sharing their resources, expertise and investments. This can be regarded as an extremely beneficial affair for India as well: being the largest democracy with the second largest population, India houses innumerable minds with the requisite knowledge, yet it lacks the resources that would enable it to progress in the field of Artificial Intelligence and Machine Learning. If the two nations join hands, they should aim to work in close concert by holding high-priority meetings on a periodic basis, which would help both nations understand their individual positions; together they could then use their respective resources to fill the gaps that exist in their respective AI ecosystems. The United States can develop a robust, comprehensive strategy on issues of technological advancement in the Indo-Pacific region, while the UISTA can develop and put into practice concrete, operational avenues that work effectively for both nations.
The report elucidated certain projects that could be undertaken for the UISTA to succeed: "advanced joint research and development projects around AI; talent exchanges and talent flow; undertaking and analysing a range of issues pertaining to innovation, incorporating upcoming technological advancements by making investments and bringing export control measures into place; carrying out the analysis with regards to each and every investment that has been made and dealing with intellectual property rights."73 At the same time, the report also enumerated how the alliance could endeavour to establish AI applications and use AI to counter misinformation.

Conclusion

The report presented by the National Security Commission on Artificial Intelligence delved into all possible ways that could enable the United States of America to become a global leader in Artificial Intelligence. What matters most, however, is that the United States should focus primarily on making itself a powerful AI nation, and it can do this by joining hands with and improving its bilateral relations with its key allies; this would go a long way in developing the United States and enabling it to achieve its goals.

73 NSCAI Interim Report and Third Quarter Recommendations Report, https://www.nscai.gov.



14 Policy Analysis on AI and the Weaponization of Genetic Data

Dev Tejnani1

1Programme Coordinator, Indian Society of Artificial Intelligence & Law, India.

research@isail.in

Synopsis. This is a policy brief for the Civilized AI Project.

Introduction

In 2016, 2017, 2018 and 2019, genome editing was listed as a worldwide threat in the annual Worldwide Threat Assessment carried out by the United States Intelligence Community. Genome editing is one of the most promising developments made in the field of biotechnology in recent years; however, it is also a huge threat, specifically cited by U.S. intelligence as a threat to U.S. national security. It is therefore imperative to understand what genome editing means and why the U.S. regards it as a national security threat. "Genome editing" refers to the tools and techniques that biotechnologists use to change or edit genomes, i.e. the DNA or RNA of various plants, animals, and bacteria. Various technologies have evolved over the years to aid biotechnologists in editing genomes; however, the development of CRISPR in 2013 changed the way biotechnologists could edit genomes, bringing about significant development in the field by improving the speed, cost, accuracy and efficiency of genome editing.74 CRISPR, or Clustered Regularly Interspaced Short Palindromic Repeats, derives from an age-old natural mechanism: bacteria use it to recognise and cut viral sequences out of their DNA.
74 "How does Genome Editing Work?", https://www.genome.gov/about-genomics/policyissues/Genome-Editint/How-genome-editing-works.



Various researchers discovered that they could replicate this process by creating a synthetic RNA strand matched to a target DNA sequence in a living organism's genome. The researchers attached this synthetic RNA strand, known as a "guide RNA", to an enzyme that can cut DNA. Once the guide RNA locates the targeted DNA sequence, the enzyme cuts the genome at that very location; DNA can then be removed and new DNA added to the living being. CRISPR is a powerful tool with the ability to successfully edit genomes, and it supports research on a broad range of plants and animals, as well as on humans.75 A large percentage of genome editing research focuses on the elimination of genetic diseases. However, with the advancements in technology and the development of tools like CRISPR, the alteration of a pathogen's DNA has become possible, meaning a pathogen could be made more contagious and could spread like wildfire if left unattended by the researchers working on it. Other potential uses of CRISPR include the creation of "killer mosquitoes", plagues that have the ability to wipe out staple crops, and even a virus that could snip at an individual's DNA.76 The underlying question that arises here is whether genome editing really deserves to be considered a potential threat on par with nuclear weapons or cyber hacking.
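The guide-RNA targeting step described above can be illustrated in code. The sketch below is a toy model, not a bioinformatics tool: the sequences, the 16-nt guide length, the exact-match rule and the simplified "NGG" PAM check are all illustrative assumptions (real Cas9 targeting tolerates mismatches and involves far more biology than string matching).

```python
# Toy illustration of CRISPR-Cas9 targeting: the guide RNA specifies a
# protospacer sequence; Cas9 (for SpCas9) also requires an adjacent "NGG"
# PAM motif, and cuts roughly 3 bp upstream of the PAM. All sequences and
# parameters here are made up for demonstration.

def find_cut_sites(genome: str, guide: str, pam_tail: str = "GG") -> list[int]:
    """Return 0-based cut positions where `guide` matches and an NGG PAM follows."""
    sites = []
    glen = len(guide)
    for i in range(len(genome) - glen - len(pam_tail)):
        protospacer = genome[i:i + glen]
        # PAM is "NGG": any base at i+glen, then the fixed "GG" tail
        pam_ok = genome[i + glen + 1:i + glen + 3] == pam_tail
        if protospacer == guide and pam_ok:
            sites.append(i + glen - 3)  # blunt cut ~3 bp upstream of the PAM
    return sites

genome = "ATGCCGTACGATCGATTACGTGGTCCTAGG"  # hypothetical sequence
guide = "CGTACGATCGATTACG"                 # hypothetical 16-nt guide
print(find_cut_sites(genome, guide))       # → [17]
```

After the cut position is identified, the cell's own repair machinery (or a supplied DNA template) determines what sequence ends up at the break, which is the step the main text describes as removing and adding DNA.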
A number of members of the scientific community, as elucidated in this paper, explain how genome editing could be a dangerous invention. With advancements in the field of biotechnology, genome editing could destabilise the traditional risk equation in this field; if used carefully, however, it need not pose a threat to the world, though this does not mean that the misuse of genome editing and of advancements in biotechnology is not a cause for concern. Even if the technology for genome engineering of biological pathogens is backed by sufficient research and used with utmost care and precision, it is not necessarily easy to weaponise. However, if a particular organization strives to create a hazardous pathogen on purpose, one that could perhaps take innumerable
75 "U.S. Scientists use CRISPR to Fix Genetic Disease in Human Embryos for the First Time", https://time.com/4882855/crispr-gene-editing-human-embryo/.
76 "Top U.S. Intelligence Official Calls Gene Editing a WMD Threat", https://www.technologyreview.com/2016/02/09/71575/top-us-intelligence-official-calls-geneediting-a-wmd-threat.



lives, then, under such circumstances, if a country does not have enough resources to mitigate such a pathogen, it could prove to be the most dangerous creation of biotechnology.

Bio-warfare before Genome Editing

Advancements in technology such as CRISPR have shown immense possibilities in the field of biowarfare; however, biological weapons were a primary concern for many countries even before gene editing was developed or known. The first recorded use of a biological pathogen as a weapon of war can be traced back to around 600 BC, when Solon, an Athenian statesman, poisoned enemy water supplies during the siege of Krissa. Another pertinent event occurred during the siege of Caffa in 1346 AD, when the Mongol army catapulted plague-infested corpses into the city, contributing to the 14th-century Black Death pandemic, which claimed as much as two-thirds of Europe's population. Biological weapons were banned internationally by the 1925 Geneva Protocol; however, state biowarfare programs were still carried out in large numbers, and the use of bioweapons increased sharply during the Second World War and the Cold War. In 1972, as cases of biological weapons use rose and countries found concrete evidence of the use of biological pathogens against enemies, 103 nations signed the Biological Weapons Convention treaty. The treaty specifically banned the creation and use of biological weaponry. It also initially aimed to ban research relating to the creation of biological arsenals even for defensive purposes; however, defensive research activities were later made permissible.
In fact, the Biological Weapons Convention (hereinafter the "BWC") imposes a duty upon signatories to submit information pertaining to their biological research programs to the United Nations; violations, if any, are to be reported to the UN Security Council, which may order an inspection. However, there is a catch: the permanent members of the UN Security Council can veto inspections. This shows that there are no proper mechanisms for enforcing an inspection of a particular country's biological research activities. Furthermore, the line separating permissible defensive biological research from offensive research is quite murky and remains a subject of



controversy. The actual number of biological weapons produced by any particular country remains unknown, and the pathologist Dr. Stefan Riedel has opined that "the number of state-sponsored programs that have engaged in offensive biological weapons research, has increased significantly during the last 30 years."77

Multiple Uses

Biological warfare is a potential threat and will remain one for a significant amount of time; genome editing technology, meanwhile, could hypothetically bring about innumerable advancements while also escalating matters. Genome editing falls under the ambit of "dual use" research and technology: it has multiple uses, with the ability to create something phenomenal and, at the same time, the ability to cause destruction. Genome editing could open multiple avenues and enable a number of industries to flourish; however, the intentions of the organizations making use of genomic data will go a long way in determining whether the technology surrounding genomic data and genome editing is a positive or a negative, and ultimately the factor that determines whether an activity is positive or negative is the perspective of the individuals analysing it. A particular activity could be regarded as positive in the eyes of a few individuals yet appear negative to others. Genome editing could be used to make the world a better place: for instance, to curb the existence of disease-carrying mosquitoes, or to make antibodies or medicines that could perhaps cure hitherto incurable diseases. Such applications of genomic data might be appreciated worldwide; however, certain cultures across the globe could consider the practice sacrilegious and strive to abolish it.
In order for the scientific community to accept that genome editing could be used to make the world a better place, it is imperative for biotech scientists across the globe to come together; the scientific community should find the key to the solution by taking risks and engaging in discussions about the research activities it could carry out.

Genome Editing with Ease

A growing concern that arises here pertains to individuals who are not scientists.
77 Stefan Riedel, MD, PhD, Biological Warfare and Bioterrorism: A Historical Review, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1200679/.


192

Artificial Intelligence and Policy in India, Volume 2

These individuals could take up dangerous research activities in the field of genome editing by themselves, since a number of "do-it-yourself" (DIY) genome editing kits are readily available on the market at relatively low prices, enabling anyone, anywhere to edit the DNA78 of an individual or of an organism using CRISPR technology. What is unknown at present is whether these kits pose a potential security threat; such threats can be evaluated against two major criteria: the likelihood of an attack and its potential impact. Where the "highest" or "greatest" risks lie will depend entirely on these two criteria. Taking likelihood first, the most predictable attacks would come from low-powered actors; their impact would likely not be significant and would probably rely on traditional approaches using DNA pathogens that already exist, in which case the risks can be easily characterised and assessed. DIY genome editors may experiment on a large number of aspects and their research may be broad, but it is quite unlikely that they would be able to produce a biological agent with the capacity to cause widespread havoc. What could actually be regarded as a serious threat is when companies or organizations that have the power and the resources to carry out sophisticated technical analysis put those resources into genome editing.
A lot of biotech companies have the requisite resources and may also possess the technological competence required to excel in biowarfare weapon manufacturing; such resources are not easy to acquire at present, but this remains a threat that governments and countries need to assess and watch for.
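The two-criterion evaluation just described, likelihood and potential impact, can be sketched as a simple qualitative risk matrix. This is an illustration only: the actor categories, level labels and multiplicative scoring below are assumptions for demonstration, not drawn from any framework cited in this paper.

```python
# Toy qualitative risk matrix for the two criteria discussed above:
# likelihood of an attack and its potential impact. Actor categories
# and ratings are illustrative assumptions only.

LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_score(likelihood: str, impact: str) -> int:
    """Combine the two criteria into a single ordinal score (1-9)."""
    return LEVELS[likelihood] * LEVELS[impact]

actors = {
    # DIY editors: attacks comparatively likely, but low-impact
    "DIY genome editor": ("high", "low"),
    # Well-resourced organizations: less likely, potentially severe
    "resourced organization": ("low", "high"),
    "state-run program": ("medium", "high"),
}

for actor, (likelihood, impact) in actors.items():
    print(f"{actor}: {risk_score(likelihood, impact)}")
```

Under this sketch, a high-likelihood/low-impact actor and a low-likelihood/high-impact actor score the same, which mirrors the text's point that the "greatest" risk depends on how the two criteria are weighed against each other.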

Bioweapon Programs

A lot of countries are carrying out state-run programs striving to create a large-scale bioweapon armoury, and this could be a huge threat, perhaps a double threat, since there is always the possibility of an accidental release of such agents, which could then be misused by organizations or individuals to carry out malicious activities. It is imperative to highlight that accidental releases have occurred before. In 1979, there was an accidental release of aerosolized anthrax from the Sverdlovsk (now Ekaterinburg) bioweapons
78 "Mail-Order CRISPR Kits Allow Absolutely Anyone to Hack DNA", https://www.scientificamerican.com/article/mail-order-crispr-kits-allow-absolutely-anyone-tohack-dna/.


Artificial Intelligence and Policy in India


production facility in the Soviet Union. A clogged air filter was removed by the maintenance team but never replaced, with devastating consequences: around ninety-four people were affected by the accidental release of aerosolized anthrax, and approximately sixty-four of them died, along with a number of livestock.79 The Soviet secret police played an active role in covering up the cause of the outbreak, though years later the Soviet administration took responsibility and admitted its real cause. Similarly, a facility under the control of the US biodefense establishment “failed to kill the anthrax” that it sent out for various lab trials, and instead ended up sending devastatingly live anthrax around the globe. Luckily, no individual was infected, and in 2015 a government investigation80 uncovered that over the preceding decade “approximately 86 facilities situated in the United States and seven other countries, in turn, received low concentrations of live anthrax and spore samples, which were assumed to have been completely deactivated.”81 These incidents pale in comparison with Japan’s activities in the 1930s and 1940s. During the Second World War, Japan intentionally used biological weapons in China, killing approximately 30,000 people. The Japanese may have intended to target only a few villages, but the technology of the day gave them no means of controlling the spread of the epidemic they had caused.
In fact, a number of reports suggest that, as a result of the release, soldiers of the Japanese Army were themselves severely infected by the biological weapon it had unleashed in the biological massacre of 1941.82 Despite the ban imposed on the production of biological weapons, many countries are drawing on advances in the fields of artificial intelligence and machine learning to research and manufacture genome-based biological weapons. Indeed, it is rumoured that the Soviets made full use of AI in their research, using tools to answer key questions about the capabilities a country needs in order to make biological weapons. The BWC prohibits only offensive research; under the garb of a defensive programme, however, an

79 “The 1979 Anthrax Leak in Sverdlovsk”, https://www.pbs.org/wgbh/pages/front/shows/plague/sverdlovsk/
80 US Department of Defence Archives: https://www.defense.gov/Portals/1/features/2015/0615_labstats/Review-Committee-Report-Final.pdf
81 US Department of Defence Archives: https://www.defense.gov/Portals/1/features/2015/0615_labstats/Review-Committee-Report-Final.pdf
82 Stefan Riedel, MD, PhD, “Biological Warfare and Bioterrorism: A Historical Review”, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1200679/



individual or an organization could, with the help of machine learning, input genetic data into its systems and run a full-fledged research and development programme to work out what devices it could make in order to develop its biological weaponry. Once the requisite research is done, a country only needs the capacity to scale up production quickly if it wishes to gain an edge over other countries. The Soviets are rumoured to have built “a set of state-based commercial infrastructure which would enable it to make vaccines”, operating on a routine basis, yet such facilities could very easily shift from making vaccines to making weapons to stock a bioweapon armoury. Many countries have been rumoured to carry out similar secret operations, and many scientists and biotechnologists are of the opinion that states do have the capability to accelerate their growth in the field of bioweapons, a capability amplified by constant advances in artificial intelligence. CRISPR is one highly effective example of such a technology, and certain countries are combining it with their own research. A few countries may even have a fully prepared biological weapons programme ready to unleash on the real world, though they would first need a way to turn their existing infrastructure towards weapons production. What is pertinent to note here is that all of these components would in fact be permissible under the provisions of the current international law regime.

Biological Weapons Convention

The reality that bioweapons can be developed and used is unsettling, and it raises innumerable questions about the efficacy of the Biological Weapons Convention (BWC). A ban on biologically generated weapons is certainly the need of the hour, yet a number of countries are secretly carrying out research in this field: machine learning and artificial intelligence have matured to the point where biotechnologists can develop computer algorithms that improve with experience and research, designed specifically to help analyse huge data sets of genomic sequencing. Machine learning algorithms have proven useful for analysing large sets of genomic sequencing data. Supervised learning methods for gene identification require biotechnologists to input labelled DNA sequences marking the start and end locations of a particular gene. Furthermore, the algorithm is coded in a way that enables the model to identify and



understand the general properties of the genes, helping scientists and biotechnologists understand DNA sequencing patterns and the locations of stop codons. The model then learns properties that enable it to automatically identify additional genes in the data sets provided to it, picking out genes that resemble those in its training data. The BWC does not prohibit any of these activities; research is very much allowed under its provisions, which certainly motivates organizations to take advantage of the loopholes that persist. Furthermore, for deep learning algorithms to function in a proper and systematic manner, loss functions (which measure how accurate a single prediction is) and risk functions (which measure the average loss incurred when the system is put to the test) are built into the model to adjust for the false predictions the algorithm may make. Where the data needed for test runs is unavailable, unsupervised learning methods are used instead; these can discover genes of interest and help a scientist extract other important information from a sequenced genome. It is also necessary to understand that the ban on biological weapons was partly motivated by the ban imposed on chemical weapons. The two were traditionally dealt with together, and the 1925 Geneva Protocol banned the use of both; the ban on chemical weapons was eventually dropped from the BWC after the UK submitted the original proposal for the Convention in 1969.
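The distinction between loss and risk functions just described can be made concrete with a minimal sketch. This is a toy illustration, not drawn from the source: the function names, the squared-error loss and the example values are all assumptions.

```python
# Toy sketch of the loss/risk distinction described above.
# All names and values here are illustrative assumptions.

def squared_loss(prediction: float, actual: float) -> float:
    """Loss function: measures how inaccurate a single prediction is."""
    return (prediction - actual) ** 2

def empirical_risk(predictions, actuals, loss=squared_loss) -> float:
    """Risk function: the average loss incurred over a whole test set."""
    pairs = list(zip(predictions, actuals))
    return sum(loss(p, a) for p, a in pairs) / len(pairs)

# E.g. a model scoring how likely each DNA window is to contain a gene
# start, compared against 0/1 labels supplied by biotechnologists:
predicted = [0.9, 0.2, 0.8]
labelled = [1.0, 0.0, 1.0]
print(empirical_risk(predicted, labelled))  # average of the three losses
```

Training then amounts to adjusting the model so that this average loss falls, which is how the “adjustment for false predictions” mentioned above is carried out in practice.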
It is therefore necessary to understand that if biological weapons research is used in a proper sense, and if each country develops domestic laws regulating the acts of biotechnologists and the companies involved in such research, biological weapons could be the next frontier. The secrecy surrounding the discoveries and research of biological weapons programmes has led countries to carry out their own research. Interestingly, before the First World War the British began research in the field of bioweapons; the Germans became aware of this and started funding their own. During the war the British stopped pursuing their research, but this fact was unknown to the Germans, who went on researching and making bioweapons in the belief that they had to stay ahead of their competitors. By the time the Second World War started, Germany had no inventories of bioweapons left, yet the Allied powers believed it still possessed them, which led the United States to commission its defence wing to begin research on bioweapons.



Conclusions

Genome editing could be a “game changer” for bioweapons, though it is likely to remain an enabling technology in the short to medium term, and perhaps the long term as well. The risk of its use in biowarfare is real; at present, its chief impact is to make innovation faster, cheaper and more reliable, while in some respects reviving traditional approaches. With advances in artificial intelligence and machine learning, biotechnology is bound to evolve, and biowarfare with it. Machine learning is complex, and algorithms can be built that change the way a particular data set is analysed. Which machine learning method biotechnologists adopt will depend on the nature and characteristics of the data available to them, and on the aim and purpose they have in generating or developing such pathogens. It may, for instance, become feasible for governments to test and alter specific sets of genes in their populations: imagine a government striving to make an aerosolizing genome editor that knocks out genes harmful to a population. Even that would have its repercussions.



{ Miscellaneous Works }




15 AI and Fintech Governance in India: A Critical Review

Bikram Bhadra1

1 Research Contributor, Indian Society of Artificial Intelligence & Law, India.

research@isail.in

Synopsis. This is a Discussion Paper for the Indian Strategy for AI and Law Programme.

Introduction

“Our Government believes Artificial Intelligence, in different forms, can help us achieve the $5 trillion economy benchmark over the next five years, but also help us do it effectively and efficiently.”83

This excerpt from a speech by the current Minister of Commerce & Industry and Railways, Mr. Piyush Goyal, delivered at the inaugural event of the National Stock Exchange Knowledge Hub, places on record the current Government’s vision for AI. The Minister further remarked that the Prime Minister has held various meetings with ministries and officials in order to understand the importance of Artificial Intelligence and how it can be put to good use in the functioning of the Indian economy.84 From these excerpts, we may gauge the credence AI brings with it. If we examine the flip side of the coin, however, Mr. Elon Musk,85 an engineer and technology entrepreneur, has a completely contrasting view of AI:

83 ET Bureau, AI, Machine Learning Can Help Achieve $5 Trillion Target: Piyush Goyal, THE ECONOMIC TIMES (Jan. 6, 2020), https://economictimes.indiatimes.com/news/economy/indicators/aimachine-learning-canhelp-achieve-5-trillion-target-piyushgoyal/articleshow/73129228.cms?from=mdr
84 Ibid.
85 A renowned name in the world of technology and the Founder and CEO of SpaceX, CEO of Tesla, Inc. and other undertakings; Elon Musk is a well-known name in the realm of Artificial Intelligence and has performed several in-depth researches on the same.



“I think the danger of AI is much greater than the danger of nuclear warheads by a lot and nobody would suggest that we allow anyone to build nuclear warheads if they want. That would be insane. And mark my words; AI is far more dangerous than nukes. So why do we have no regulatory oversight? This is insane.”86

Although Mr. Elon Musk has a very strong view of AI’s pitfalls, he also provides a solution: in his opinion, ‘regulatory oversight’ has the potential to keep AI in check. This expert opinion is thus one of the reasons why the regulation of AI is considered necessary. In order to understand the regulatory aspect of AI, however, it is pertinent to understand its technological counterpart first.

Precautionary Principle or Proactionary Principle

The Precautionary Principle is a strategy under which the government adopts precautionary measures in cases where there is scientific evidence that certain hazards are associated with the use or deployment of certain policies. The case being evaluated here is the use of AI in financial institutions (FIs). It is now known, on the basis of scientific evidence, that several risks are associated with the use of AI.87 The Precautionary Principle would accordingly recommend that the Indian Government take the necessary precautions to curb the use of AI in FIs and permit only such usage as is risk-free. Such an approach would seem all the more necessary because of the very high stakes involved in FIs. It is true that this approach also has its downside: the use of AI technology would never see the light of day in FIs.

The Proactionary Principle endorses the view contrary to that of the Precautionary Principle. It supports the spirit of inquiry and reform.88 It upholds the freedom of people to pursue technological innovations that are valuable and essential to humanity. In all such cases, the government endeavours to protect the freedom of the people to innovate and progress. It is interesting to note that one of the Fundamental Duties of Indian citizens as

86 Catherine Clifford, Elon Musk: ‘Mark my words – AI is far more dangerous than nukes’, CNBC (Mar. 13, 2018), https://www.cnbc.com/2018/03/13/elon-musk-at-sxsw-a-i-is-more-dangerousthan-nuclear weapons.html
87 Francis D’sa, The Dangers of Artificial Intelligence, THE ASIAN AGE (Jan. 2019), https://www.asianage.com/technology/in-other-news/080119/the-dangers-of-artificialintelligence.html
88 Excerpt from Article 51A(h) of the Indian Constitution.



mentioned under the Indian Constitution also endorses the same view.89

In addition to this, it is worth noting that until the Financial Crisis of 2008, FIs in India were quite comfortably functioning on the Precautionary Principle; not many technological developments were introduced. But as soon as technological start-ups began to emerge, FIs were constrained to adopt the services they provided, as they faced tough competition and a looming threat of losing their customer base. Competition, then, is one yardstick that pushes technological development.

Background

Artificial Intelligence is the acquired ability of a computer system to process information and data and generate outcomes inherently similar to those derived by way of human thought. The human race has only recently become acquainted with this technology, and it is at a very nascent stage of development. Different sectors are currently in the process of deploying it, attempting to hand over to it redundant tasks performed by human beings. In this race of deployment, the finance sector has topped the list in India. Services that combine financial services and technology are termed Financial Technologies (FinTech). The finance sector, which includes services such as the sanctioning of loans, current and savings accounts, wealth management, insurance and net banking, has come to rely greatly on this technology for its day-to-day requirements. Since all these transactions involve wealth in some form or other, high stakes are involved, and the need for a systematised environment and controlled operations becomes manifold. But because AI is a new technology whose deployment in India is also at a very nascent stage, we have negligible to no laws governing it. Moreover, because the technology is still maturing, several risks are associated with its functionality and outputs. This is why interpreting the role of AI and FinTech governance becomes paramount. AI has the ability to change the manner in which financial services and products are delivered, processed, operated and assessed. Financial institutions are currently utilising AI in the development of chatbots, the analysis of creditworthiness, the determination of

89 ROMAN V. YAMPOLSKIY, ARTIFICIAL INTELLIGENCE SAFETY AND SECURITY 28 (CRC Press, 2018).



Social Loan Quotient, verification of KYC information, regulatory compliance, processing of large volumes of data, identification of suspicious transactions, predictive data analytics for predicting consumer behaviour, efficiency improvement, contract management, betterment of wealth management services, strengthening of cyber security, and so on. In short, the use of AI in the financial sector is becoming deeply embedded and indispensable. However, in a recent survey by PwC India, 93% of participants said they were unsure about the deployment of AI techniques by financial institutions and had glaring concerns about data privacy. Consumers’ faith in financial institutions may thus dwindle because of this practice of increased use of AI. Hence, research on this topic becomes indispensable and absolutely necessary. There is an urgent need to assess the manner in which AI is deployed through FinTech and to evaluate whether such use requires regulatory intervention. If it does, should the development of AI be given priority, or should precautionary protection of the interests of the larger group prevail?
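As a concrete illustration of one of the uses listed above, the identification of suspicious transactions can be sketched as a simple statistical anomaly rule. This is a hypothetical toy example, not any financial institution’s actual method; the function name, sample data and threshold are assumptions.

```python
# Hypothetical sketch: flag transactions whose amount is far above a
# customer's usual pattern. Real FI systems are far more sophisticated.
from statistics import mean, stdev

def flag_suspicious(amounts, threshold=2.0):
    """Return indices of amounts more than `threshold` sample standard
    deviations above the mean of the history -- a crude anomaly rule."""
    if len(amounts) < 2:
        return []  # not enough history to judge
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []  # all amounts identical, nothing stands out
    return [i for i, a in enumerate(amounts) if (a - mu) / sigma > threshold]

history = [120, 95, 110, 130, 105, 98, 5000]  # one obvious outlier
print(flag_suspicious(history))  # index of the outlying transaction
```

Even this crude rule shows why regulatory questions arise: the threshold is an arbitrary design choice, and whoever sets it decides which customers are flagged.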

Research Objectives

This Research shall have the following objectives:
• To understand the AI technology and the allied principles on which it operates, in a simplistic manner.
• To investigate the areas where FinTech deploys AI and the hazards and risks associated with such usage.
• To explore the existing laws, regulatory framework and judicial precedents pertaining to the use of AI in the financial sectors of India and a few other jurisdictions.
• To find deficiencies in the existing laws and regulatory framework governing the use of AI in FIs in India and a few other jurisdictions, and to identify the risks associated with such deficiencies.
• To provide practical solutions in respect of the lacunae and loopholes identified upon concluding the Research.

Research Questions

• What is AI and how is its use relevant to FinTech?
• Why is the synergy of AI and financial institutions a worrisome issue?
• Are there any laws or policies in India and a few other jurisdictions which pertain to the regulation of the use of AI and FinTech?
• Should India adopt a precautionary or a proactionary policy for developing AI in the financial sector, and why?



• Is there any manner in which the use of AI by financial institutions can be regulated without hampering its development process?

Hypothesis

At the given juncture, India’s legal and regulatory policy for the deployment of Artificial Intelligence in the financial sector must be precautionary, not proactionary.

Research Methodology

The Research is doctrinal in nature, and the methodology adopted is secondary research. This has included analysing and drawing interpretations and inferences from sources such as books, articles, websites, journals, interviews, press releases, discussion papers, government reports, newspaper reports, reports of international bodies, judicial pronouncements and annual reports. The Research is an in-depth one. An attempt has been made to overcome the limitation of the very little insight available on the market practices followed by FIs in relation to the use of AI; this has been done by perusing several articles and governmental reports on the subject.

Literature Review

According to the OECD (2019), “AI has pervasive, far-reaching and global implications that are transforming societies, economic sectors and the world of work, and are likely to increasingly do so in the future.” With the potential of AI in mind, many public and private institutions have investigated the application of AI to financial services, producing various research reports with unique perspectives and methodologies. To find existing literature for the present Research, the keywords ‘Artificial Intelligence’, ‘Fintech’, ‘Regtech’, ‘Machine Learning’, ‘Big Data’, ‘AI’ and/or ‘Data Analytics’ were used synonymously, in combination with other keywords such as ‘Banks’, ‘Financial Institutions’, ‘Financial Sector’, ‘NBFC’ and ‘Banking and Finance’. Searches with these keywords were performed on various search engines and information repositories, including Google, Bing, IEEE, SSRN, National Digital Library of India, ProQuest, Directory of Open Access Books, Google Scholar, Project Gutenberg, AidData and Oxford Open. Thousands of articles with similar keywords were identified; these were filtered to find the most relevant literary sources, which are:
1. ‘Report of Committee – B on Leveraging A.I. for Identifying National Missions in Key Sectors’ by the Ministry of Electronics & Information



Technology (July, 2019)90: This Report elucidates the role AI is capable of playing in various sectors in India, analysing its juxtaposition in around 17 sectors including Agriculture, Water Resources, Education, Specially Abled, Transportation, Railways, Energy, Disaster Management, Legal and Finance. Its findings on the finance sector record that India is seeing an increase in the number of financial fraud cases resulting in heavy losses; the reason cited is the presence of loopholes in the latest digital payment channels, which are exploited by scammers. The Report further records how essential it has become at the current stage to develop AI to curb such frauds, and points out that our banking sector requires an Artificial Intelligence technology capable of predicting which loan accounts may turn into bad debts. In short, the Report has a futuristic approach and identifies the problems currently faced by the finance sector; the current Research intends to provide a legal view on the issue.
2. ‘Opportunities and Risks of Artificial Intelligence in the Financial Services Industry’ by Christian B. Westermann (November, 2018)91: The aim of this Article is to highlight the opportunities and risks concerning the use of AI in the financial industry. The Article notes that AI is mainly used for four purposes: assistance, augmentation, automation and autonomous intelligence. The Author acknowledges that the use of this technology carries performance-based, security-related and control-related risks, and suggests curbing them by installing a framework of policies, procedures, controls and minimum requirements for each group of enterprises. The Author keeps the Article open-ended, creating room for further research on the subject.
This is because the use of Artificial Intelligence is itself at a very nascent stage; accordingly, value shall be added to it by way of the current Research.
3. ‘AI, Machine Learning and Big Data’ by Priti Suri, Arya Tripathy and Janarth Visvanathan (April, 2019)92: The Authors of this Article note the anticipation that only big companies will be using AI. They are of the opinion that data protection and privacy are essential and must be recognised when AI is being developed. The Authors also place reliance upon the European Union General Data Protection Regulation in order to fathom the importance

90 Report available on the website of the Ministry of Electronics & Information Technology, Government of India: https://meity.gov.in/artificial-intelligence-committees-reports
91 Article available on the website of PricewaterhouseCoopers, Switzerland: https://www.pwc.ch/en/insights/fs/opportunities-and-risks-of-artificial-intelligence-in-thefinancial-services-industry.html
92 Article authored by a team from the law firm PSA (Priti Suri & Associates), based in New Delhi and Chennai, and published on the website of Global Legal Insights.



of the Right to Privacy. The Article also points out that the applicability of the Competition Act, 2002 would be paramount for AI, because reliance on Big Data can pose potential risks to market competition: pricing algorithms can be used to the detriment of other competitors owing to asymmetrical access to data. The Article records that the current legal structure for data protection would not suffice to protect the public interest, and proposes according civil liability to those persons who manufacture or sell Artificial Intelligence products in order to increase accountability. The current Research would add value to the Article by exploring the current legal remedies available.
4. Report of the ‘Artificial Intelligence Task Force’ by the Task Force on Artificial Intelligence for India’s Economic Transformation, constituted by the Government of India’s Ministry of Commerce and Industry (2019)93: The Report strongly notes that the most fundamental challenge in introducing any technology into any Indian sector is the structural obstacle to adopting it. It further notes that the challenges linked with the adoption of AI in India include the collection, validation, correlation, archiving, standardisation, accessibility and distribution of the relevant data; these are substantial challenges because AI is completely dependent on data. The Report analyses the juxtaposition of AI in ten important Indian domains, including FinTech, and notes that the challenges faced by FinTech companies include balancing the scale of production with innovation and anticipating market demand, alongside significant challenges of data confidentiality and data access. In this Research, we shall go one step further and explore the legal implications of the use of AI.
5.
Report of the ‘Steering Committee on Fintech Related Issues’ by the Department of Economic Affairs, Ministry of Finance, Government of India (2019)94: The Report acknowledges a surge in the usage of chatbots in India’s financial sector. As per the Report, AI is deployed in services such as chatbots and virtual assistants, credit rating services, regulatory compliance, increasing efficiency in business processes, wealth management practices and cyber-security. The Report recommends that all FIs adopt Regulation Technology, highlights that regulators must take a neutral view while regulating, and encourages them to develop regulatory sandboxes and to make databases openly available and accessible to all FIs. The

93 Report available on the website of the Artificial Intelligence Task Force, constituted by the Ministry of Commerce & Industry under the Indian Government: https://www.aitf.org.in/
94 A Steering Committee under the Chairmanship of Mr. Subhash Chandra Garg was set up by the Ministry of Finance to explore FinTech-related issues.



Report records its apprehension about the consumer protection extended under the deployment of Artificial Intelligence technology, given the information asymmetry, loss of time and rampant incentivising in our society. The current Research shall rely on the Report to identify the latest trends in the finance industry and to suggest a method of regulating AI.
6. Abhivardhan’s book ‘Artificial Intelligence and Ethics and International Law: An Introduction’ (ISBN 9789388511629) provides a bird’s-eye view of the dimensional ambit of innovation and approach through Artificial Intelligence and internationalism. It offers an illustrative introduction to the idea of AI and International Law, covers the wider principles of data protection and their relation to AI in legal discourse, and provides innovative solutions towards a better future. Since the book also presents generic insights into the relationship of AI with other technical and legal innovations such as blockchain, data visualisation, social media and AI ethics, it offers essential and significant resources for this Research, giving an edge to the overall theme of this Research Paper on AI and FinTech governance.
7. ‘Formulating AI Norms: Intelligent Systems and Human Values’ by Trisha Ray (September, 2019)95: The Author points out that the framework governing AI must include three major factors: legality, ethicality and robustness. In terms of legality, the policy must include set standards to be followed while creating AI, along with punishment for violations. In terms of ethicality, the policy must be socially inclusive and should require AI not to be discriminatory. In terms of robustness, the policy must include technical measures for the safety of stakeholders. The current Research aims to add value to this Article by suggesting a policy for AI regulation.

History of AI
Between the years 1930 and 1950, several research projects were carried out and breakthrough discoveries were made in the world of computer science. From the development of the first functional program-controlled computer, to a computer that could play the Chinese mathematical strategy game Nim, to the publication of various papers proposing models in which computers assist humans in their day-to-day operations, all of these events laid the cornerstone for the introduction of AI.96 Another major benchmark in the

96 STUART J. RUSSELL & PETER NORVIG, ARTIFICIAL INTELLIGENCE: A MODERN APPROACH 9-16 (Prentice-Hall, Inc. 1995)



road to the development of AI was the research paper titled 'Computing Machinery and Intelligence', authored by Mr. Alan Turing and published in the year 1950. This research paper had a futuristic approach and posed the question, 'Can machines think?'. It also proposed the famous 'Turing Test', which states that if a computer can hold a conversation with a human such that the human is unable to differentiate whether he/she is interacting with another human or a machine, then such a machine would be capable of being categorised as a 'Thinking Machine'. Thereafter, in the year 1956, Mr. John McCarthy, a computer and cognitive scientist, organised the Dartmouth Summer Research Project at Dartmouth College in order to discuss, clarify and develop the concept of 'Thinking Machines' with a diversified group of researchers hailing from different disciplines. It was in this workshop that the term 'Artificial Intelligence' was coined by Mr. John McCarthy, and he has therefore been regarded as one of the founders of AI. From here onwards there was no turning back, and discoveries followed one after the other, so much so that AI technology is now used even in our daily lives. Thus, it is interesting to note how the advent of AI took place in a summer workshop, and how the dreams of a group of people were turned into reality by the next generation of computer researchers. An attempt to understand the meaning of AI is made in the following part.

What is Artificial Intelligence?
The definition of AI depends on who is defining it. There is no single globally recognised definition of AI.97 Some of the plausible definitions are cited below:

• The well-known applied mathematician and computer programmer, Mr. Richard Bellman, defined AI in 1978 as, “The automation of activities that we associate with human thinking, activities such as decision-making, problem solving, learning.”98

• Mr. Patrick Winston, a famous computer scientist and professor, defined AI in 1992 as, “The study of the computations that make it possible to perceive, reason and act.”

97 Pei Wang, On Defining Artificial Intelligence, JOURNAL OF ARTIFICIAL GENERAL INTELLIGENCE 10(2) 2, 1-37 (2019)
98 Supra note 30, p. 5



• The IEEE, a very well-known association of electronics and electrical engineers, defined AI in June 2019 as, “Artificial Intelligence is that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment.”99

Machine Learning and Deep Learning
The concept of Machine Learning (ML) simply means “the ability to learn without being explicitly programmed”.100 ML is therefore the process of feeding data, statistics and information into a computer system so that it can 'learn' the method in which a task must be performed, and it also assists such a system in getting better at that task. These facilitations are made without having to programme the computer system for each instance; the system learns of its own accord from the data available to it and from its past experiences. In short, ML has the power to impact all those sectors which rely on data. The financial sector is the best example here, because it presents abundant data processing requirements. It is pertinent to note that ML is a field of AI.

Deep Learning (DL), on the other hand, is a type of ML that relies on an artificial neural network containing several layers through which data is processed; this processing allows the computer system to learn, make connections and categorise inputs in a 'deep' manner. In simple words, DL is the technological equivalent of the human neural cell network,101 and it has been designed along those lines. DL is the more advanced form of ML and comparatively requires little or negligible human intervention as opposed to ML.

Therefore, both ML and DL are specialised forms of AI: AI is the broadest of the three terms, breakthroughs within AI gave rise to the field of ML, and further breakthroughs within ML gave rise to DL. AI is thus a set of which ML is a subset, and DL is a subset of ML.
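The 'learning from data' idea behind ML can be sketched in a few lines of Python. This is a deliberately minimal illustration, a hand-written one-nearest-neighbour rule over invented transaction amounts, and not any production ML system:

```python
# A minimal, from-scratch sketch of the ML idea of 'learning without being
# explicitly programmed': a one-nearest-neighbour classifier. The labelled
# transaction amounts are invented for illustration.

def train(examples):
    """'Training' here simply memorises the labelled examples."""
    return list(examples)

def predict(model, x):
    """Classify x with the label of the closest remembered example."""
    nearest = min(model, key=lambda example: abs(example[0] - x))
    return nearest[1]

# Labelled history: (transaction amount in INR, label)
history = [(120, "normal"), (95, "normal"), (15000, "suspicious"), (22000, "suspicious")]
model = train(history)
print(predict(model, 180))    # near the small, everyday amounts
print(predict(model, 18000))  # near the large, flagged amounts
```

No rule for any specific amount was ever programmed; the behaviour comes entirely from the data, which is the distinction the text draws between ML and conventional programming.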

Attributes of Artificial Intelligence
One may safely define AI as that branch of computer science which may possess the following attributes:

99 John McCarthy, Basic Questions, STANFORD UNIVERSITY (Nov. 2007)
100 Arnak Kumar, Punit Shukla, Aalekh Sharan & Tanay Mahindru, Discussion Paper: National Strategy for Artificial Intelligence, NITI AAYOG 14 (June 2018)
101 NVIDIA, Deep Learning, DEVELOPER NVIDIA (2020)



• Automation of Activities

AI is believed to reduce or nullify human intervention so that all designated and other tasks are completed by AI without having to programme the machine specifically for each such task.

• Study of Databases
AI is fully capable of utilising the information available in databases in order to predict future trends and act upon such deductions as humans would.

• Prediction and Adaptability102
AI has the capability of discovering patterns, using them to predict future behaviour, and adapting to such patterns.

• Decision Making
AI is capable of making decisions in a standalone manner, without human intervention, once pattern discovery has been successfully performed. It also has the capability to suggest future trends in an accurate manner, upon which decisions can be taken.103

• Human-like Continuous Learning Process
AI is capable of continuous learning, like human beings. It can learn from its past experiences and from the data fed into its systems.

On the basis of the above, it is clear that AI has capabilities somewhat similar to those of its human counterpart. Human beings also make decisions on the basis of their accumulated information. This is the reason why the equivalent attributes of a computer system have been termed Artificial Intelligence. The only difference is that humans possess intelligence as a cognitive faculty, while a computer system has been programmed by humans to possess it, and it is therefore called an Artificial Intelligence.

Types of Artificial Intelligence104
The categorisation of AI has been done into three different types, based on the 'intelligence' possessed by such computer systems:

• Artificial Narrow Intelligence
This kind of AI has a very basic functionality. It only responds in accordance with the data/information which has been made available to the computer system. For example: chatbots and personal assistants like Siri, EVA (HDFC Bank's conversational banking chatbot), Keya (Kotak Mahindra Bank's conversational banking chatbot), SIA (State Bank of India's conversational banking chatbot), etc.

• Artificial General Intelligence
This kind of AI has a comparatively high functionality and comprises those computer systems that are capable of performing skills requiring a high cognition value. All such machines are capable of learning continuously like human beings. For example: self-driving cars, auto-pilot mode, etc.

• Artificial Super Intelligence
This kind of AI has not yet been developed. It aims at endowing computer systems with intelligence greater than that of human beings.

For the purposes of the current Research, we shall only be focusing on Artificial Narrow Intelligence because the same is being utilised in the Indian Financial Sector.

102 Javier Andreu Perez, Fani Deligianni, Daniele Ravi & Guang-Zhong Yang, Artificial Intelligence and Robotics, UK-RAS NETWORK ROBOTICS & AUTONOMOUS SYSTEMS (2016)
103 Rik Marselis & Humayun Shaukat, Machine Intelligence Quality Characteristics: How to Measure the Quality of Artificial Intelligence and Robotics, SOGETI (2018)
104 Dr. C. Vijal, Artificial Intelligence in Indian Banking Sector: Challenges and Opportunities, INTERNATIONAL JOURNAL OF ADVANCED RESEARCH 7(5) 1583, 1581-87 (April, 2019)
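A few lines of Python can illustrate why such systems are called 'narrow': the sketch below is a hypothetical keyword-matching chatbot whose answers, like those of the banking assistants named above, are limited strictly to the data it has been given. The FAQ entries are invented for illustration:

```python
# Hypothetical sketch of an Artificial Narrow Intelligence chatbot: it can
# only answer from the data made available to it. The FAQ entries below
# are invented for illustration, not any bank's actual responses.

FAQ = {
    "balance": "You can check your balance in the mobile app under 'Accounts'.",
    "branch timings": "Branches are open 10 am to 4 pm on working days.",
}

def reply(query):
    """Return the stored answer whose keyword appears in the query."""
    query = query.lower()
    for keyword, answer in FAQ.items():
        if keyword in query:
            return answer
    # Beyond the scope of its data, a narrow system cannot respond.
    return "Sorry, I can only help with balance and branch timing queries."

print(reply("What are your branch timings?"))
print(reply("Can you review my mutual funds?"))
```

The second query falls outside the system's data, so it cannot be answered, which is precisely the limitation of Narrow Intelligence that the chapter describes.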

Advantages of the Use of AI
There are several advantages of using AI. Some of them are listed below:

• AI is packed with intelligence, efficiency and speed. It has the capability of optimising all assigned tasks and delivering accurate results when accurate data is fed into its system.105

• AI is capable of handling huge volumes of data in one go. It therefore has an enhanced ability and superior processing power.106

• A well-programmed AI system is capable of re-programming itself if need be. This can be done without human intervention. Like human beings, the AI also

105 Kaj Stala, Advantages of Artificial Intelligences, Uploads and Digital Minds, INTERNATIONAL JOURNAL OF MACHINE CONSCIOUSNESS 4(1) 276-80, 275-91 (2012)
106 Zheng You & Shaojun Wei, White Paper on AI Chip Technologies, BEIJING INNOVATION CENTER FOR FUTURE CHIPS 5-7 (2018)



learns from its past experiences and changes its algorithms as per the requirement, without any kind of human-stimulated trigger.107

• A consequence of the aforementioned advantages is cost reduction. As soon as AI is deployed, the benefit of cost reduction can be yielded, because it reduces the number of human beings required to do a job.

• The deployment of AI helps in rational decision-making. Humans in most cases have their own biases. AI therefore comes to the rescue, offering products and services that are beneficial in decision-making.

• The use of AI also saves humans from doing repetitive tasks by automating them, for example, processing and analysing the same data again and again.

• The AI technology is available at our disposal round the clock. There are no time restrictions on its availability.

• The AI technology has the ability to transform the financial sector by increasing productivity and changing how the sector operates. AI has also been recognised for increasing cyber security.

• The Indian finance sector relies heavily on AI and is currently the sector with the highest reliance on AI.

Thus, the advantages of AI listed above make it the most sought-after technology known to human beings, because it proposes to resolve most of the technological challenges currently being faced. The resolution of these challenges is essential to make room for more advanced technology. AI is, so far, also the most promising technology. For the world of entrepreneurship and companies, AI is considered a boon because it has a promising outlook and proposes to resolve most of the challenges faced by businesses. This is also the reason why, economically and fiscally, AI is considered a revolutionary technology.108

The Symbiotic Relationship Between AI and Fintech
A bank officer from a public sector bank wrote in his blog in the year 2013,

“Bank Officers never see ‘Sunrise’ and ‘Sunset’ nowadays… They have no

107 Kim Martineau, Toward Artificial Intelligence that Learns to Write Code, MASSACHUSETTS INSTITUTE OF TECHNOLOGY NEWS (June, 2019)
108 James Manyika & Jacques Bughin, The Promise and Challenge of the Age of Artificial Intelligence, MCKINSEY & COMPANY (Oct. 2018)



fixed working hours. Even Saturdays have now become a full working day for them. Their genuine leave is denied or rationed… Bank Officers at the lower levels are mercilessly punished for smaller mistakes… Despite of all this, the Bank Officers are not paid adequately.”109

From the above account, it may be observed that the bank officers in 2013 were yearning for attention because their workload was excessive and had adversely impacted their work-life balance. These observations were made in 2013, when AI was an alien concept to the Indian public sector banks and the bank officers had no option but to perform tasks manually. It was only after the advent of the use of AI in the banking and finance industry that the automation of back-end workflows happened; this, along with other facilitations, increased efficiency and released pressure off the bank officers.110 Not just this, AI has also provided an edge to the FIs by enabling them to offer personalised products and efficient customer service, hence increasing their customer base. The banking business in India has changed dramatically over the last twenty-five years, a change majorly brought about by technological advancement. AI appears much more promising than any other technology introduced to date; experts have even regarded AI as more profound than fire or electricity. This is the reason why the finance sector wants to deploy AI in its functioning.

Dependency of Fintech on AI
There are several reasons with which the Financial Institutions (FIs) justify the usage of AI. Some of these reasons are:

• The Financial Crisis of 2008 made the world realise the high level of systemic risk associated with the FIs, majorly banks and lending institutions. Due to this realisation, most countries started considering them 'too big to fail' and increased the regulations and compliances to which they were subject. As a result, the FIs felt burdened and were unable to cope with the situation. The companies specialising in AI then came to their rescue and offered solutions to manage their affairs.
Their association has continued and grown stronger since then.

• The banking industry has become very unpredictable, financial failure of institutions is rampant and the NPA problem is increasing manifold.111 In such

109 Pannvalan, Pitiable Plight of Public Sector Bank Officers Today, ALLBANKINGSOLUTIONS.COM (2013)
110 Srihari Subudhi, Banking on Artificial Intelligence: Opportunities and Challenges for Banks in India, INTERNATIONAL JOURNAL OF RESEARCH IN COMMERCE, ECONOMICS & MANAGEMENT 9(7) 2, 1-5 (July, 2019)
111 Saloni Shukla & Shilpy Sinha, A Major Crisis May be Brewing for Indian Banks, THE ECONOMIC TIMES (Aug. 2019)



a scenario, making effective decisions becomes very difficult. Therefore, in order to save the FIs from various hazards, reliance is placed on the decision-making skills of AI.

• The FIs have a requirement of maintaining and handling large volumes of data. From customers' KYC details to the patterns they follow in making various bank transactions, the FIs are responsible for tracking and reporting it all. AI specialises in handling large volumes of data, and this is what attracts the FIs to rely upon it.

• As mentioned above, the FIs have to act as a repository of a lot of customer data; in such a situation, the FIs are tempted to bend towards the use of AI.112 This is because the whole model of AI is based around data handling, data analysis and data management, and the ease of use and speed offered by AI is what the FIs require. In addition to the foregoing, the 'Reporting Entities' under the Prevention of Money Laundering Act, 2002 are also required to store the transactional information of each customer until the lapse of five years.113 This proves even more burdensome for them; therefore, the use of AI becomes inevitable.

• As mentioned above, the FIs handle huge amounts of personal data of their customers and potential customers. In today's time, there is a looming threat of this personal information being hacked by unscrupulous persons. Therefore, the FIs also have the responsibility of keeping their repository of data safe from all such threats. Again, AI comes to the rescue of the FIs because of the manner in which it maintains cyber security and surveillance. Needless to say, mere data encryption will not suffice for securing such data.

• In today's time, the FIs have a two-fold responsibility: one towards the customer and the other towards their regulators. The second responsibility gives rise to a lot of compliance requirements. Therefore, huge amounts of data have to be summarised for the purpose of meeting compliance-related requirements and also to keep abreast with the data requirements. AI is a one-click solution to all such requirements and the FIs rely on it.

• It is now common knowledge that future computer systems shall inevitably rely on AI and that it shall be the future of technology.114 Therefore, in order to keep up with the technological trends and also to have an edge over their competitors, the FIs choose to opt for AI.

Thus, the aforementioned are some of the reasons why the FIs rely upon AI. The

112 Sam Kumar, Demystifying Big Data in Banking, BBC (2019)
113 As stated in Section 12(3) of the Prevention of Money Laundering Act, 2002
114 AI Multiple, Future of AI According to Top AI Experts of 2020: In-Depth Guide, AI MULTIPLE (Mar. 2020)



AI relies upon the FIs in order to make its own place in the market, so that more and more people start utilising it for their day-to-day functions. If that happens, there would be demand for further innovation in the field of AI and, in this manner, AI shall benefit and be programmed to attain its fullest potential. Therefore, AI is dependent on the FIs for its further development and innovation, while the FIs depend on AI for numerous reasons, some of which have been stated above.

Areas of Deployment of AI by FIs
Globally, the AI technology is believed to have gained momentum after the Financial Crisis of 2008. This is primarily because, in the aftermath of the Crisis, many FIs were badly hit and laid off employees. These employees, who had knowledge of both technology and the banking industry, collaborated to establish FinTech companies. These FinTech companies were well-equipped to deliver tailor-made products to the banks and FIs as per their requirements. Initially, most of these products were programmed to assist banks and FIs with the regulatory mechanism laid down in the Dodd-Frank Wall Street Reform and Consumer Protection Act of the US.115 This was one of the first occasions where AI was used in the financial sector at a large scale. From that time till today, there has been no looking back, and AI has continued to be used in more and more avenues by the FIs. It is pertinent to note that there are umpteen avenues where AI can be deployed in FIs; from the back-office to the front-office, it is capable of transforming it all.116 Some of the avenues where the FIs are currently utilising AI are:

• Risk Management
Banks and FIs are considered to be systemically important to the economy of every country. They are an essential part of the economic chain which, if broken, could result in a collapse of the whole system. Therefore, the burden on them of remaining solvent is manifold. In addition to this, the recent Yes Bank crisis is believed to have had a domino effect on many small and big businesses, including the government-run Life Insurance Corporation of India. FIs are therefore currently utilising AI to identify risk-prone businesses while sanctioning loans; this identification is currently believed to have an accuracy rate of 83.95%.117
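As a rough illustration of the idea, and not the actual models the cited study evaluates, such loan-sanction screening can be thought of as a scoring function over an applicant's financial features. The feature names, thresholds and point values below are assumptions made purely for this example; a real system would learn such weights from data:

```python
# A hand-weighted sketch of loan-sanction risk screening. The feature
# names, thresholds and point values are illustrative assumptions only.

def risk_score(applicant):
    """Return a risk score out of 10 from simple financial features."""
    score = 0
    if applicant["debt_to_income"] > 0.4:   # heavily leveraged borrower
        score += 4
    if applicant["past_defaults"] > 0:      # prior repayment failures
        score += 4
    if applicant["years_in_business"] < 2:  # little operating history
        score += 2
    return score

borrower = {"debt_to_income": 0.55, "past_defaults": 1, "years_in_business": 5}
print(risk_score(borrower))  # 8 out of 10: flag for manual review
```

The accuracy figure quoted in the text is then simply the fraction of such flagged and non-flagged applicants the model classifies correctly against actual outcomes.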

115 Thomson Reuters Legal Insights Europe, The 2008 Financial Crash Spawned a New Industry in the City – The ‘RegTech’ Boom, THOMSON REUTERS (October, 2019)
116 Dr. G. Anbalagan, New Technological Changes in Indian Banking Sector, INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH AND MANAGEMENT 5(9) 2, 7015-21 (2017)
117 Fengyi Lin, Deron Liang, Shuching Chou & Wing-Sang Chu, The Role of Non-financial Features in Business Crisis Prediction, INTELLIGENT SYSTEMS SOFTWARE LABORATORY (2018)



• Clearing of Cheques118


AI companies like DeepQuanty have come up with solutions in relation to bank cheques. Their AI technology can extract handwritten information on cheques and sort these cheques on the basis of their issuing banks, with a promised accuracy ranging from 90% to 100%. Where AI is utilised for the processing of cheques, it accelerates the speed of the whole process as compared to its human counterpart.

• Sales & Marketing
It is now an established fact that AI is transforming the operations of each and every sector, and marketing is notably one of them. As per a report by Salesforce, there has been growth of around 44% since 2017 in the adoption of AI by marketing professionals, and such use is predicted to grow by around 257% by the year 2022.119

• Increasing Efficiency
Banks (for example, HDFC Bank) have already started using AI to increase their operational efficiency. This is also because there is a transition happening in the way banking is done, from a completely physical mode to a digital mode. The requirement of operational efficiency therefore becomes paramount, and the banks have already started deploying AI in order to achieve it.

• Credit Rating120
Credit rating is a predictive analytic: the credit rating agencies utilise the available information to predict a future financial position. However, in the past there have been several instances where the credit rating agencies performed poorly and gave unreliable information, and the damage was irreversible because the accounts had already turned into NPAs. Therefore, as an alternative solution, AI is being deployed to predict the creditworthiness of potential customers. The AI technology scans the data which potential customers upload on social networking websites, and the ML technique is utilised to tag certain types of social media posts as associated with defaulting behaviour. On the basis of this data, the AI predicts the risk associated with sanctioning a loan to a potential customer, thereby enabling the banks to use AI constructively. It is pertinent to point

118 Tenzin Norzom, Deep Quanty Launches AI-powered Product that can Read Cheques (May, 2020)
119 ANI, IIM Calcutta and TalentSprint to Make Marketing Professionals AI Ready, CIO.COM BY THE ECONOMIC TIMES (February, 2020)
120 Bureau, Business Line, Ahmedabad, Banking Sector to Adopt AI Solutions to Improve Operational Efficiency, THE HINDU BUSINESS LINE (January, 2019)


out at this juncture that such scanning by AI of the private data of an individual is not only violative of his Right to Privacy but is also a constitutional wrong.

• Information Security and Cyber-security121
In today's time, most data is stored in clouds, which carries its own risk of being hacked. Therefore, there has to be an alternative which protects the information and fortifies the cyber security of the FIs. The baton for the same has been picked up by AI. The AI technology is programmed to identify those vulnerabilities in the security system which can be exploited and, upon identification, to plug those loopholes before they are exploited by any unauthorised persons. The technique of DL is utilised here: the whole security system is surveyed by AI to identify threats. However, this surveillance system may also falter at times, owing to the database on the basis of which it operates.

• Customer Service122
The FIs are using AI to enhance their customer services and attract more and more customers because of the level of efficiency AI offers. The AI aggregates and categorises each customer's activity in relation to their account and, upon such categorisation, studies the patterns followed by the customer and accordingly resolves the customer's queries. This process is also instrumental in increasing customer satisfaction, loyalty and engagement, and hence works to the overall benefit of the FI. The major requirement in order to meet all of the above is data, and the challenge here lies in such data aggregation. It must be pointed out at the outset that the data so aggregated may or may not pass the data privacy requirements set out by the Indian Judiciary and Legislature.

• Chatbots and Virtual Assistants
A chatbot is powered by AI and has the ability to simulate a conversation, by written or oral means, with a user in conversational language which the user understands, through media like messaging applications, websites and mobile apps. A chatbot is often described as one of the most advanced and promising mediums of interaction between humans and machines. Virtual

121 Inferences drawn by way of a study of the AI models of DefenseStorm and DarkTrace, which

are platforms for ensuring cybersecurity for banks. Accessible at: https://www.defensestorm.com/why-defensestorm and https://www.darktrace.com/en/technology/ respectively
122 Anthony Spadafora, Companies aren't keeping their cloud data secure, TECHRADAR PRO (October, 2019)



Assistants, on the other hand, are those AI-powered computer systems which are capable of understanding human diction, language and pronunciation and which interact by verbal means only. They are capable of resolving the queries of the human user. However, the query resolution capability of both chatbots and virtual assistants is dependent upon the data and information provided to the computer system; if a query is beyond the scope of such data, the chatbot or virtual assistant will not be able to resolve it. This is the lacuna of the AI technology,123 though researchers are certain that it can be resolved by way of further innovation. Chatbots and virtual assistants are thus widely used in the finance sector at present, and many public and private banks have already utilised them for customer service, delivering timely and satisfactory results.

• Financial Advisory
Another important area where AI is used by the FIs is in providing advisory to potential customers by tailor-making products as per their requirements. The AI technology automates portfolio strategies; it also scans and continuously suggests ways to balance the portfolios as per the customer's risk appetite. It is also true that such technology is at a very nascent stage of development and would require some time to become operational at its full potential.124

• Compliance and Audit
Compliance is one of the most crucial aspects of operating a Financial Institution. Each concerned regulator has laid down several compliance requirements to be met by the respective FIs. It is also true that compliance is a very complicated and time-consuming process when done manually. However, to break the monotony, the AI technology has proposed a solution for handling all compliance-related matters by itself, and this solution is simply referred to as 'RegTech'.125 The main aim of RegTech is to make regulatory compliance easier; however, it is still at a development stage and has not been completely automated. RegTech would be capable of keeping track of all the requirements set out by the respective regulator of the FIs and would assist in processing large amounts of data with efficiency and speed. This would also help in preventing any kind of compliance-related lapses and, in this manner, would reduce wrongful reporting and prevent the possibility of the happening

123 IQVIS, Chatbots & Customer Service: A Match Made in Heaven, IQVIS (2018)
124 Roger Wohlner, How AI is Shaping the Advisory Landscape, INVESTOPEDIA (Oct. 2019)
125 A contraction of the term 'Regulatory Technology'


of any case like that of Infrastructure Leasing & Financial Services Limited126 being repeated.

• Fraud Detection and Prevention of Money Laundering127
Currently, there are several AI products and services available in the market which assist in fraud detection. These products and services also give the banks an upper hand in building a custom fraud detection structure suitable for their needs and use. They make it very convenient for the banks to link their existing databases with them, after which they can immediately start delivering results. Upon analysis of these databases, the AI is capable of giving an insight as to where a fraudulent transaction could have been performed. Based on this intel, the banks may take further steps to investigate such a transaction. In a similar manner, transactions having elements of money laundering can also be tracked successfully by AI.
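The pattern-based flagging described above can be sketched as a simple statistical outlier test. Real fraud engines are far more sophisticated than this, and all figures below are invented for illustration:

```python
# Illustrative sketch (not any vendor's actual product) of AI-style fraud
# screening: flag transactions that deviate sharply from a customer's
# historical spending pattern. All figures are invented for illustration.
from statistics import mean, stdev

def flag_suspicious(history, new_amounts, threshold=3.0):
    """Flag amounts more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(history), stdev(history)
    return [amount for amount in new_amounts if abs(amount - mu) > threshold * sigma]

# A customer's typical card spends, then three incoming transactions.
past = [500, 650, 480, 700, 550, 620, 530]
incoming = [590, 45000, 610]
print(flag_suspicious(past, incoming))  # only the 45000 outlier is flagged
```

A flagged transaction is not proof of fraud; as the text notes, it is the insight on which the bank then investigates further.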

Impact of AI on the Economy
The purpose of using technology has always been to perform tasks with ease, and technology has largely impacted the way we function. The advent of AI has much more in store than just making tasks easier: AI offers enhanced levels of productivity, efficiency and speed. These features of AI are not only reflective of its potential at an organisational level but shall also prove indispensable at the level of the global economy. A recent report on the subject states that the use of AI has the potential of adding around 13 trillion USD to the world economy by 2030.128 Although it is true that AI has the potential of furthering economic well-being, we must also not forget that it is a disruptive technology and may lead to disruptions in the field of employment and, consequently, in the economies too.129 This is because the jobs lost due to the use of AI would have to be compensated for adequately by the governments. Therefore, a wholesome picture is required to be examined. In addition to the foregoing, it must be noted that we are currently at a very early stage of the research and development of AI; we are yet to tap its full potential. While we get there, the transitioning stage would be very difficult to achieve. In this stage, there would be huge gaps between classes of individuals, FIs, economies, businesses etc., because some groups might have the resources

126 The Serious Fraud Investigation Office had reported in this case that there were major audit-related lapses
127 Inferences drawn by way of a study of the AI model of Feedzai, which is a fraud management platform
128 Jacques Bughin, Jeongmin Seong, James Manika, Michael Chui & Raoul Joshi, Discussion Paper: Notes from the AI Frontier: Modeling the Impact of AI on the World Economy, MCKINSEY GLOBAL INSTITUTE 13 (2018)
129 Irving Wladawsky-Berger, The Impact of Artificial Intelligence on the World Economy, THE WALL STREET JOURNAL (2018)



to deploy the use of AI while some others might not have such resources. As a result of this parity, there would be huge gaps amongst all classes equally. Some FIs, people or businesses may further be pushed into poverty and some fight touch the sky due to the advantages garnered from AI. Thus, this will increase pressure on the Global Economy and has the potential to lead to a standstill. In all such cases, it would be paramount for the respective governments to intervene and provide funding to all those classed which cannot afford AI. This will also be helpful in maintaining competition in the market and would prevent any kind of undue advantage from being taken. As per the report cited above, the following are the categorisations of use of AI88: • Only 10% of persons have benefitted by the use of AI and are super active in its adoption, • Around 20% of persons are very slow in embracing the AI technology and are acting cautiously before fully utilising it, and • Around 70% of persons have not invested in AI. This maybe for the simple reason that their capital restrictions do not permit them to do so. From above, it abundantly clear that the introduction of AI is limited to the elite few. At the current times, most of the aspects in relation to AI are speculative in nature and all the speculations would depend upon how and at what pace is the development in technology growing. Therefore, the world economies must also look into that aspect and decide the way forward. However, the same can only be made possible when the governments have formulated a policy which encourages growth of adoption of AI and also provides relaxations for the same, wherever required.

Existing Legal Framework in India

The AI technology has been associated most strongly with the keyword 'Disruption'. Even the Discussion Paper by NITI Aayog titled 'National Strategy for Artificial Intelligence' refers to AI as a disruption on 14 occasions.130 This is for the simple reason that AI is rapidly changing the ways in which we operate. It is true that the word 'Disruption' carries a negative connotation; even Black's Law Dictionary defines 'Disruptive Conduct' as "Disorderly conduct in the context of governmental proceeding."

130 National Strategy for Artificial Intelligence – June 2018, NITI Aayog.

Risks Associated with the Use of AI by Fintech

As we have already seen, AI technology has improved the way in which FIs function and, to say the least, the picture looks dream-like. And it must be kept



in consideration that with great power comes great responsibility. We must therefore not forget that there are several risks associated with that dream-like picture. Let us now study some of them:

• Right to Privacy: The operation of AI depends upon the availability of data. In many cases, the data on which AI relies is not collected and accumulated with consent. This is a clear violation of the Right to Privacy, which is a constitutional right.
• Right to Equality: We have seen in a few cases that the data fed into an AI system caused the output to be discriminatory. At a preliminary level, we are aware that an AI system is capable of discriminating amongst equals, which violates the Right to Equality, a constitutional right. Such discrimination happens because the AI analyses the data wrongly, becomes unstable or is unable to yield correct results because of the way it has been programmed.
• Glitches in technology: AI is, at the end of the day, a machine and, like other machines, is prone to accidents, hang-ups, crashes and glitches. In a study conducted by Deloitte, only 6% of the participants reported no issues with the technology, whereas the remaining 94% faced some kind of issue with the operation of AI.131 These operational issues could be detrimental to the analysis carried out by the AI and may adversely affect the results relied upon by the FIs, rendering the whole exercise futile; when FIs act upon such faulty information, many further adversities arise.
• Easy to Manipulate: For a fully functional AI, data is everything. Where the data input is unclear, the AI gets confused and shows unexpected results. A similar study conducted by the Indian Institute of Science concluded that AI technologies can be easily confused, and that if that is the case then the human race is in trouble. It further added that AI is also easy to hack. On this basis, undue advantage of the technology can be taken by any miscreant to meet their own ends and sabotage the whole purpose of the task.
• Ethical Considerations: By now it has been established that AI is not a pure substitute for its human counterpart. Although it is designed in the manner in which human beings are wired, that is not sufficient to give it a touch of humanity. In addition, AI is prone to making silly mistakes which its human counterpart can avoid. We also do not know how we would protect ourselves from the unintended consequences of the actions of AI; because AI is capable of programming itself, this could prove to be a lethal combination. We have no action plan for coping with such a "complex intelligent system". We are also stuck on whether robots must be accorded any rights at all. This consideration arose after Saudi Arabia recently granted citizenship to its first robot, Sophia.132

131 Thomas Davenport, Jeff Loucks & David Schatsky, Bullish on the Business Value of Cognitive, DELOITTE STATE OF COGNITIVE SURVEY 12 (2017).

• Unattractive Intellectual Property Protection Laws: The current regimes for protecting the IPR pertaining to AI are not at all specialised; innovations in AI must therefore be protected under the existing IPR regime. But AI is a specialised type of technology which may comprise algorithms, hardware and many other components, so its protection under the Copyright Law or the Patent Law could prove insufficient. An urgent need is therefore felt for a specialised regime for protecting AI. This would also encourage innovators in the field of AI and give them the incentive of being recognised for their work.133
• Responsibility for Errors: As mentioned above, AI is currently prone to making errors. Where a loss arises from the use by FIs of output provided by AI, it is not clear who shall bear responsibility for that loss; the loss could also take the form of a failed compliance mandate. Ethical challenges must also be dealt with here: the person programming the AI must do so without any bias, for otherwise the outcome will not meet those ethical requirements.
• Unemployment: Many people from the working class are worried about securing employment in case AI is introduced in a full-fledged manner.
This is because we have seen how AI would replace human beings, with increased efficiency, in performing certain tasks. Employers would definitely opt for AI instead of the human alternative because AI is speedy, efficient and convenient. As a result, several people in FIs might lose their jobs and be unemployed. This is a grave threat.
• FIs with Less Capital Disadvantaged: Because AI is an emerging field, professionals with the necessary expertise are not easily available and, where available, are expensive. Therefore, only big FIs will be able to afford the AI technology. This will further widen the gap between large-scale and small-scale FIs in terms of the advantages and benefits derived from the technology. Customers, too, will prefer the big FIs because, through the use of AI, they will be able to price their financial products and services minimally. This may also trigger the need for intervention by the Competition Commission of India, not least because the data relied upon by AI is closely held by only a few companies.134
• Legal Challenge: In many cases, wrong output generated by the AI could open the FI up to legal challenge. In all such cases, the FI would be subject to unnecessary litigation which could have been avoided.
• Cyber-security: AI technology is not safe from the risk of hacking. FIs handle huge amounts of highly confidential customer data, so the risk of such confidential data being leaked is high.

It is clear that AI is like a double-edged sword: we need it for the benefits it provides, and we fear it for the risks it poses. As seen from the above, however, the risks posed by AI are not very grave and are capable of being tackled by way of regulatory and legal means. If the legal infrastructure is able to regulate them, AI can be used for the beneficial progress of FIs in India without the looming threat of the aforementioned or any additional risks, a win-win situation for all stakeholders.

132 Robert David Hart, Saudi Arabia's Robot Citizen is Eroding Human Rights, QUARTZ (Feb. 2018).
133 James Nurton, The IP Behind the AI Boom, WORLD INTELLECTUAL PROPERTY ORGANISATION MAGAZINE (Feb. 2019).

Statutes

It is pertinent to note that there is no dedicated, specialised statute for monitoring the functioning of AI in India, let alone one relating to FIs. Therefore, we are left with no option but to analyse the other existing statutes for provisions governing AI.

Constitution of India

The Constitution of India is the lex terrae and all other laws emanate from it. It is therefore essential to test whether the use and regulation of AI in India is permitted by the COI. A perusal of Article 51A of the COI135 is instrumental in this study. Clause (h) of this Article lays down that it is the duty of every citizen to "develop scientific temper, humanism and the spirit of inquiry and reform". In simple words, it is the Fundamental Duty of Indian citizens to be inquisitive, intellectual and of inquiring mind.136 This interpretation of the Fundamental Duty makes it incumbent on all Indian citizens to develop a spirit of inquiry for understanding and using AI technology. This will also be beneficial for the consolidated growth of the country.

134 Vishal Mathur, Is the Indian Government Set to Regulate Artificial Intelligence and its Ethics, NEWS18 TECH (Feb. 2019).
135 Titled "Fundamental Duties".
136 Avinash Nagra v. Navodaya Vidyalaya Samiti etc., 1997 (2) SCC 534, Division Bench of the Supreme Court of India, 30.09.1996, para 10.


The Information Technology Act, 2000

The IT Act, 2000 does not address the issue of AI, more so because the Act was last amended in 2008 and the introduction of AI in India took place only after 2008. There is therefore a pressing need to amend the IT Act, 2000. However, the interpretations derived from the IT Act, 2000 can be used indirectly to regulate AI in India as well. The following provisions come in handy:
• The definition of 'Computer' includes an "optical or other high-speed data processing device" which is capable of performing "logical, arithmetic and memory functions".137 This definition would include AI technology within its purview, so the provisions of the IT Act, 2000 would apply to machines operating on AI technology.
• The IT Act, 2000 also defines 'Data', which is the heart and soul of AI. It has been defined to include any "representation of information, knowledge, facts, concepts or instructions" used with a computer. The definition of 'Data' is thus wide enough to include all information fed into an AI-powered computer to generate outcomes.
• An 'Intermediary' has also been defined, under Section 2(w) of the IT Act, 2000, to include all persons "who on behalf of another person" receive, store or "provide any service" in relation to such data. From this definition it can be clearly inferred that persons providing AI-related services to FIs would be considered Intermediaries, because they provide a service to the FIs and store or receive data in order to provide their AI-powered services.
• The provisions of Chapter IX of the IT Act, 2000138 relating to the levy of penalties and compensation can also be made applicable to AI because, as mentioned above, the term 'Computer' includes AI.
As a result of this, damage to the AI is also protected.
• The operation of Section 43A of the IT Act, 2000139 addresses the failure of body corporates to protect data. The operation of this provision could prove detrimental to the promotion of AI development, because it provides that where a body corporate140 which possesses, deals with or in any manner handles data containing "sensitive personal data or information" has been negligent in securing that data, and this results in wrongful loss or gain to any person, the body corporate must compensate the person whose data has been breached.141
• A person providing AI-related services and products to FIs shall be considered an Intermediary under the IT Act, 2000. Accordingly, the applicability of Section 67C of the IT Act, 2000 to such persons is inevitable. That provision makes it the responsibility of the Intermediary to preserve and retain the data provided to it by the FIs for such time as may be prescribed by the Central Government. Its applicability should be relaxed and not extended to persons providing AI-related products and services, because the volume of data they handle is huge and such persons may not in all cases have the infrastructure to meet these requirements; extending it would be detrimental to the promotion of AI in India.
• Section 72A of the IT Act, 2000142 makes it mandatory for every service provider acting under a contract for data-related assignments to act in a bona fide manner at all times. It is the duty of such a person to ensure that no wrongful gain or loss is caused to any person in relation to such data. A breach of this provision is punishable with imprisonment of up to 3 years, a fine of up to Rs. 5 lakh, or both. This provision has full applicability to persons providing AI products and services to FIs, because such transactions are in the majority of cases governed by a lawful contract. The data privacy requirements have thereby been extended to AI service providers as well.

137 Section 2(1)(i) of the IT Act, 2000.
138 Titled "Penalties, Compensation and Adjudication".
139 Titled "Compensation for failure to protect data".
140 As per Explanation (i) of S. 43A of the IT Act, 2000, "Body Corporate" includes a company, firm, sole proprietorship and AOP.

Critical Analysis: A perusal of the aforementioned provisions shows that the current framework of the IT Act, 2000 is insufficient to meet the growing and ever-changing regulatory expectations for governing AI. The most overlooked fact is that the IT Act, 2000 was last amended in 2008 and no amendment has been made to it since. In a world where technologies change rapidly, such a plight is more than alarming.

Intellectual Property Laws

The only protection for AI technology developed from the intellect of a person or group of persons is by way of Intellectual Property Law, particularly Copyright Law and Patent Law. It is true that AI is powered by a set of algorithms, which are protected under Copyright Law in India because they fall within the definition of 'literary work'.143 In cases where the algorithm has a hardware or technical aspect attached to it, it can be accorded protection under the Patents Act, 1970 if it satisfies the conditions mentioned under Section 2(1)(j): it must be novel, must involve an inventive step and must be capable of industrial application. The person creating such an AI thus has to decide whether to opt for protection under the Copyright Law or the Patent Law, depending upon the type of AI that has been developed.
Critical Analysis: From the above, it can be observed that the protection currently accorded to AI under the existing regime is not an all-round protection of the AI programmer's interest.111 This is for the simple reason that algorithms form the heart and soul of AI, and it is easy to "take inspiration" from an existing algorithm and reword it using a different syntax.112 This factor deters AI researchers from carrying out further research on the subject. At this juncture, it is essential to have a system in place which motivates researchers to carry out research on AI rather than deterring them.

141 Section 43A of the IT Act, 2000.
142 Titled "Punishment for disclosure of information in breach of lawful contract".
143 Section 2(o) of the Copyright Act, 1957.

Data Protection Bill, 2019

The Data Protection Bill, 2019 was cleared by the Union Cabinet in December 2019. The Bill contains Section 40,144 which provides that the Data Protection Authority of India shall create a Sandbox in the public interest for encouraging innovation in AI, ML and other new technologies. The Section also envisions exempting such Sandboxes from the data protection mandate set out under Section 22145 sub-clause (3) of the Bill.146 The discretion to relax the data protection norms would lie solely with the Data Protection Authority of India. This is indeed a big breakthrough for the innovators of AI, because AI feeds on user data to deliver outcomes.
If such a relaxation is given to the AI innovators and this Bill becomes a statute, it shall definitely foster the growth of AI in the Indian financial sector.

144 Titled "Sandbox for encouraging innovation, etc.".
145 Titled "Privacy by design policy".
146 Under Section 40(2) of the Bill, 2019.
147 C.R. Chaudhary (Minister of State of Commerce and Industry), Finalisation of National Artificial Intelligence Mission, PRESS INFORMATION BUREAU, DELHI (July 2018).

Initiatives Taken by Government Bodies

India has various organisations which have been commissioned with the task of researching and developing AI in India, a task involving a great deal of in-depth study of the subject. The following bodies have been so commissioned:

Task Force on Artificial Intelligence for India's Economic Transformation

The Task Force on AI for India's Economic Transformation was constituted by the Government of India's Ministry of Commerce and Industry on 24th August 2017,147 and it submitted its Report on 19th January 2018, which has also been analysed in Chapter 1 of this Research.148 The Report recommended constituting an "Inter-Ministerial National Artificial Intelligence Mission" to act as the nodal agency and single point of contact for all programmes relating to AI in India. Thereafter, it was decided in a meeting that NITI Aayog would be responsible for formulating the "National Strategy Plan for AI" in India. MEITY had also shown interest in conducting the said research; it therefore constituted 4 committees within it, whose reports are discussed hereafter. NITI Aayog submitted its own report on 4th June 2018, which discussed the following.
• NITI Aayog
NITI Aayog is the Indian Government's think tank for the formulation of policies. In June 2018, NITI Aayog published a Discussion Paper149 titled "National Strategy for Artificial Intelligence #AIFORALL", which discusses aspects like the inception of AI, a brief account of the technology, an analysis of the use of AI in different sectors, the challenges in implementing AI, recommendations for meeting those challenges and a proposed roadmap for the development of AI in India. It is critical to note that the Discussion Paper acknowledges that Financial Services top all other sectors in the deployment of AI in their day-to-day transactions, but the Paper neither mentions the particulars of its use nor lays down any guidelines pertaining to it. It does, however, set out a roadmap for making data available to start-ups for the purpose of developing AI, and proposes a "National AI Marketplace" as a repository of data for all fields, from medical data to customer behaviour. It further deals with issues pertaining to AI such as privacy, security, AI's inherent biases and transparency. The Paper's suggestions in relation to privacy are in line with the Justice Srikrishna Committee Report. It is pertinent to note that the Finance Ministry had sanctioned Rs.
7000 crore for the AI programme of NITI Aayog, as briefed above.150
• The Ministry of Electronics & Information Technology
MEITY has published a report titled "Leveraging A.I. for Identifying National Missions in Key Sectors". This Report elucidates the role that AI is capable of playing in the various sectors.151

148 Report available on the website of the Artificial Intelligence Task Force constituted by the Ministry of Commerce & Industry, Government of India: https://www.aitf.org.in/
149 Arnak Kumar, Punit Shukla, Aalekh Sharan & Tanay Mahindru, Discussion Paper: National Strategy for Artificial Intelligence, NITI AAYOG 14 (June 2018).
150 MoneyControl News, NITI Aayog gets Rs. 7000 Crore for AI Project: Report, MONEYCONTROL.COM.

• Conflicts Between NITI Aayog and MEITY
As seen above, both NITI Aayog and MEITY gave their views, in their respective reports, on how AI should be developed in various key sectors. NITI Aayog was of the opinion that AI should be developed through the setting up of research centres, institutional centres and a cloud computing service, whereas MEITY wanted to introduce it first in the agriculture sector. Because of their overlapping areas of work, the opinions of the two bodies conflicted. Both bodies also requested funding from the Expenditure Finance Committee of the GOI to develop and promote AI in India. The Expenditure Finance Committee sanctioned a budget of Rs. 7000 crore for NITI Aayog's AI Programme, after which MEITY also approached it with a proposal for a sanction of Rs. 400 crore. The GOI, however, noted a conflict of interest here. It therefore set up a committee to resolve this conflict for the purpose of implementing the National AI Mission.152 This committee was responsible for demarcating their respective areas of work. It can thus be seen that the GOI has been acting proactively to root out duplication in this field of research, prompting only one body to helm it and making that body the point of contact.
• Department of Economic Affairs, Ministry of Finance
The Department of Economic Affairs, Ministry of Finance, Government of India has also undertaken comprehensive work in the field of AI.153

The aforementioned were some of the efforts taken by Indian Government bodies in relation to the use of AI in the field of FIs. From the above, it may be inferred that the ministerial bodies are proactive in taking up research in relation to AI in India. This is also the right thing for any government to do, because citizens will feel encouraged to take up research and innovation in the field of AI if the government allows certain incentives to instil people to undertake such research. The same must continue for the betterment of the country as a whole. Action must also be taken on the observations recorded in such reports; if no action is taken, the whole exercise of conducting research would be futile.

151 Report available on the website of the Ministry of Electronics & Information Technology, Government of India: https://meity.gov.in/artificial-intelligence-committees-reports
152 Surabhi Agarwal & Yogima Seth Sharma, Govt. sets up panel to resolve MEITY and NITI fight over AI, THE ECONOMIC TIMES (Oct. 2019).
153 A Steering Committee was set up by the Ministry of Finance to explore Fintech-related issues. The Report submitted by the said Committee can be accessed from: https://dea.gov.in/sites/default/files/Report%20of%20the%20Steering%20Committee%20on%20Fintech_1.pdf



Steps taken by the RBI

The RBI is the regulator of FIs such as banks, Non-Banking Financial Companies, Asset Reconstruction Companies, Microfinance Institutions and e-Wallets. It has taken a few steps which help guide the manner in which the FIs it regulates are to operate in matters pertaining to the deployment of AI. Some of these directions are:
• Storage of Payment System Data154
On 6th April 2018, the RBI gave explicit directions to all system providers to store all data in respect of electronic payments on servers located in India. This move was made to ensure better monitoring of data maintenance. The circular will also apply to the service providers of AI companies, who will likewise be obliged to base their servers in India. It is a welcome move, because data privacy would be difficult to ensure in another country's jurisdiction; it is better that the data be stationed in India.
• Devising of a Regulatory Sandbox155
By way of a Report dated 13th August 2019, the RBI released the 'Enabling Framework for Regulatory Sandbox'. As per the Report, a Regulatory Sandbox is instrumental in the live testing of financial products and services in a controlled manner, where certain regulatory relaxations may or may not be permitted; the discretion over such relaxations lies solely with the RBI. The RBI shall work in close coordination with stakeholders and customers to conduct field tests collecting evidence on such a product or service and its viability on different fronts. The Regulatory Sandbox has been devised primarily for testing emerging technologies such as Artificial Intelligence and Machine Learning applications. While facilitating the growth of AI, the Report also mentions that consumer protection will be one of the crucial considerations for entities operating sandboxes. These entities have also been directed to take out insurance to protect consumers' interests. It can thus be seen that the RBI has balanced the interests of all parties here: from consumer welfare to incentives for AI development, everything has been taken care of by way of this Report. However, to date, the RBI has only invited applications to test Retail Payments in the Sandbox.156

154 RBI/2017-18/153.
155 Department of Banking Regulation, Banking Policy Division, Enabling Framework for Regulatory Sandbox, RESERVE BANK OF INDIA (August 2019).
156 Amrut Joshi, RBI Opens Application to its First Cohort Under the Regulatory Sandbox, MONDAQ LIMITED (January 2020).

Judicial Precedents



The Hon'ble Chief Justice of India, Mr. S.A. Bobde, addressed the Supreme Court Bar Association in November 2019 and urged advocates to use AI for a better administration of justice.157 He added that a legal system powered by AI would help improve efficiency in the system and would also benefit advocates. The Hon'ble Chief Justice went on to remark that AI, however advanced it becomes, will never be able to replace the wisdom and knowledge possessed by judges. In his opinion, the Judiciary will always remain unsubstitutable by AI, and the presence of human judges is essential in courts to maintain the human touch in judicial pronouncements. It is thereby established that the penetration of AI into the Indian Judiciary will take the form of assistance only, not a wholesale engagement.
The ICAI has taken cognisance of the disruptive nature of AI and believes that it has the full potential to transform the methodology by which auditing is performed.158 In its opinion, large databases are being created by different organisations for everything, and the presence of these large databases renders traditional audit procedures less effective and efficient; a requirement for formulating new audit procedures thus arises.137 In addition, the ICAI believes that AI could bring about significant advantages if used in the auditing process to remove repetitive, time-consuming tasks. On this note, it has encouraged all its members to be receptive to AI technology and harvest its advantages in order to make auditing easier, smarter and more effective.138 Another important foresight of the ICAI is that AI will soon replace functions performed by auditors, including data collection, data processing, data analysis and data dissemination. In such a situation, the prime focus of an auditor would be to understand how the AI computer system performing the audit functions, and the role of the auditor would change accordingly.139

157 Smriti Srivastava, Supreme Court to Use Artificial Intelligence for Better Judicial System, MONDAQ LIMITED (January 2020).
158 Professional Development Committee, ICAI, Quick Insights on Professional Opportunities for Chartered Accountants, ANALYTICS INSIGHTS (Nov. 2019).
159 W.P. (Civil) No. 494 of 2012.

Justice Puttaswamy (Retd.) & Anr. v. Union of India & Ors.159
This case is also known as the AADHAAR case because the constitutional validity of the AADHAAR Act, 2016 was tested in it. The AADHAAR Act, 2016 was adjudicated to be constitutional only to the extent that the AADHAAR number would be used for authenticating an individual's identity for availing a subsidy, service or any kind of benefit from the Government. In the midst of all this, the five-judge bench made interesting observations in relation to the Right to Privacy, which are crucial to this Research for understanding the validity of the collection, use and storage of personal data by AI companies. The case has



established the Right to Privacy as a constitutional right. It has also laid down a Proportionality Test to determine whether a particular act violates the Right to Privacy. Any person asking for any personal information of an individual would be said to be violating their Right to Privacy if:
• There is no legitimate goal for collecting such data.
• There is no rational connection between the data collected and the goal identified.
• There is a less intrusive alternative to the data sought.
• The adverse impact of the data collection exercise on the holder of that data is disproportionate.
Not only this, the Supreme Court also directed companies collecting data to minimise such collection, limit the purpose for which the data is collected, retain the data only for a limited period and maintain proper security of the collected data.
Critical Analysis: If we weigh the practice of data collection for the purposes of developing AI against the aforementioned Proportionality Test, the act of AI companies collecting data falls flat and cannot meet the Test's requirements: AI has not been recognised as a legitimate goal, nor has any exception for such data collection been given in the said judgment. Another essential point is that FIs enter into a contract with their customers before storing and sharing their personal data with AI companies. In most cases, the clauses of this contract record the data owner's consent to sharing their data with third parties. These contracts are standard-form, so the owner of the data cannot negotiate them, which results in an inescapable situation for the customer: though reluctant, he is still made to sign the data-sharing agreements. This gives the FIs an edge in sharing their customers' data with AI companies.
Another possible situation is that the data collected is that of a potential customer rather than an existing customer. Here, there is no agreement between the FI and the potential customer on data sharing. Therefore, in all such cases, if the data is shared by the FI with an AI company, it would be in clear violation of this judgment. It is pertinent to note that this judgment struck down the requirement of the Aadhaar card as a compulsory KYC document, and this part has been made operational with retrospective effect. Therefore, even today, customers can approach their banks and request them to redact the Aadhaar-related information which they supplied at the time of fulfilling KYC norms.160

160 KYC norms have been prescribed by the RBI and under the Prevention of Money-Laundering Act, 2002.



The fact that this judgment impacts the way in which AI companies operate cannot be ruled out. However, to date, the Government has not set up any mechanism for enforcing the directions contained in this judgment.

Secretary vs A. Suthanthiramoorthy161

The judgment in this case was delivered by the Madras High Court in a Writ Appeal filed by the Tamil Nadu Public Service Commission, Chennai. The facts of the case are as follows: the Commission had invited online applications for recruiting an Assistant Horticulture Officer. Upon filling in the form, the writ petitioner appeared for the written examination and secured a good rank. It was later discovered, however, that the writ petitioner had wrongly entered his date of birth and the date of issue of his community certificate. On the basis of this misstatement, his candidature was rejected. To challenge this rejection, the candidate filed a writ petition. It was held that such a mistake by the candidate must be overlooked. The Hon'ble Judge also recorded that there is a huge difference between an evaluation performed by a human being and one performed by a machine, and further observed that if no human mistake were allowed to be corrected, one might as well use AI instead. In a way, the Hon'ble Judge took a jibe at the Commission for rejecting a person's candidature because of a human error. In the process, however, it was taken on record that the content of data in AI remains unaltered and that AI is an errorless technology. The case laws available on this subject are thus negligible, for the simple reason that AI technology is still in the process of development. Judicial precedents in relation to it will emerge at a later stage, once some loss or harm has been caused to a party.

Conclusions

Dr. Vinay V., founder of Ati Motors,162 once remarked:

“We hold machines to a higher standard. Deep Learning algorithms lack explainers – when it works, we don’t know why it works; when it doesn’t work, we don’t know why. Our inability to analyse the failure is an issue.”

Upon a perusal of the foregoing, one would infer that the position of AI in our society at the current time is very confusing. This confusion exists because a link in the research on AI is missing. The words of Dr. Vinay V. suggest that AI is to AI researchers what a mobile phone is to the general public. The solution to almost every mobile phone glitch is to switch the phone off and restart it; yet no one understands why the phone then starts working again, and where it does not, no one knows why it failed. Similar is the plight of AI researchers in relation to AI technology. The only problem which can be made out here is that the development of AI technology is at a very nascent stage at the moment. The expertise is still growing, and this growth will occur in full swing only after more clarity on the subject is achieved. From this Research, two major trends can be inferred: first, that AI technology is here to stay and its benefits outnumber its drawbacks; and second, that the functionality of Fintech has increased manifold due to the introduction of AI. On the basis of these takeaways, it is safe to say that AI technology must remain in existence and should not be completely wiped out, as has been strongly advocated by Mr. Elon Musk.163 This is for the simple reason that one must try to balance out the disadvantages associated with the technology. It is also too early to decide whether AI should be scrapped altogether, because it is at a nascent stage of development. Had such a notion prevailed earlier, none of the technological inventions existing today would exist, since some disadvantage or the other is associated with every kind of technology. It is also pertinent to note that, having set out on the path of discovering and developing AI technology, we must see the task through to completion.

161 W.A. No. 3285 of 2019; judgment delivered on 26 September 2019.

162 Ati Motors specialises in manufacturing self-driving cargo vehicles for construction and industrial sites and has been described as an Indian equivalent of Tesla, Inc.
There are many safety nets which can be applied to protect against the hazards posed by the use of AI in Financial Institutions. Now that it has been established that we require the presence of AI in our society, it is essential to explore the ways and means by which it can be regulated. An attempt to regulate AI is essential because all developments relating to the use of AI by FIs must take place in a controlled manner. Control must be exercised either by the Government directly, or through delegation by the Government to a recognised Indian association of engineers working on AI in relation to FIs. This would ensure that developments happen under the controlled supervision of experienced persons. The reason is simple: the reliance of FIs on AI is increasing manifold, and AI is not untouched by risk; it is therefore only prudent to be vigilant from the start and take sufficient precautions before the FIs become completely dependent on this technology. There was ambiguity in relation to the following subjects, which this Research has now clarified; the findings may be summarised as follows:

163 Kelsey Piper, Why Elon Musk Fears Artificial Intelligence, VOXMEDIA (Nov. 2018).



Definition of AI

By way of this Research, it has been observed that neither computer researchers nor the law has defined the meaning of AI. It is pertinent to define AI because it is a developing technology, and it is essential for everyone to understand the concept through a universally acknowledged definition.

Personhood of AI

Personhood cannot be accorded to the AI utilised by FIs in any circumstance. Doing so would create a new host of issues which would be very difficult to address, ranging from whether loans can be granted to an AI to whether it can be arrested. This has the capacity to alter our entire understanding of human life and the rights associated with it. Therefore, something as critical as according personhood to AI must not be attempted.

Precautionary or Proactionary

After conducting the aforementioned research, it can be concluded that, because the development of AI in FIs is at a very nascent stage, the growth of the technology must not be hampered. At the same time, certain risks are associated with giving AI a free hand. It is therefore concluded that AI must be governed by maintaining a balance between the precautionary approach and the proactionary approach. This will ensure both the development of AI and the monitoring of such growth.

Hypothesis

The hypothesis of this Research has been tested and found to be false. This is because we require innovation and development in the field of financial AI: it will help ease things for the FIs and will also help in managing and/or avoiding the distress which most players in the financial sector are currently undergoing. The Hypothesis has accordingly tested negative. Thus, on the basis of the above findings, it is concluded that AI technology is here to stay, and all those who do not take advantage of it shall lag behind. It is therefore of utmost importance to allow technological development to continue without hampering its growth. However, it would become indispensable for the regulators to intervene in cases where the AI is not acting ethically or there is an inherent problem in its operational system. The regulator must therefore follow a policy of selective intervention, and must also monitor ongoing activity so as to remain in the loop about the latest trends being followed.

