The Indian Learning, Volume 1, Issue 1 (2020) (e-ISSN: 2582-5631) - Digital Edition



The Indian Learning
e-ISSN: 2582-5631 | Volume 1, Issue 1 (2020) | July 31, 2020
Abhivardhan, Editor-in-Chief | Sarmad Ahmad, Chief Managing Editor

Digital Edition

e-ISSN: 2582-5631
Volume: 1
Issue: 1
Website:
Publisher: Abhivardhan
Publisher Address: 8/12, Patrika Marg, Civil Lines, Allahabad - 211001
Editor-in-Chief: Abhivardhan
Chief Managing Editor: Sarmad Ahmad
Date of Publication: July 31, 2020

© Indian Society of Artificial Intelligence and Law, 2020. No part of the publication may be disseminated, reproduced or shared for commercial usage. Works produced are licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. For more information, please contact us at



CONTENTS

ARTICLES

05 Policy Analysis: Asilomar AI Principles, EDW 2017
07 India's Model of Internet Governance: A Highlight
10 How "Mind-Reading" AI Systems Are Different than What We Have Seen in the AI Industry
14 Interview with Helen Edwards, Sonder Scheme
24 A Commentary on 'How to Design AI for Social Good: 7 Essential Factors' by Luciano Floridi et al.
29 "Gamification" of Recruitment Process: A Potential Disruption or A Hype?
An Analysis of Recommendation CM/Rec(2020)1 of the Committee of Ministers of the Council of Europe
IPR - The invisible hand at the helm of Information Technology Industry's Business Strategy
Artificial Intelligence Needs Law Regulation

and more...

Editorial Board

Abhivardhan, Editor-in-Chief
Chairperson & Managing Trustee, Indian Society of Artificial Intelligence and Law

Sarmad Ahmad, Chief Managing Editor
Research Member, Indian Society of Artificial Intelligence and Law

Kshitij Naik, Managing Editor
Nodal Advisor, Indian Society of Artificial Intelligence and Law

Ritansha Lakshmi, Managing Editor
Research Member, Indian Society of Artificial Intelligence and Law

Associate Editors

Ankur Pandey, Associate Editor
Research Member, Indian Society of Artificial Intelligence and Law

Baldeep Singh Gill, Associate Editor
Chief Experience Officer, Indian Society of Artificial Intelligence and Law

Abhinav Misra, Associate Editor
Research Directorate Member, Indian Society of Artificial Intelligence and Law



Policy Analysis: Asilomar AI Principles, EDW 2017

Naina Yadav, Amity Law School, Delhi

"Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources."[1]

The Asilomar AI Principles are the result of the Future of Life Institute's conference on "Beneficial AI" held in Asilomar. The principles essentially presume a widely shared techno-optimistic conception of the future: one that, on the one hand, accepts A.I. as an inevitable fate, yet, on the other, remains unsettled and carries risk, since in the absence of socio-economic analysis only a few will determine that future. The 23 principles, which have been adopted by several scientists and researchers, constitute a proposal for a voluntary commitment to the development, research and application of AI. They are by no means definitive and remain open to different interpretations by researchers, scientists and others. The Asilomar principles address ethical issues related to AI and describe morally derived best practices for AI R&D, while also allowing a broad scope of interpretation. They warn of the threats of an AI arms race and of the recursive self-improvement of AI, and they include the need for "shared benefit" and "shared prosperity" from AI. The principles are divided into several components: research issues, ethics and values, and long-term issues. Research issues include the research goal, funding, the science-policy link, culture and race avoidance, wherein "The goal of AI research should be to create not undirected intelligence, but beneficial intelligence."
This holds that A.I. R&D should create only "useful" A.I. The link between "directed" and "beneficial" drawn in the first principle is irrational, since the two are completely divergent categories.[2] For instance, something can be undirected but still useful, while things that are directed can be harmful and of no use. Hence, the aim of AI research should be both to direct the intelligence and to ensure that it acts sustainably, ecologically and socially, wherever it operates. In addition to this principle, a reasonable research policy is needed that would define extensively what ethically responsible innovation in the area of AI means. The next principle, which deals with "accompanying research", makes it necessary to include issues of irreversibility and risk assessment in research, so as to better understand all present and pertinent challenges. The following principle calls for a "constructive and healthy exchange between AI researchers and policy-makers". Further principles concern the creation of a culture of cooperation, trust and transparency among AI researchers and developers. Symptomatic of this is the principle that "Teams developing AI systems should actively co-operate to avoid corner-cutting on safety standards." The last research principle concerns race avoidance: research projects are generally tied to deadlines, which often push developers to cut corners on safety standards in order to reduce time to market. This could turn out to be one of the major reasons behind AI casualties.

The principles surrounding ethics and values begin with safety: it is important for developers to concentrate extensively on making systems safe throughout their operation, wherever applicable and feasible. Next, under the principle of failure transparency, AI developers should spend considerable time determining the reasons a particular AI system might malfunction or cause harm; this would help derive better practices for AI. The next principle concerns judicial transparency: any involvement by an autonomous system in judicial decision-making should rest on valid reasoning and be auditable by a competent human authority. On similar lines, to an extent, are the principles of liberty and privacy, shared benefit, shared prosperity, human control and so on.

The principles surrounding long-term issues include capability caution: one should avoid firm assumptions regarding upper limits on future AI capabilities. The next principle, importance, says that this technology is not simply disruptive but holds the potential to create a profound change in history; researchers are urged to make practical plans to manage it with commensurate resources. Other principles cover risks, recursive self-improvement and the common good, which should be followed as part of an overall strategy for the development of AI.

The researchers and scientists behind the Asilomar Principles agree that AI will fundamentally change life on Earth and that the creation of a strong AI must be assumed. The principles are a brilliant starting point for deliberations on how to further develop AI in the world. In my opinion, however, they require a regulatory framework. Research funding must also launch inter- and trans-disciplinary research in law, economics, the sciences and so on, to promote public dissemination of the knowledge gained. These principles should furnish opportunities for people all around the world as AI develops over the coming decades and centuries.

India's Model of Internet Governance: A Highlight

Aditi Sharma, Maharashtra National Law University, Nagpur

Internet governance has been a hot topic of the era. Whether or not to take control of the global common of 'cyberspace' is a question no one has been able to decide yet. The accepted definition of internet governance is decision-making, through rules, regulations, norms and principles, over the use of the internet by various governments, civil societies, non-state actors and private companies. Today, the evolution of governance methods has reached such an extent that governance has turned into a complex mechanism in which it has become difficult to standardize norms. Though the prevalent models of internet governance rule their respective regions today, the debate on the sovereignty of countries, net neutrality, privacy, internet standards and a Magna Carta of digital rights is still ongoing. This issue was placed on the official diplomatic agenda for the first time at the World Summit on the Information Society, 2003-2005, where there was an extensive debate regarding the governance of cyberspace and the internet. It resulted in the adoption of the definition of internet governance mentioned above. The summit also proposed a model for internet governance, but this led to several controversies; its only concrete outcome was the differentiation between the privately-owned model and the government-owned model.


Currently, there are five models through which the internet is governed:

1. Model of Cyberspace and Spontaneous Ordering: The first model moots that the internet is a self-governing realm and does not require active players for its monitoring and governance.
2. Model of Transnational Institutes and International Organizations: The second model highlights that the internet inherently transcends national borders, and hence the only body that can govern it must be an international organization, with governance based upon international treaties.
3. Model of the Code Thesis and Internet Architecture: The third model moots that communication protocols and software most inherently define internet architecture, and hence the companies that build them must govern the manner of its operation.
4. Model of National Governments: The fourth model states that since the subjects of various governments use the internet, these governments must have a hold over its governance.
5. The Market and Economics Model: The fifth and last model advocates that the internet is essential in driving markets and economics, and hence its regulation must be left to market forces.

Depending upon the nature of its economy and social conditions, each country has adopted one of these models. For example, the United States has adopted the third model: the government has granted almost complete control of governance to tech companies and exercises minimal control over cyberspace, while people have free access to information on the internet. The result of this approach is that today, U.S.-based tech companies have spread their market across the globe. Almost every country relies on the technology of 'FAAMG', the five biggest tech giants, all of them U.S.-based. The results this approach has shown clearly indicate its strength.

Various countries adopted a similar approach until China showed its support for the fourth model and started censoring not just its own tech companies but also almost every western tech company. Moreover, because of its large market, it has offered neck-and-neck competition to the U.S.

THE INDIAN MODEL: AN EPITOME OF UNIQUENESS

The Indian market has always been tricky for tech companies to read. At the same time, it is alluring for them because, having mostly exhausted their home markets, they seek markets in other countries for their products. But India has always put forth various challenges for these companies. Recently, a nation-wide ban was imposed on fifty-eight Chinese apps on national security concerns, and it was not the first time this had happened. Almost a year ago, the government framed restrictive guidelines for the e-commerce operations of Amazon and Flipkart, seeking to protect the domestic market by restricting foreign companies operating in India. This shows that the government is keenly observing tech companies' actions in our country, and it also shows that the Indian government has not restricted its strategy of internet governance to any one of the five models.

Since it became challenging for foreign tech companies to endorse their products in the Indian market, they executed a new strategy. They keenly observed the market and found that one Indian telecommunication network, Jio, dominates the entire country; hence, they started investing heavily in it. This led to almost fourteen significant investments in Jio in the last three months, with Facebook and Google the most significant stakeholders. The story behind building Jio into the dominant player is quite a tale, but its Chairman and Managing Director, Mukesh Ambani, says that he has built Jio by "moving up the stack from fixed-cost infrastructure to high-margin services". From providing its customers with India's first cloud-based video-conferencing application, JioMeet, to announcing its latest innovation enabling holographic video calling, Jio Glass, the company knows how to allure its customers. This "Made in India" initiative of Reliance Industries has also inclined the government, which strongly promotes Digital India, towards it when framing policies for internet governance. Therefore, it is not erroneous to say that, from time to time, this Indian model of internet governance has adopted a mix of all five approaches, making it a unique model. It has set an example for those who debate the pre-eminence of one approach over others.

How "Mind-Reading" AI Systems Are Different than What We Have Seen in the AI Industry

Abhishikta Sengupta, Research Member, Indian Society of Artificial Intelligence and Law

Researchers at the University of California have taken a significant step towards turning one of the most popular science-fiction tropes into reality by developing an Artificial Intelligence system that can analyse a person's brain activity and turn it into text.

From Science Fiction to Science: Current and Possible Applications of Mind-Reading Artificial Intelligence Systems and Related Ethical Concerns

At present, the newly developed AI system is at an elementary stage, confined to a very limited sentence set and losing accuracy with each new word. However, with the rapid and profound advancements in the field of Artificial Intelligence, it may not be long before the development of genuine mind-reading technology: a glimpse into our brains to discern our thoughts. In light of this, it is significant to note that autonomy over our thoughts and minds has always been inherent and fundamental to our nature as human beings. This raises several noteworthy ethical questions, as the development of AI platforms in this domain could have serious and far-reaching implications for our privacy.

The most apparent application of such an AI system would be in the domain of criminal law. There has already been mounting interest in the technology of brain fingerprinting, wherein AI will be able to interpret brain activity relating to certain stimuli relevant to the crime concerned and thus perceive concealed information. This would have monumental applications during interrogations by the authorities: objects and phrases related to the crime which only the perpetrator would be familiar with, and which are significant to him or her, would yield a certain response in the brain. It could also find use in combatting terrorism, as it could detect an individual's brain activity in a public place, perceiving thoughts of using explosives or firearms. Researchers at Carnegie Mellon University have worked on a system that identifies, from a person's brain waves, whether they are familiar with a certain place, such as the victim's residence. Law-enforcement applications such as profiling suspects may be found in AI which can use someone's memory to recreate a face, on which there has also been research.


This is miles away from a comprehensive reading of the mind: the system is trained to decipher only 250 of the 20,000-35,000 words that form the average active vocabulary of an English speaker, and so far it operates only on the neural patterns detected when an individual is speaking out loud. Even so, this exciting research paints a hopeful picture for those who no longer possess the ability to speak due to neurological disorders, paralysis and so on. The system has achieved an accuracy of up to 97%, and if developed further it would revolutionize the way they communicate. Previous attempts at developing tools to enable such individuals to express themselves include, most notably, the one used by Stephen Hawking, who could select letters with the movement of merely one muscle, enabling him to form words. An AI system which simply interprets one's thoughts directly could vastly simplify such a long-drawn process, allowing participation in conversations of even great rapidity, as well as control over other linguistic elements.

The very basis of the pharmaceutical industry's success has been a progressive transformation from treatment to enhancement. If the use of such AI technology becomes widespread as a form of speech prosthesis, it is only a matter of time before it follows suit and becomes the privilege of multimillion-dollar corporations in their attempts to gauge customer needs. In fact, the emerging field of neuromarketing is already working to determine the best marketing strategies for various target markets. Allowing such an AI system to develop into a mere capitalist venture, dominated by major market players who invest in its development in furtherance of their own business objectives, will no doubt brand this all-important system as a prerogative of the rich and lead to social stratification. This provokes the questions: Who is accountable for the use of this AI technology? Is it the government? Neuroscientists? Will future generations be permitted to have brain scans performed on their children to determine if they are lying to them? Should it be permitted to be used for commercial purposes at all? Were such technology to fall into the hands of terrorists, it could be used to reveal information possessed by hostages, removing the need for the usual extended process of coercion and torture, which most soldiers are trained to resist. Furthermore, according to Artificial Intelligence experts, AI may be able to develop by itself in the future, thus blurring the lines between human thoughts and machine thoughts altogether.


This, however, raises questions about the protection against self-incrimination that is guaranteed by legal systems across the world. Conversely, if such scans are disallowed at the outset, wrongly accused individuals are robbed of an opportunity to prove their innocence. What must be addressed in order to answer this question, and what has been contemplated by researchers, is whether brain activity would fall into the category of testimony, or into the category of DNA samples, blood and hair, which an individual can be compelled to give. In the absence of precedent, no conclusive answers are available, and the debate on the validity of such technology continues. The further development of AI systems to give a voice to the speech-impaired, and to those afflicted with locked-in syndrome, tetraplegia, Alzheimer's, Parkinson's disease, multiple sclerosis and so on, although holding the potential to provide enormous benefit, would raise certain ethical questions. How will the system differentiate between what the individual wishes to say and the deeper thoughts they wish to keep private? How can informed consent be obtained from a person who has lost the ability to communicate? A likely evolution of this system may lead to accessing and modifying memory. Along these lines, another therapeutic application equally weighed down by ethical considerations is the treatment of war veterans suffering from Post-Traumatic Stress Disorder (PTSD). Researchers at the University of California, Berkeley have successfully managed to create, reactivate and delete memories in the brains of rats. Although this could provide immense relief to people suffering from PTSD, such a technique, if applied to humans, reeks of potential misuse.
This technology in the wrong hands may find any degree of malicious use, from manufacturing fake information in a suspect’s brain in order to confuse or incriminate him, to wiping his mind of the memory of an event entirely.

The possible applications and allied ethical concerns are endless. Whatever the implementation, apprehensions of privacy invasion arise. Researchers have stated that covert "mind-reading" technologies are in their developmental stages, such as a light beam which need merely be projected onto an individual's forehead in order to interpret their brain waves. If we are unaware of the use of such tools on our minds, this would surely breach privacy laws. Moreover, what if further developments in this technology enable it to control our minds without our knowledge and assent? It may seem like dystopian fiction, but if AI can read our thoughts, how far are we from the possibility of a controlled state, acting unconsciously in pursuance of a totalitarian regime?

Conclusions and Future Perspective

Like any evolving technology, this Artificial Intelligence system holds the potential to be of colossal benefit if used ethically. In the right hands, it could entirely revolutionize anything from speech prosthesis to the criminal justice system. Although a comprehensive mind-reading technology arising from the system in question seems a tremendously remote possibility at present, as the system grows in accuracy and usage, the plethora of issues surrounding it, ethical or otherwise, is too large to ignore. It has become increasingly necessary to discuss where such technology can be used and, more importantly, by whom. There is a pressing requirement for consensus on ethical guidelines for its development, use and dissemination.


Interview with Helen Edwards, Sonder Scheme

Abhivardhan, Baldeep Singh Gill and Kshitij Naik from the Indian Society of Artificial Intelligence and Law held a special interview session with Helen Edwards, founder of Sonder Scheme. The interview was conducted in April 2020.

Abhivardhan: Please introduce yourself and your connection with AI as a field.

Helen: I am based in the US, and Sonder Scheme is a boutique company focused on the human-centric design of AI. We focus on AI ethics: we think that getting ethics right up front is what really matters. Much of our business is built around speaking workshops and a design system for including more people and more ideas in the front part of an AI design, and we have put together a process that helps people do that. Our workshops and design system are available online, and we use them to help people understand how to design human-centric AI.

Abhivardhan: Our interest in Artificial Intelligence ethics stems from the social sciences, because our focus is particularly on the legal and anthropological side. So I'll start the interview now, and my first question would be: "How would you like to elaborate on Sonder Scheme and on the market that is actually concerned with ethics, i.e. how do technology companies in the US envision AI ethics, and what do they currently see in these situations?"

Baldeep: How will the post-COVID-19 situation affect the employment and entrepreneurship sectors related to Artificial Intelligence (AI) in developed countries, or the D9 countries?

Helen: If we look at the core basics of AI, it learns from data. What we see with COVID-19 is a phenomenal disruption in the data, a deep discontinuity. One should wonder what the AI is going to do with that data and what conclusions it will draw, because the COVID-19 data is outside what the model was trained on. We have to look at whether there is a reduction in accuracy, or a change in the number of false positives and false negatives, in supervised models. The first thing to do is to re-examine some of the AI models; if I were running a company with big models, I would be asking questions about changes in accuracy and the impact of the data discontinuity, and I would start measuring that impact. The other important thing is that in many other places COVID-19 is amplifying existing effects or speeding things up. One of those things is that AI has made a lot of progress in speeding up basic science, like understanding how proteins and cells work.

I expect to see more progress and more support for basic science and the contribution that AI is making there. In the post-COVID world, it would be crazy to expect anything other than some degree of slowdown for entrepreneurs and startup funding across the board. So surviving this, for an AI entrepreneur, is the same as for anybody else.
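The check Helen describes, re-measuring accuracy and the counts of false positives and false negatives once a discontinuity like COVID-19 appears in the data, can be sketched as a simple before/after comparison. The following is a minimal illustrative sketch by way of example; the function names are invented for this illustration and are not taken from any Sonder Scheme tooling:

```python
# Illustrative sketch: compare a binary classifier's accuracy and
# false positive / false negative counts on a batch of data from
# before a discontinuity and a batch from after it.

def confusion_counts(y_true, y_pred):
    """Count true/false positives and negatives for 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {"tp": tp, "tn": tn, "fp": fp, "fn": fn}

def accuracy(counts):
    """Fraction of correct predictions from a confusion-count dict."""
    total = sum(counts.values())
    return (counts["tp"] + counts["tn"]) / total if total else 0.0

def drift_report(pre, post):
    """Compare model behaviour before and after a data shift.

    `pre` and `post` are (y_true, y_pred) pairs for the two periods.
    Returns the change in accuracy and in error counts, the quantities
    one would watch when the incoming data leaves the training regime.
    """
    c_pre = confusion_counts(*pre)
    c_post = confusion_counts(*post)
    return {
        "accuracy_change": accuracy(c_post) - accuracy(c_pre),
        "fp_change": c_post["fp"] - c_pre["fp"],
        "fn_change": c_post["fn"] - c_pre["fn"],
    }
```

For example, if a model that was perfect on pre-shift data starts producing one extra false positive and one extra false negative on a post-shift batch of the same size, `drift_report` surfaces the accuracy drop and the error-count changes directly, which is the kind of measurement Helen suggests starting with.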


Helen: I think that, particularly in the last few years, people have become much more aware of the power of the platforms and of the machine learning sitting behind them. So, on the one hand, you have public awareness of ethics, of the speed and scale of the AI research behind those platforms, and of the impact on ordinary people, on democracy and on the media. On the other hand, you have much more awareness of the treatment of people who are subject to automated decisions through government programs. We have many cases in the US, in Michigan and Oregon and various other states, where social welfare systems have been automated and people have had no idea of the impact this poses on vulnerable communities. So, looking at either end of the spectrum, from the big platforms and the future of antitrust and democracy, to how government works in codifying social systems, it all comes down to the same thing: more and more behavioural and social problems can be encoded in AI and can cause harm before people even notice. Public awareness has caused companies to think about AI ethics. Employee ethicists have good intentions, but how you measure the impact of those ethicists is a particular point of interest today. A lot of people are focusing on companies and how they measure the impact of their ethicists: are the ethicists able to make decisions about the products, or are they there for reputational reasons?

Kshitij: How can human-centric AI be made better? Will human-centric AI be beneficial for developing countries?

Helen: If we go back to the core of human-centred design, it is about understanding the development going on around human-centric AI. It is a process not of making assumptions, but of observing and understanding how to build something that supports the user, stripping away the assumptions of the observer. In human-centred AI design, we take a step further, because we start with the idea that AI is different: it creates a role, an additional back-and-forth with the user, that alters the way preferences are formed and alters choice, and so ultimately alters human agency. Because we start with that core idea, and with the understanding that AI is not a traditional technology, it is about understanding what happens when we let Artificial Intelligence start learning. In traditional design, as a designer, you can focus solely on intent; you do not have to worry so much about the consequences. In AI design you cannot focus on intent alone: you have to look at how the AI works, which is that it learns from the data in the world, and at how its role changes.

The most valuable material is not plastic or glass; it is human behaviour. When you are working on something whose behaviour is partly based on its post-design experience, as a designer you cannot step away and say, "Here is my intent, and it does not matter what actually happens as long as I intend a good outcome." You have to actually anticipate all of these consequences, and that is where human-centred AI design starts. It is about understanding and thinking about intent and consequences, and about power and what shifts that power as algorithms operate and their data changes.


I think there is another important part of this around what happens with employment and the speculation about more robots coming in. People are investing more in robots than in other people, and there is a historical precedent: last century saw a lot of mechanisation of US agriculture because of a labour shortage, and this time we are in a potential labour shortage too, along with concerns about health and safety. We also know that digital surveillance and tracking will become more pervasive. It is a huge area of discussion, but whether it is in society at large or in tracking workers at the workplace, people will actually want to know how and whether they came into contact with the virus until there is a vaccine, which we hope there will be as soon as possible. So even some of the most vocal advocates of privacy and digital privacy are talking about vaccination rather than digital surveillance. I think in the post-COVID world it will take time to adjust to the digital surveillance and tracking happening in each country.

Abhivardhan: So I have an interesting question. When we say that we have to make a responsible and ethical AI, how, in limited terms, can we really achieve a responsible and ethical AI, and will we see an AGI or strong AI in this century?

Helen: As for predicting the rise of AGI, we do not even try to predict it, because, as the old saying goes, all forecasts are wrong but some are useful. So, when it comes to forecasting AGI, I look for where other people's ideas on this are useful. I think there are a couple of things. First of all, go back to the point of ethical and responsible AI. You can look at it from the existential level, i.e., how we can make AI safe and not dangerous for humans, or you can look at how you actually plan for and resolve the harm that happens every day. We put 99.5% of our focus on the harm that can happen now rather than on existential harm. Ethical AI is now a process: you can Google "ethical AI" and you will be inundated with checklists and ideas and frameworks and principles. One of the reasons we put so much effort into our tools and our online product is that so much of that is completely overwhelming; it is not necessarily practical, and it does not help you get to a tangible, concrete answer about how to make a decision that is fair. So, at the baseline level, ethical AI is now something that, at certain levels, is technical and feasible, and we can make use of our system for it. Ethical sourcing of data is all pretty well understood, and everyone should be doing it. There are also standards for making fair AI. That kind of thing is relatively easy to do: you give it to your data scientists, or you put a team together who make decisions on how you want to label data or how you want to provide feedback. Those are relatively codified things, even if it still takes a lot of discussion to get there.

The Indian Learning | e-ISSN: 2582-5631 | Volume 1, Issue 1 (2020)

So human-centred AI design is really stepping back and saying, let us think about these things upfront: accountability, inference and privacy and how privacy changes, and biases, fairness and equality in the first part of the design, so that we can really understand what the role of the human is and what the role of the machine is. A lot of poor automation decisions, poor AI decisions and the worst decisions are made because there is poor differentiation between what a human does and what a machine does. This happens with personalisation, because we do not know how we feel about something until it actually happens. So you have to keep thinking through all of these processes, the different categories and groups, the data and the biases that might sit in those groups of data, to allow yourself to design upfront for the consequences of recommending to someone, as an individual, something that they do not actually want, or of not recommending something that they did want, which effectively excludes them. Such a thing happens in pre-recruitment: because of the way biases come in, either from the company hiring or from the platform doing the advertising, you end up in a situation where people who would have been interested in seeing a particular job posting never actually end up seeing it. All of these things we see happening here, and I see no reason why they would not happen in any other country; they are going to happen at a bigger scale. There are different social divisions in each place, and when we work with clients and think through consequences, a lot of people say that they cannot see what is going to happen, that these are all unintended consequences. But a lot of them are not; they are actually quite visible right at the outset, because AI simply amplifies and exacerbates the divisions in society that already exist, and finds them.
So our idea is that human-centred design applies everywhere, but you can see different outcomes and different results simply because there are different social lines in different countries.

Baldeep: In relation to creating responsible and ethical AI, I would like to know your thoughts about algorithmic bias and the Gender Shades project. How will we solve the problem of algorithmic bias?

Helen: I have done a lot of research on this in the last few months. What it really comes down to is that humans are biased, and AI reflects the bias that humans have. The study that brought this to technologists' attention was one done at Princeton back in 2016, which analysed how bias arises in language because of human bias: for example, flowers were rated more pleasant than insects, and European-sounding names were more strongly associated with being suited to get a job interview than African-American names. These biases, and the Gender Shades project by Joy Buolamwini at MIT that you referred to, made another important statement that brought into people's general consciousness that all AI will reflect some kind of bias. There is no such thing as an unbiased AI, because in the end, even if you remove bias from a technical perspective, you are still left with some form of human judgment about what counts as bias. But it is about stepping back and asking how you think about the problem. Part of it is a technical challenge, because there are good debiasing techniques with which you can visualise, see and remove obvious forms of bias. For example, you can probe the system and understand part of the reason for the bias: the algorithm is not seeing enough dark-coloured skin, or cannot differentiate between genders, and some of that is a matter of representation in the data. So you collect more data and train the AI more on darker skin. Significant improvements were achieved after she published her work; Microsoft went back, retrained and improved its product significantly.

So we need to consider that there is historic bias and representation bias in the data. There are many examples, like Amazon's recruitment algorithm, which the company ended up removing because women were being hired at a lower rate. Every year we see many more examples of racial bias and gender bias.
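The kind of audit Gender Shades performed, measuring a classifier's accuracy separately for each intersectional subgroup rather than in aggregate, can be sketched roughly as follows. The subgroup names and the numbers below are illustrative assumptions, not the study's actual figures:

```python
from collections import defaultdict

def subgroup_accuracy(records):
    """Compute classifier accuracy per demographic subgroup.

    `records` is a list of (subgroup, predicted_label, true_label) tuples.
    Returns a dict {subgroup: accuracy}.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy audit data (hypothetical): a gender classifier's outputs by subgroup.
audit = (
    [("lighter-skinned male", "male", "male")] * 99
    + [("lighter-skinned male", "female", "male")] * 1
    + [("darker-skinned female", "female", "female")] * 65
    + [("darker-skinned female", "male", "female")] * 35
)
rates = subgroup_accuracy(audit)
gap = max(rates.values()) - min(rates.values())  # the disparity an aggregate score hides
```

An aggregate accuracy over the whole audit set would look respectable here, while the per-subgroup breakdown exposes a large gap; that breakdown is exactly what prompted the retraining Helen describes.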


But if you think of AGI as intelligence in a different form, complementary to human intelligence, something that learns in a self-supervised fashion and draws inferences from sources in ways that are causally relevant, then to some extent we are on our way to doing that. When it appears, I do not know whether AGI will be able to recognise context or have real causal understanding. I do not think people want an AGI, and I do not see any evidence that there is value in one. The more sophisticated the uses of AI become, the more attuned we become to questions of what the cause behind a decision is, what the justification is, and where the accountability lies. At the moment, justification, accountability and even explainability are still very much based on the structures of accountability that people expect from humans. So when I think about the more existential issues around AI, I look to Alex Rutherford on beneficial AI. I think he has some fascinating ideas about programming the machine to be uncertain about what the human wants, so that the AI will always check in with the person in the moment. That takes us back to intentions, consequences and human agency, and the fact that we are fundamentally unpredictable: we do not know how we are going to feel until it actually happens.

Human-centred design makes you much more aware of bias upfront. It makes you consciously consider how you are going to offset historical or representational bias in the data, and consciously consider issues of fairness: deciding upfront, for instance, that you are going to have equalised odds, and which mathematical definitions of fairness you will adopt. Then there are legal and regulatory solutions, and there is a huge amount happening in that area. Obviously, some regulatory protections against discrimination already exist; in the US we have employment law and various other laws that protect against discrimination.
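Equalised odds, one of the mathematical definitions of fairness mentioned above, requires that a model's true-positive and false-positive rates match across groups. A minimal sketch of checking it, with hypothetical group labels and screening data, might look like this:

```python
from collections import defaultdict

def equalized_odds_gaps(samples):
    """samples: iterable of (group, y_true, y_pred) with 0/1 labels.

    Returns (tpr_gap, fpr_gap): the spread of true-positive and
    false-positive rates across groups. Equalised odds holds when
    both gaps are (near) zero.
    """
    counts = defaultdict(lambda: [0, 0, 0, 0])  # per group: [TP, positives, FP, negatives]
    for group, y_true, y_pred in samples:
        c = counts[group]
        if y_true == 1:
            c[1] += 1
            c[0] += y_pred
        else:
            c[3] += 1
            c[2] += y_pred
    tprs = [c[0] / c[1] for c in counts.values() if c[1]]
    fprs = [c[2] / c[3] for c in counts.values() if c[3]]
    return max(tprs) - min(tprs), max(fprs) - min(fprs)

# Hypothetical screening outcomes: (group, qualified?, shortlisted?).
data = (
    [("A", 1, 1)] * 8 + [("A", 1, 0)] * 2 + [("A", 0, 1)] * 1 + [("A", 0, 0)] * 9
    + [("B", 1, 1)] * 5 + [("B", 1, 0)] * 5 + [("B", 0, 1)] * 1 + [("B", 0, 0)] * 9
)
tpr_gap, fpr_gap = equalized_odds_gaps(data)
# Qualified group-B candidates are shortlisted less often: a large TPR gap.
```

Deciding upfront which such metric a system must satisfy, and what gap is tolerable, is exactly the kind of concrete design-stage decision the interview argues for.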

So a lot of the ways we detected discrimination in the past are not necessarily reliable. There is certainly a huge amount happening at the grassroots, with many people stepping up and legal challenges coming into play; it is working, though pretty slowly. The fourth level is the level of society: what are entrepreneurs doing, are they coming up with ways to reveal bias, to fight bias, to show bias, to build more ethical AI upfront, and what are communities doing to make AI fairer? There has been a huge reaction against the use of facial recognition technologies in cities and schools, and the pushback is definitely there, but it is very community-driven.

Kshitij: We understand that AI bias can enable discrimination and can violate the constitutional principle of equality. Is there any way to incorporate AI's dynamic ambit into a legal structure?

Helen: There are a lot of interesting applications for AI in the legal space when you look at how rapidly AI works: how incredibly fast AI can assist with cases, search patents, forecast behaviours and figure out outcomes, such as how judges are likely to rule. A few years ago there was quite a flurry of research in that space, and it was startling how much progress could be made just by applying basic keyword searches, which could go through huge amounts of documents to help humans in due diligence and to predict legal outcomes. What I saw when I was doing the Quartz work was how you forecast human behaviour and what you do with that outcome.



This is a fascinating area, because the closer people look, the more they realise that AI presents a unique challenge, whether it is proxy discrimination or affinity profiling, which people have referred to as friction-free racism, because you can hide behind affinities that are not true. There is even more interesting research coming out of the UK about how, from a legal perspective, we rely so much on intuition to pick up something that feels discriminatory, while AI works at a totally different, unintuitive level.

Abhivardhan: We know that in this field there are careers for AI scientists, developers, programmers and many other technical roles, but most of them deal only with the technology. When we talk about ethics, we understand that there are AI ethicists. What qualifications do tech and consulting companies look for in an AI ethicist?

Helen: I think what we see with AI ethicists is a real mix. There are philosophers, and a lot of people have come in from academia, and talking to them is really interesting; I was originally an engineer, so talking about philosophy is interesting to me. I spoke with one philosopher at Princeton. For companies that can afford it, it is really terrific to introduce that sort of person into the leadership team. The most common way I see ethics arise is through the legal channel, via corporate counsel; the other channel is brand image and how a company's AI is perceived. But now I see a new sort of ethicist being introduced, people who are more generalist: consultants, problem solvers in general, analysts, people with humanities backgrounds, corporate consultants, political scientists. These general, process-based thinkers are good at studying the literature, taking a senior role, working with people from across the organisation, getting people together and making them realise what they need to care about. They actually get people doing things differently, not completely changing the process but adding to and extending it, bringing more diversity of thinking and introducing more decision points. In the end, I think the most important thing the ethicists who are making the biggest difference are able to do is stand in front of the leadership and demonstrate that, through their process, they changed the course of a product for a group of users and added value. In other words, they are able to make the course change, and not just be there as reputation enhancement.


One of the researchers I talked to flipped the whole idea upside down. They reasoned that if humans were not good at forecasting decision-making, they should focus on the person whose decision is closest to the data and who has the most say in the decision. They tested the hypothesis that an AI trained this way would achieve higher accuracy at forecasting judges' decisions, and interestingly, that is what they found: the AI was able to forecast the judges' decisions. I think that leaves us with a question across the justice system, and we even use it in design: if you have the power, then your decisions are the ones that can be looked at from outside by an AI. That can count as performance management. In performance management, companies take the data of the people making the decisions, the managers, apply it to the employees, use it for higher-accuracy prediction, and use that to look for bias in decision-making, for example. As we put more AI into these systems and keep humans thinking about what it is telling us, we are able to be quite innovative about the way we use the data. We do not just look at the groups the algorithms are acting on; we look at the groups that are using the AI, and at what the researchers call study maps, and we start looking at patterns of human behaviour in roughly the opposite way from what was originally intended. That is very interesting.

Baldeep: We are witnessing a boom in AI startups in India. What potential challenges do you see, based on your experience in the US, and what ethical rules would you recommend?

Helen: I think the first thing is having ethics right upfront, as part of a human-centred design process, rather than bolted on later. You have to make sure that you have a diverse group of people at the table. Do not just start with the data; go into it with the idea that you have to optimise for more than one thing, and be prepared to understand, to measure, and to have a good debate about how to choose what to optimise and how to optimise it.

The computer scientist Markel Klein talks a lot about taking the self-regulatory structures of the financial markets and applying them to regulating AI, so that you can put in real-time monitoring that suits you; you do not have to monitor absolutely everything, and you do not even need to understand the model or the data. When you look at some of the things he talks about, the self-regulation of AI through mechanisms that already exist in the financial markets, you can really see how that would apply to AI as a whole. So part of the answer to your question is: how do Indian entrepreneurs, Indian regulators and Indian tech companies get hold of that kind of thinking and start thinking about the kinds of designs that could be built on that idea?

Kshitij: Can India and the US embrace a private-level AI partnership in some areas?

Helen: I think there are going to be partnerships, because some of the core things that matter are already in place and there is a huge business boom. India is a democracy, so we share many values, and we are always looking to find talent and to solve problems. I think there is a lot of opportunity for partnerships. I am not sure exactly where; you probably have a better idea of the core opportunities available, but I think there is a good foundation for discovering them.



I think the thing that is emerging in the US, although the pandemic has of course slowed things down, is that before the pandemic started, people were beginning to build some pretty clear ideas about what kind of regulatory systems might need to be built to ensure that AI is equitable and that people's claims about the fairness of an AI can be validated. It is pretty interesting to look at some of the work of Markel Klein, a computer scientist who has spent much of his research on the use of AI in financial markets.

Critical Take by Research Members

Manohar Samal, Research Member


Firstly, it is heartening that the institution took up this interview. I was extremely pleased to see a variety of discussions taking place on aspects like AI in post-epidemic recovery, the effects of AI-induced surveillance, the consequences of applying artificial intelligence in different nations, the inclusion of accountability in AI design, the significance of unbiased AI, the theory of agency in predictions by AI, regulatory mechanisms for AI, its effect on employment, and the dissection of AI behaviour by scientists and companies to create ethical and accountable AI. The interview reflected on laudable and intriguing research problems in AI ethics. I strongly agree with Helen when she stated that AI amplifies and finds social divisions, and that the consequences of such use differ because of the varied social lines among countries. This shows, paramountly, that a novel approach, keeping in mind the ground realities and unique social structures of India, is the only way forward for conducting AI ethics and law research and formulating India's model. This is my humble opinion.

Mridutpal Bhattacharya, Research Member

I second her answer as to whether bias in AI algorithms can be perfected, but I have a question about the possibility of an AI being self-conscious enough to draw up its own algorithm after a basic initialisation with a predetermined algorithm. In that case, the bias would be cut out without humans having to compromise on the quality of their work, for humans believe what they believe, and thus follow biases that cannot be altered without altering their personality and, in turn, their work efficiency.

Ananya Saraogi, Research Member

Taking the objective of Sonder Scheme into consideration, the questions raised broadly stuck to the area of AI ethics and law. The areas of concern were raised in questions ranging from the humanitarian aspects of AI to its sustenance in the future. The status of AI in the present COVID-19 situation was also dealt with. The whole interview focused on the viewpoint of people dealing with AI in the US. For example, when the question on AGI was raised, Ms Edwards first asked what the interviewers' understanding of AGI was. While COVID-19 was being discussed, a further query could have been raised: with a great fall in the economy, would investment in AI be affected, or would it grow at a greater rate given the uses of AI, especially for the growth of GDP?


Mehak Jain, Research Intern, Indian Society of Artificial Intelligence and Law.

Artificial Intelligence (hereinafter 'AI') has the potential to tackle and solve social problems, and this is being increasingly recognised by AI system developers and designers. Projects using AI for social good range from models predicting septic shock[1] to targeting HIV education at homeless youths.[2]


A Commentary on ‘How to Design AI for Social Good: 7 Essential Factors’ by Luciano Floridi et al.

The third essential factor for an AI4SG system is Receiver-Contextualised Intervention. As the term suggests, software should not intervene in a way that disrupts its user's autonomy. This is not to be confused with total non-intervention, whose efficiency is limited. A desirable level of intervention is neither unnecessarily intrusive nor entirely absent; the ideal lies somewhere in between, achieving the right disruption while respecting user autonomy. To achieve this, the study suggests that designers building these decision-making systems should consult their users, understand their goals, preferences and characteristics, and respect their users' right to ignore or modify interventions.


However, there is a limited understanding of what makes an AI system socially and ethically good. This study analysed 27 case examples of artificial intelligence for social good (hereinafter 'AI4SG') to deduce seven factors which may serve as preliminary guidelines to ensure that AI achieves its goal of being socially good. Each factor relates to at least one of the five principles of AI: autonomy, beneficence, non-maleficence, justice, and explicability. It is also pertinent to note that the seven factors are mutually dependent and intertwined. Let us review and analyse each factor.

The first factor, Falsifiability and Incremental Deployment, is derived from the principle of non-maleficence. Proposed by Karl Popper, the concept of falsifiability holds that for a hypothesis to count as scientific, it must be possible to disprove it; in simpler words, a hypothesis is deemed scientific if one of its possible outcomes would disprove the hypothesis itself. The study deems falsifiability the essential factor which can improve the trustworthiness of an AI system: since falsifiability requires the testing of critical requirements (such as safety), it acts as a booster for trustworthiness. After identifying falsifiable indicators, they should be tested incrementally, i.e. from the lab, by way of simulations, to the real world. A classic example of incremental deployment is Germany's approach[3] to regulating autonomous vehicles: manufacturers were first required to run tests in deregulated zones with constrained autonomy, and with increasing trustworthiness they were then allowed to test vehicles with higher levels of autonomy.

The next factor, derived again from the principle of non-maleficence, is Safeguards Against the Manipulation of Predictors. Data manipulation has been a persistent problem for a long time. A model which is too easy to comprehend is also open to being too easy to fool and manipulate.
Other risks, such as extreme reliance on non-causal indicators, also pose a threat to AI4SG projects. This necessitates the introduction of safeguards, such as limiting the number of indicators used in a project, to avoid unfavourable outcomes.



The next factor, Receiver-Contextualised Explanation and Transparent Purposes, is derived from the ethical principle of explicability. The study exemplifies the importance of the right conceptualisation when explaining AI decision-making and underscores the need for AI operations to be explainable so that their purposes are transparent. The right conceptualisation for an AI4SG system cannot be uniform; it varies with factors such as what is being explained, and to whom. Any theory comprises five components: a system, a purpose, a Level of Abstraction, a model, and a structure of the system. The Level of Abstraction (hereinafter 'LoA'), i.e. the conceptual framework, is a key component of the theory and is chosen for a specific purpose: for example, whether an LoA is chosen to explain a decision to the designer who developed the system or to a general user. Thus, AI4SG designers should opt for an LoA that fulfils the desired explanatory purpose. Transparency about the goal an AI4SG system is to achieve is also necessary, since opaque goals can prompt misunderstanding and lead to harm. The level of transparency needs to be thought through at the design stage itself, and it should be ensured that the goal with which such a system is deployed is knowable to its receivers by default.

The fifth factor is Privacy Protection and Data Subject Consent. Privacy has been hard-hit by previous waves of digital technology.[4] Respect for privacy is crucial for people's safety and dignity,[5] and it maintains social cohesion by retaining social structures and allowing deviation from social norms without causing offence to a particular community. Issues pertaining to consent are exacerbated in times of national emergencies and pandemics. West Africa faced a complex ethical dilemma with regard to user privacy during the Ebola outbreak of 2014.
The affected countries could have used the call-data records of cell phone users to track the spread of the epidemic; however, users' privacy would have been compromised in the process, and the decision was held up.[6] In other situations, where time is not of the essence, user consent can be sought before data is used. The study highlights different levels or types of consent that can be sought, such as an assumed consent threshold and an informed consent threshold. The challenge faced by the researchers in Haque et al.[7] is a perfect example: they resorted to "depth images" (which de-identify a subject) so as to respect and preserve user privacy. Likewise, other creative solutions to privacy problems should be formulated to respect the threshold of consent established when dealing with personal data.

Derived from the ethical principle of justice, Situational Fairness is the sixth factor. To understand situational fairness, we first need to understand algorithmic bias: the discriminatory action of an AI system due to faulty decision-making induced by biased data. Take the example of predictive policing software. The policing data used to train the AI system contains deeply ingrained prejudices along a panoply of factors such as race, caste, and gender, which might lead to discriminatory warnings or arrests, furthering an inadequate and prejudiced policing system. This underscores the importance of "sanitising the datasets" used to train AI. However, this is not to be confused with removing all traces of the important contextual nuance that can improve ethical decision-making.
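The "sanitising" step described above, removing protected attributes and flagging columns that nearly duplicate them while retaining genuinely contextual variables, can be sketched as follows. The column names, the toy records, and the 0.9 threshold are illustrative assumptions, not values from the study:

```python
def sanitise(rows, protected, proxy_threshold=0.9):
    """Drop protected columns and flag columns that nearly duplicate them.

    `rows` is a list of dicts (one per record). A column is flagged as a
    proxy if, for some protected column, knowing its value almost always
    determines the protected value (a crude association check).
    """
    def association(col, prot):
        # Fraction of records whose value in `col` maps to a single
        # value of the protected column.
        mapping = {}
        for r in rows:
            mapping.setdefault(r[col], set()).add(r[prot])
        determined = sum(1 for r in rows if len(mapping[r[col]]) == 1)
        return determined / len(rows)

    columns = [c for c in rows[0] if c not in protected]
    proxies = {c for c in columns
               for p in protected if association(c, p) >= proxy_threshold}
    cleaned = [{c: r[c] for c in columns if c not in proxies} for r in rows]
    return cleaned, proxies

# Hypothetical policing records: here 'postcode' perfectly predicts 'race',
# so it is a proxy, while 'prior_arrests' carries independent context.
records = [
    {"race": "X", "postcode": "11", "prior_arrests": 0},
    {"race": "X", "postcode": "11", "prior_arrests": 2},
    {"race": "Y", "postcode": "22", "prior_arrests": 1},
    {"race": "Y", "postcode": "22", "prior_arrests": 0},
]
cleaned, proxies = sanitise(records, protected={"race"})
```

The point of the sketch is the commentary's caveat: sanitisation removes the protected attribute and its near-duplicates, but it deliberately keeps columns like the arrest count that, while imperfect, carry the contextual nuance ethical decision-making needs.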

References
[1] Henry, K. E., Hager, D. N., Pronovost, P. J., & Saria, S. (2015). A targeted real-time early warning score (TREWScore) for septic shock. Science Translational Medicine, 7(299), 299ra122.
[2] Yadav, A., Chan, H., Jiang, A., Rice, E., Kamar, E., Grosz, B., et al. (2016a). POMDPs for assisting homeless shelters: computational and deployment challenges. In N. Osman & C. Sierra (Eds.), Autonomous agents and multiagent systems. Lecture Notes in Computer Science (pp. 67–87). Berlin: Springer.
[3] Pagallo, U. (2017). From automation to autonomous systems: A legal phenomenology with problems of accountability. In Proceedings of the twenty-sixth international joint conference on artificial intelligence (IJCAI-17) (pp. 17–23).
[4] Nissenbaum, H. (2009). Privacy in context: Technology, policy, and the integrity of social life. Stanford: Stanford University Press.
[5] Solove, D. J. (2008). Understanding privacy (Vol. 173). Cambridge, MA: Harvard University Press.
[6] The Economist. (2014). Waiting on hold: Ebola and big data, October 27, 2014.
[7] Haque, A., Guo, M., Alahi, A., Yeung, S., Luo, Z., Rege, A., Jopling, J., et al. (2017). Towards vision-based smart hospitals: A system for tracking and monitoring hand hygiene compliance, August.


This is where situational fairness comes into play. Depending on the circumstances, the AI should behave equitably so as to treat everyone fairly. The study presents an interesting example: a word processor should interact with all its human users identically, without taking into account factors such as ethnicity, gender or caste; however, when used by a visually impaired person, it should be allowed to deviate and interact in a non-equal manner to ensure fairness. Variables and proxies irrelevant to outcomes should be removed, but the variables necessary for inclusivity should be retained.

Lastly, Human-friendly Semanticisation is the seventh factor, derived from the ethical principle of autonomy. The crux of incorporating this best practice is determining which tasks should be delegated to an AI system and which should be left for humans to give meaning to. Consider the case of Alzheimer's patients. Research[8] noted three points with respect to the carer-patient relationship. First, carers remind patients of the activities in which they participate, e.g. taking medication. Second, carers, by way of empathy and support, provide meaningful interaction to patients. Third, annoyance due to repetitive reminders to take medication may weaken the carer-patient relationship. Researchers[9] successfully developed an AI system which ensured that reminders did not translate into annoyance, leaving the way clear for carers to provide meaningful support to the patient. This is a perfect example of how human-friendly semanticisation can take place while tedious and formulaic tasks are left to AI.

In conclusion, the article gives a fair analysis and lays a foundation for future research in the field of AI4SG systems. It brought forth seven essential factors and their corresponding best practices, analysed them with the help of examples and case studies, and suggested incorporating multiple perspectives into the design of AI decision-making systems to effectively reach the goal of an ideal AI4SG system. It lays the groundwork for future AI4SG systems to reach the end goal: AI which is socially and ethically responsible and works for the greater good.

[8] Burns, A., & Rabins, P. (2000). Carer burden in dementia. International Journal of Geriatric Psychiatry, 15(S1), S9–S13.
[9] Chu, Y., Song, Y. C., Levinson, R., & Kautz, H. (2012). Interactive activity recognition and prompting to assist people with cognitive disabilities. Journal of Ambient Intelligence and Smart Environments, 4(5), 443–459.



Avani Tiwari, Research Intern, Indian Society of Artificial Intelligence and Law.

The nature of recruitment is changing expeditiously with the growth of the workforce population. Technology-enabled change is the need of the hour for both employers and employees. It is certain that humans, right from children to adults, fancy games: they find them engaging, are motivated by rewards, and are instinctively drawn towards play. Lately, "gamification" has surfaced as a new buzzword among HR professionals. This has led to the origination of a new term, "recrutainment", a combination of the words "recruitment" and "entertainment". A lot of companies nowadays are resorting to a gamification strategy, considering the advantages it has to offer. The latest technological trends, like artificial intelligence, virtual/augmented reality and, of course, the Internet age, have added new dimensions to it.


“Gamification” of Recruitment Process: A Potential Disruption or A Hype?

WHAT IS MEANT BY "GAMIFICATION"/"GAMIFICATION STRATEGY"?

A gamification strategy is defined as "the process of taking something that already exists – like a software application or online community – and using gaming techniques to motivate consistent participation and long-term engagement".[1] Gamification works on game mechanics, the elementary units of gamification: they make up the gameplay through rules and create an engaging experience through rewards.[2] Game dynamics like competition, surprise, progress and achievement are used in line with the game mechanics to persuade and engage participants via a gaming platform. Consider a platform where employees have customised dashboards on which they can track their progress, compare it with that of others, and receive rewards on reaching different levels: this is one illustration of the use of gamification.

The recruitment process involves steps like identifying a job vacancy, communicating it to potential hires, accepting and reviewing applications, screening, shortlisting and finally hiring. The traditional process requires candidates to submit a CV/resume and cover letter, complete assignments, answer questions, and so on. Some of the major complications that traditional methods pose are as follows:

Time-consuming and lengthy process: Recruitment is a multi-staged process, and a large number of applications make it cumbersome. This leads to spending a substantial amount of time on recruiting rather than on something more productive, innovative and income-generating. According to LinkedIn[3], "only 30% of companies are able to fill a vacant role within 30 days and the other 70% take anywhere between 1 - 4 months". As per a 2016 survey, 56% of recruiters responded that lengthy hiring procedures lead to their inability to recruit good candidates.[4] It was also found that a lengthy hiring process would lead 57% of job seekers to lose interest in the job.[5]

Bias and lack of diversity: Traditional methods of recruitment rely largely on humans, whose decisions often suffer from conscious or unconscious biases such as confirmation bias, the affect heuristic, expectation anchoring, beauty bias, intuition and judgment bias. Diverse teams tend to tackle problems efficiently and devise creative solutions; however, constraints like the location of the hiring process and recruiters' biases tend to hinder the recruitment of diverse teams. As per the LinkedIn Global Recruiting Trends 2017 report, recruiting more diverse candidates was ranked as a top near-term trend by 37% of recruiters.

Hiring requires a skilled and dedicated workforce: Small organisations find it difficult to maintain a dedicated and skilled team of HR professionals to recruit the candidates best suited to them.
Aversion of candidates towards traditional application process: In a survey conducted by MRI Network in 2019[6], 26% complained that they were dissatisfied with the traditional process as it was lengthy and lags communication. THE INDIAN LEARNING/JULY 2020 29

The Indian Learning | e-ISSN: 2582-5631 | Volume 1, Issue 1 (2020)


Of those who were dissatisfied, 71% said that having to upload a resume and then manually enter information already available on the resume was an issue, 58% said that submitting a resume for a job for which you are qualified, while knowing it will not be seen or reviewed by a real person, was a problem, and 40% said the lengthy process itself was dissatisfying.


Filters out genuine applications: It is to be noted that a considerable number of candidates apply to many places at random without researching the organization they are applying to. Most of them do not even have proper knowledge of the kind of work they are applying for, or a genuine interest in it; they simply send their CV and cover letter and depend on the contingency of getting selected. This inflates the number of applications and burdens recruiters. If games are used instead, only those who are committed enough to get the job and complete the game would stay; considering the time required to complete the game, the less interested would step out, and only relevant applications would remain.

Overcomes the shortcomings of CV/resume-based recruitment: A CV can be misleading. It fails to give a complete and real insight into the talent and skills of a candidate, and is fruitful only to the extent of knowledge about a candidate's educational background, professional experience and so on. The question which needs to be addressed here is whether the educational background of a person is a true test of his/her skills. It can be answered in the affirmative to a certain extent, but what about skills like planning, speed, goal orientation, flexibility, learning ability, accuracy and monotony tolerance? Can such skills be assessed simply through CVs, cover letters and assignments? The obvious answer is “no”. Games designed to assess a particular skill or skill set would be the logical substitute.

Helps in building diversified teams: Diversity in teams has many benefits, from increased creativity and innovation, to better problem solving and decision making, to increased profits.
Traditional recruitment struggles to build diversified teams for the following reasons: various biases by recruiters (conscious or unconscious); candidates with related educational backgrounds, degrees from elite institutions and previous work experience in a similar field are preferred, conveniently leaving out those who are just starting a career or who want to change their sector/industry, even though they may possess great skills and the ability to adapt; and pre-employment tests like personality tests might screen out an introvert who nonetheless has skills that could add value to the organization.

Interactive, fun and engaging: Gaming is more interactive and fun, and it showcases company culture. If a company is a fun place to work, more quality candidates will want to work there.


The traditional recruitment process is tedious. It involves reviewing a multitude of applications in the initial stages, and screening and interviewing also tend to be exhausting for organizations. Despite that, most organizations are not able to fill a vacancy in a reasonable time and incur heavy expenses during the whole process. Even when candidates are hired, it is not certain whether they are apt for the job profile or possess the practical skills for it; CVs/resumes can be misleading at times. To address such problems, technology is reshaping the recruitment process, and the COVID-19 crisis has accelerated its adoption. Gamification of the recruitment process is one such use of technology. Following are a few advantages that ‘Gamification of Recruitment Process’ has to offer:

Easy evaluation of candidates: Gamification reduces the need for experienced HR professionals in certain stages of the recruitment process. One such stage is the reviewing of applications, which ordinarily requires skilled professionals for good results; with a game, it can be done without much difficulty or experience, and with good quality. For example, sales, ethical hacking and stock trading jobs could use games/virtual platforms specifically designed for those purposes.

Less time-consuming: Time to hire is reduced when games are used instead of CV/resume-based applications. Even the assignments that are required to be submitted as part of the process take several days to review, and back-and-forth emails make the process tedious. Games can give automatic test results and screen candidates immediately, rather than shortlisting and then screening, which is time-consuming.
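The claim about automatic test results and immediate screening can be illustrated with a toy example. The scoring weights, the candidates and the 70-point cutoff below are hypothetical assumptions for the sketch, not figures from any real recruitment game.

```python
def game_score(correct, total, seconds_taken, time_limit):
    """Score a game round out of 100: accuracy weighted 70%, speed 30%
    (an illustrative rule, not a real platform's formula)."""
    accuracy = correct / total
    speed = max(0.0, 1 - seconds_taken / time_limit)
    return round(100 * (0.7 * accuracy + 0.3 * speed))

def screen(candidate_scores, cutoff=70):
    """Automatic screening: return names meeting the cutoff, best first."""
    passed = [(s, n) for n, s in candidate_scores.items() if s >= cutoff]
    return [name for _, name in sorted(passed, reverse=True)]

# Hypothetical game results, scored the instant the round ends
scores = {
    "kiran": game_score(correct=18, total=20, seconds_taken=300, time_limit=600),
    "meera": game_score(correct=10, total=20, seconds_taken=550, time_limit=600),
}
shortlist = screen(scores)  # no manual shortlisting step needed
```

The point of the sketch is the pipeline shape: the score exists the moment the game ends, so shortlisting and screening collapse into one automated step.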

Everything has its pros and cons; technology is no exception. Even the gamification strategy, which is a way to overcome the pitfalls of the traditional recruitment process, suffers from certain limitations: the assumptions behind the technology are unproven; there is no certainty about the quality and the gains of gamification; when AI is used for gamification purposes, biases may arise, as AI works on the basis of the data fed to it and such data might carry the bias of the programmer; and some platforms use facial analysis as a test component while failing to consider that such metrics have little or no relation to job performance. Following are some points to note before implementing a gamification strategy: make a reasoned, judicious decision as to the type and extent of gamification, and always ask whether it actually solves the problem you face; make these decisions with a long-term view in mind; remember that not everyone enjoys competition, and some might even go to unethical lengths just to claim the rewards on completing the game; don't go too deep too early, and maintain a balance; and make sure you convince candidates that the game used is the best, or at least a better, way of selection, and that it is efficient and accurate in assessing the skill it aims to assess.

PRACTICAL EXAMPLES In 1999, Colonel Casey Wardynski (then Chief Economist of the U.S. Army) and his team conceived "America's Army", the first military-developed video game targeted at young teenagers. The goal of the game was to collect "honour points", and those tenacious enough to work their way through the mandatory medical training were let loose with digital grenade launchers, Humvees and heavy machine guns. The game was launched for free as a way of engaging and captivating future recruits.[7] The game is now coming up with its 50th version.




Back in 2004, Google placed a billboard reading “{first 10-digit prime found in consecutive digits of e}.com” on Highway 101 in Silicon Valley to entice and recruit brilliant, math-minded people. Solving the puzzle led to a web page with another problem, and solving that led to a page on Google Labs, which said that Google was searching for the best engineers and that being able to reach the page meant the visitor was one of them.[8]
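The billboard's puzzle is small enough to verify in code. The sketch below, using only the Python standard library, generates digits of e from the series e = Σ 1/k! and runs a deterministic Miller-Rabin primality test over each 10-digit window; it reproduces the widely documented answer, 7427466391. The digit count and series depth are rough guard margins chosen for this sketch.

```python
from decimal import Decimal, getcontext
from math import factorial

def e_digits(n):
    """First n digits of e (including the leading 2), via e = sum 1/k!."""
    getcontext().prec = n + 10          # a few guard digits
    e = sum(Decimal(1) / factorial(k) for k in range(n + 20))
    return str(e).replace(".", "")[:n]

def is_prime(n):
    """Deterministic Miller-Rabin, valid far beyond 10-digit inputs."""
    bases = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    if n < 2:
        return False
    for p in bases:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def first_10_digit_prime_in_e():
    digits = e_digits(150)              # the answer lies within 150 digits
    for i in range(len(digits) - 9):
        if digits[i] != "0":            # require a genuine 10-digit number
            candidate = int(digits[i:i + 10])
            if is_prime(candidate):
                return candidate
```

Running `first_10_digit_prime_in_e()` scans roughly 140 windows, a task that takes milliseconds today but made for an effective self-selecting recruitment filter in 2004.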

BOTTOM LINE A disruptive technology or disruptive innovation is defined as “an innovation that helps create a new market and value network and eventually goes on to disrupt an existing market and value network”.[11] “Hype”, meanwhile, is simply slang for “hyperbole”, which means an exaggeration; the Cambridge Dictionary defines hype as “a situation in which something is advertised and discussed in newspapers, on television, etc. a lot in order to attract everyone's interest.” As per NASSCOM's report on “Applied Games in India”, published in November 2016, the global applied games market was expected to reach USD 5.5 billion by 2020, growing at a CAGR of 16%, while the Indian applied gaming market was estimated to reach 66-69 million in 2020, growing at a CAGR of 14-16%. As per the report, India could be the next hub for gamification and applied games. As per the 2019 Recruitment Trend Study by MRI Network, 62% of respondents said that the use of external recruiters has not changed despite the introduction of new recruitment technology, and 63% said that the length of the hiring process has not changed over the past years. Coming to the candidates, 74% of them said that they are satisfied with the traditional recruitment process. It can be inferred from the data presented above that gamification has given rise to new markets and is expected to grow in future. But when it comes to recruitment, the majority (recruiters as well as candidates) are still reluctant to change their traditional process and are satisfied with it. Considering the Gamification Strategy's advantages over the traditional process, and the fact that many big organizations like ISB, MakeMyTrip and even the U.S. Army are using it, it would not be wrong to say that it is something more than just “hype”.
For the purpose of recruitment, although it cannot be considered a disruption in HR as of now, it evidently has the potential of becoming one. Amidst the COVID-19 situation, “Gamification” may even become the new normal.


Some other international examples include British Civil Service exam aspirants answering MCQs based on video scenarios as part of the selection process, and McKinsey's complex ecological computer games. India is not behind in this race either: MakeMyTrip designed a captivating training platform with the help of Mind Tickle (a leading gamification player), and new hires were required to play the game on that platform to familiarize themselves with the culture and values of the organization in an effective manner.[9] The Indian School of Business, Hyderabad is another such example: to reach out to thousands of its alumni, it designed a hot air balloon race and a trivia game to help participants interact with others and learn about institutional updates.[10] Various game components like rewards, leaderboards etc. were used. Other Indian examples include HCL (gamification to predict whether a person will join after the interview or not) and Marriott's game “My Marriott Hotel”, which simulates the real experience of running a hotel and was designed to test and hire freshers.

Sarmad Ahmad, Chief Managing Editor, The Indian Learning [Editorial Exclusive] Introduction On the 8th of April 2020, the Committee of Ministers of the Council of Europe released a recommendation report that details extensively the potential impact of the use of algorithms on the human rights protected under the European Convention on Human Rights, 1953. This report is inclusive, broad and holistic in its narrative, and covers a wide range of possibilities and scenarios wherein regular use of algorithms can hinder the regime of European human rights law.


An Analysis of Recommendation CM/Rec(2020)1 of the Committee of Ministers of the Council of Europe

The objective of the report is laid out at its beginning, citing two critical reasons behind the urgent need for a body of ethical standards on the use of algorithms: the unprecedented use of digital applications across various sectors of modern life, and the evolving use of algorithmic systems whose impact and scope are still not fully comprehended. The aim is to ensure that the use of algorithmic systems does not further fortify the existing biases and discrimination still prevalent in society, which the body of human rights law seeks to curb and regulate through its enforcement. In elaborating on the human rights impact of these systems, the Committee of Ministers recognises that public and private sector initiatives towards bodies of ethical standards and regulations are already underway, but such attempts in no manner relieve the Council of Europe of its duty to protect the Convention.

The beginning set of recommendations requests all member states to take the following actions with respect to the guidelines presented later in the report, namely:
- review their legislation, frameworks, policies and their own practices with respect to the procurement and development of algorithmic systems;
- create effective legislation governing private-sector efforts towards the development of algorithmic systems, to ensure their compliance with the international regime;
- engage in regular and transparent consultation and dialogue with all involved stakeholders and members of their society;
- prioritise the inculcation of public and private expertise in the subject area;
- promote the importance and necessity of digital literacy; and
- take into account any environmental impact of large-scale digital services, striving towards sustainable models with optimised use of energy and natural resources.
As per the 2nd guideline under section (A) of the report, the body recognises algorithmic systems as:

“applications that, often using mathematical optimisation techniques, perform one or more tasks such as gathering, combining, cleaning, sorting, classifying and inferring data, as well as selection, prioritisation, the making of recommendations and decision making. Relying on one or more algorithms to fulfil their requirements in the settings in which they are applied, algorithmic systems automate activities in a way that allows the creation of adaptive services at scale and in real time”. This definition is very broad and inclusive, and it takes a rational-centric approach to AI, detaching itself from a human-centric approach to assessing AI. It also essentially “future-proofs” the definition against developments we could witness in AI research and development: any potential form of algorithmic system, while likely to have unique traits and functions, would still fit within the scope of this definition, as it would include all the elements mentioned in it.




Speculations The recommendations and guidelines listed in this report are comprehensive, envisioned as effective in their potential application, and truly progressive and one of a kind in the international regime on AI research and development. The report serves as an effective bridge between the established regime of human rights governance and 21st-century, technologically fuelled socio-economic development, ensuring that the order and protection guaranteed by the former do not get lost in the chaotic and fast-paced developments of the latter. It can serve as a guide for the drafting of effective legislation in the European states, and in many other states across the international regime, eventually creating a whole new domain of human rights law addressing its protection against the disruptive factors of the post-modern world.



While Section A of the guidelines goes on to explain the intricacy, utility and potential issues that could arise out of regular use of algorithmic systems, Sections B and C effectively chart out recommendations towards member states and the private sector within those states, respectively. Alongside a common set of recommendations to both recipients, namely the management of effective data systems that harbour accurate analysis and modelling capabilities, the protection and observation of human rights, respect for personal data, and the maintenance of accurate representations of data, each party is given a set of guidelines to observe independently of the other. States are recommended to make any legislation concerning algorithmic systems transparent and accountable, whilst upholding the principles of democracy and ensuring consistent review of these systems to minimise the occurrence of error. The private sector, on the other hand, is recommended to invest its efforts in developing systems that eliminate biases and discrimination amounting to violations of human rights, and to ensure transparency and accountability towards consumers in the creation of goods and services that utilise these systems.

Aditya Gaggar, Advocate, Supreme Court of India. Introduction Before the dawn of intellectual property, the right of ownership was restricted to tangible assets, but the development of intellectual property rights has doubled the ambit of man's rights, granting him exclusive rights over a variety of intangible assets too, such as musical, literary and artistic works; discoveries and inventions; and words, phrases, symbols and designs, which were not capable of being owned earlier, so much so that today it is this type of ownership which is most sought after.


IPR - The invisible hand at the helm of Information Technology Industry's Business Strategy

Introduction Over thousands of years of evolutionary history, man has constantly redefined himself and the things around him. From the ancient wheel to the ultra-modern spacecraft, he has constantly engaged in creative innovation. Though this innovation was slow initially, it has considerably picked up pace in the past few decades, so much so that it led Bill Gates to say,

Never before in history has innovation offered the promise of so much to so many in so short a time. Developmental History of IPR The concept of intellectual property has been traced back to the monopolies granted by the Byzantine Empire. Similarly, in ancient Greece, a one-year monopoly was given to cooks to exploit their recipes, while statutory legislation in the Senate of Venice provided exclusive privileges to people who invented any machine or process to speed up silk-making. Yet all these were rare and widely spaced instances, and the principle of protection of IPR was not commonly accepted then. The emergence of the modern concept of intellectual property as we know it is fairly recent. Intellectual property has quickly evolved from an obscure, alien concept unknown to the nomadic community into an all-pervasive one in the modern Information Age, where protection is sought for almost every new idea under the category of intellectual property rights. Lawrence Lessig has even gone to the extent of saying,

The Americans have been selling this view around the world: that progress comes from perfect protection of intellectual property. Commenting on the wide-scale IPR infringements openly taking place in China and emphasizing the importance of a positive IPR regime in the economic growth of any country, Dan Glickman is said to have stated,



Today, one cannot exploit the true market potential of his creation unless he is able to protect his copyright. In a globalised economy, the protection of intellectual property is of even greater importance, as it becomes impossible to do business without a strong IPR regime. The IT industry is one of the most globally distributed industries, with products spread evenly across markets and evolving rapidly, which makes it an IPR-intensive industry. Indeed, it would not be wrong to say that IPR is the most crucial factor in determining the business strategy of the software industry. In India, as in most places the world over, a computer program is considered a literary work and is usually protected under copyright law. Common business strategies for the protection of computer software include beta testing, trial versions, the release of limited-feature or limited-accessibility versions, and the release of earlier versions for free. Even though there are immense benefits to a business strategy focused on IPR management, there are numerous challenges to its implementation, including large-scale piracy and the complexity of the legal procedures involved.

If China wants to be a constructive, active player in the world economy, it's got to respect intellectual property rights or it makes it pretty impossible to do business with them. IPR PROTECTION TO INFORMATION TECHNOLOGY PROTECTION UNDER COPYRIGHT LAW

Courts have also held that a web page's look, layout and appearance are protected by copyright, as are musical works stored or created electronically. Whether computer languages, macros, parameter lists, communications protocols and digital typefaces are protected by copyright is yet to be decided by the courts.

PROTECTION UNDER PATENT ACT While some countries, such as the USA, protect computer software like any other invention, the Indian Patents Act, 1970 excludes a computer program or algorithm per se from patentability, despite the fact that a patent could probably provide the most comprehensive and secure protection. No international convention grants patents to computer software either. However, in India, computer programs may be patentable in conjunction with hardware, or where the claims focus on the systems, processes and methods used to achieve a solution to a specific problem rather than on the algorithms alone.




Under the copyright laws of different jurisdictions the world over, software and computer programs fall within the ambit of literary works, and in India they are accordingly protected under the Copyright Act, 1957. Copyright makes it possible to regulate, by subsequent contracts, the way the public can access these works. It also entitles the owner to prevent the copying of the protected work, the distribution of copies and the preparation of derivative works. Both the TRIPS Agreement, 1995 and the WIPO Copyright Treaty (WCT), 1996 prescribe copyright protection for computer programs, and copyright could be considered the most widely used and appropriate means of software protection. Software programs, user manuals, databases, websites and other information technology works in India are protected under the Copyright Act as literary works, and courts recognise the writing of a computer program as a creative "art form". Updates or enhancements to software can also obtain independent copyright protection, and the fact that a computer program is created using well-known programming techniques, or contains unoriginal elements, may not be a bar to copyrightability if the program as a whole is original.

Under the Copyright Act, databases are given protection as "compilations". To receive copyright protection, a database must have been independently created by the author, and the selection and arrangement of its components must be the product of the author's exercise of skill and judgment, not a purely mechanical exercise; however, "creativity" in the sense of novelty or uniqueness is not required. It is important to note that the creator of a database acquires copyright only in the database, not in its individual components. Similarly, the underlying mathematical calculations, algorithms, formulae, ideas, processes or methods contained in information technology are not protected; only their expression is.

PROTECTION AS TRADE SECRET An IT formula, pattern, compilation, program, method, technique or process may also be protected under trade secret law where duties of confidence exist, either at law or by virtue of an agreement. Thus, an owner can exploit his trade secret through confidentiality agreements, both with his employees and with his customers. Hidden aspects of websites and software can thus be protected under trade secret law. In fact, coupled with copyright or patent protection, this is the most effective way to protect computer software.

PROTECTION AS DESIGN Integrated circuit topographies (or computer chips), which form an essential part of many software-allied gadgets, are protectable in India under the Semiconductor Integrated Circuits Layout-Design Act, 2000. Computer hardware designs and plans may also receive design protection.

IMPORTANCE OF IPR IN THE IT INDUSTRY As the inventor of the WWW, Tim Berners-Lee, once acknowledged,

Intellectual property is an important legal and cultural issue. Society as a whole has complex issues to face here: private ownership vs. open source, and so on. IPR plays a role of vital importance in almost every industry, and especially so in the IT industry. IPR promotes greater R&D (Research and Development) and innovation, as owners enjoy improved marketability for their products and services. Even if a copyright owner does not want to enter the open market himself, he can license his work to others and enjoy the royalties. This also increases healthy competition between companies, which are forced to focus on product innovation to maintain the upper hand in the market. It benefits the consuming public as well, by raising the standards, quality and variety of products available in the market. Thus it is a win-win situation for everybody. A strong IPR regime is even more important for a developing nation, as it helps in technology transfer from developed nations. Statistical data shows that a strong IPR regime promotes investment in a country's ICT sector, and it promotes rapid growth and development not only in the IT sector but in all ancillary sectors too. It is even more important for countries like India, where the IT sector forms such a large share of the GDP pie.


Trademark rights arise under the Trade Marks Act, 1999. Trademarks can be used to protect the goodwill associated with the names, slogans, symbols and other marks used by businesses in the information technology industry, and software products themselves receive protection as trademarks: Windows, the MS Office suite, Acrobat and Workshare are all trademarks of their respective manufacturers. Domain names may also garner trademark rights if they meet the statutory requirements for trademarks. The IN Registry, under the authority of the National Internet Exchange of India (NIXI), has been appointed by the Government of India for the registration of .in domain names; for other generic domain names, the rules promulgated by the Internet Corporation for Assigned Names and Numbers (ICANN) apply. Trademark owners may be able to obtain relief against cybersquatters under trademark law (where the dispute is in respect of a .in domain name).

It is not only the developing nations that benefit from a strong IPR framework; the developed nations benefit infinitely more. It makes it easier for them to outsource their work to developing nations without fearing piracy and market capture. They also become the research hubs and hotbeds of innovation, the very fact that keeps them ahead of other nations.


IPR AND BUSINESS STRATEGIES Unless one is able to protect his copyright, one cannot exploit the market potential of his creation. With software this is aggravated further: because software can be copied so easily, the protection of intellectual property rights becomes all the more relevant. The IT industry uses IPR to its fullest by devising a strategy in which it safeguards its newest software against piracy while simultaneously releasing earlier versions for free. This helps it exploit the niche segment of the market by realizing the best possible consideration from it. Software, by its very nature, is interdependent with other basic software and operating systems. The industry provides an earlier version for free, or deliberately does not check its piracy: free use of an earlier version gives dominance in the market through its extensive use by groups with a lesser capacity to pay. In this manner the software industry keeps the majority of users attracted to its basic software or operating systems, which ensures a ready market for future versions that operate in the same environment as the earlier ones. An example of this strategy is the marketing done by Adobe, which provides earlier versions of Adobe Reader and Flash Player to the public free of charge but charges heavily for the latest versions used by professional users. A similar strategy is applied by Microsoft with its famous operating system, Windows. It is said that the software industry deliberately does not take anti-piracy steps against the majority of users who are not using genuine programs, even though it is fully aware of them.
This is apparently because the company knows that the majority of such users are not capable of buying its latest software; if they are harassed through anti-piracy laws, they are very likely to switch to similar programs from its competitors. In this manner the protection of IPR, and for that matter the deliberate non-protection of IPR in given circumstances, is part of the business strategy of the IT industry.



In modern times the focus of almost every industry has shifted to the management of its intellectual property rights, or rather its intellectual property resources. This is even more necessary in the field of information technology, which innovates most rapidly, as evidenced by the enormous sums software companies pour into their R&D departments every year. The "inventors" in such companies also happen to be among the highest-paid people. Moreover, almost every IT company has a long list of IPR litigation running in various courts at any time, with eminent and, needless to say, expensive lawyers fighting its cases. Many even have a separate, dedicated department that just keeps track of the IPR formalities that arise from time to time.

CHALLENGES TO ENFORCEMENT OF IPR IN THE IT INDUSTRY Though the protection of intellectual property rights and their enforcement has numerous advantages, as already laid out, several major hurdles stand in the way of their proper realization. Foremost among them, of course, are the legal challenges to enforcement, which require expert advice and dedicated legal practitioners, and these do not come cheap. The longevity of the protection afforded to IT products also varies from country to country, not least because of differences in the ways IPR protection is granted to them. Bill Gates has been quoted as lamenting that,

Intellectual property has the shelf life of a banana. New regulations regularly throw up new challenges for businesses working at the cutting edge of technology, which must keep one eye on rapidly evolving market conditions and another on the courts. This increasing complexity of regulatory law is another major cause for concern for the IT industry. In the words of Eric Allman,

"The intellectual property situation is bad and getting worse. To be a programmer, it requires that you understand as much law as you do technology." However, just as in an Isaac Asimov story, the greatest challenge by far to the protection of intellectual property rights and the forward march of technology comes not from law but from technology itself. Lawrence Lessig has brought this irony to light thus,

"Notwithstanding the fact that the most innovative and progressive space we've seen, the Internet, has been the place where intellectual property has been least respected."


The IT industry often follows another strategy in which it releases an unfinished product, with limited features and in a controlled manner, prior to its formal full release: the beta version. This gives the company an opportunity to advertise its product while collecting feedback from potential users, which helps it improve features and fix bugs. It also benefits users, who get to use the software legally and for free for a prescribed period and judge its usability for themselves. Such a practice is common among companies providing internet utilities and other internet-intensive tools and applications. A related approach is the trial version, available for free for a limited period, after which it expires and becomes unusable, with a dialogue box prompting the user to buy the full edition instead; this is quite common in computer games. An IT company may also offer a toned-down edition with fairly limited functions, the free version, and suggest buying the pro version for more features. This technique is commonly seen in antivirus software such as AVG, Avast, Norton, McAfee and Kaspersky.

Amarendar Reddy Abdulla, NALSAR University of Law, Hyderabad

Artificial intelligence is increasingly present in our lives, reflecting a growing propensity to turn to algorithms for advice, or for decisions altogether. AI is capability exhibited by machines: smartphones, tablets, laptops, drones, self-driving vehicles, robots, facial-recognition systems, military systems, online tools and the like, which may take on tasks ranging from household support and welfare to policing and defence. The role of AI in facilitating discrimination is well documented and is one of the key issues in today's ethics debate. Artificial intelligence is a tool that humanity is wielding with increasing recklessness. Though it can serve our common good, codes of ethics, laws, government accountability, corporate transparency and the capacity for monitoring remain open questions. AI regulation is not just a complex environment; it is uncharted territory, spanning human leadership, the emergence of machine learning, automation, robotic manufacturing and more. AI is largely seen as a commercial tool, but it is quickly becoming an ethical dilemma for the vast reach of the Internet. Can international human rights law, or any law, help guide and govern the emerging technology of artificial intelligence? The latest report on ethically aligned design from the Institute of Electrical and Electronics Engineers, the world's largest organization of technical professionals, states as its first principle that AI should not infringe international human rights. Last year, human rights investigators from the United Nations found that Facebook had exacerbated the circulation of hate speech and incitement to violence in Myanmar. The UN International Telecommunication Union's second annual AI for Good Global Summit in Geneva opined that for AI to benefit the common good, it should avoid harming fundamental human values, and that international human rights provide a robust and global formulation of those values. Artificial intelligence can drive global GDP and productivity, but it will surely have a social cost.
The rising ubiquity of AI appears to coincide with accelerating wealth inequality disrupting the business world, and AI creators are not employing best practice or effective management. When it comes to public trust, global institutions to protect humanity from the potential dangers of machine learning are noticeably absent. Ironically, regulating AI may not be achievable without better AI. Special emphasis must be placed on the prospect of treating AI as an autonomous legal personality, a separate subject of law and control.


Artificial Intelligence Needs Law Regulation




Artificial intelligence rests on algorithmic decision-making that is largely digital and employs statistical methods. Earlier algorithms were pre-programmed and unchanging; modern learning systems raise new challenges, since they suffer from biased data and measurement error just as their deterministic predecessors did. Another concern is the impact of error rates. For instance, US Customs and Border Protection photographs every person entering and exiting the United States and cross-references the images with a database of photos of known criminals and terrorists. In 2018 alone, approximately 8 crore (80 million) people arrived in the US; even if the facial recognition system were 99% accurate, the 1% error rate would result in about 8 lakh (800,000) people being misidentified. What would the impact be on their lives? Conversely, how many known criminals would slip through? Aggregated across all countries and years, the numbers would be far larger. There are many documented cases of AI gone wrong in the criminal justice system. The use of machine learning to risk-score defendants is advertised as removing the known human bias of judges in sentencing and bail decisions. Predictive policing seeks to allocate often-limited police resources to prevent crime, though there is always a high risk of mission creep. Because AI can process and analyse multiple data streams in real time, it is no surprise that it is already being used to enable mass surveillance around the world. The most pervasive and dangerous example is facial recognition software, which is used not just to scrutinize and identify, but also to target and discriminate. AI can be used to create and disseminate targeted propaganda: machine learning powers the data analysis with which social media companies profile users for targeted advertising, and it is also capable of creating realistic-sounding video and audio recordings of real people.
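The arithmetic behind the misidentification figure above is simple enough to check directly; the crossing count and accuracy figure are the assumptions quoted in the text, not independent data.

```python
# Back-of-the-envelope check of the border error-rate figures quoted above.
arrivals = 80_000_000        # ~8 crore people arriving in the US in 2018
accuracy = 0.99              # hypothetical facial-recognition accuracy
error_rate = 1 - accuracy    # 1% of crossings flagged in error

misidentified = int(arrivals * error_rate)
print(misidentified)         # ~8 lakh (800,000) people misidentified
```

Even a seemingly high accuracy thus produces errors at a scale that matters once the base population is large.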
Hiring processes have long been fraught with bias and discrimination, and algorithms have long been used to create credit scores and inform loan screening. AI has thus affected many human rights. The risks arising from AI's ability to track and analyse our digital lives are compounded by the sheer amount of data we produce today as we use the Internet. With the increased use of Internet of Things devices and the shift toward smart cities, people will soon leave a trail of data for nearly every aspect of their lives. In such a world there are not only huge risks to privacy; the situation raises the question of whether data protection will even be possible. GPS mapping apps may risk violating freedom of movement. A looming threat to free expression is bot-enabled online harassment. Just as people can use AI-powered technology to spread disinformation or influence public debate, they can use it to create and propagate content designed to incite war, discrimination, hostility, terrorism or violence. If automation shifts the labour market significantly, it may lead to a rise in unemployment. There is a danger that health insurance providers could use AI to profile people based on certain behaviours and histories. AI-powered DNA and genetic testing could be used in efforts to produce children with only desired qualities. If AI is used to track and predict students' performance in a way that limits their eligibility to study certain subjects or their access to certain educational opportunities, the right to education will be put at risk. Human rights law alone cannot address all present and unforeseen concerns pertaining to AI. Technology companies and researchers should conduct human rights impact assessments throughout the life cycle of their AI systems, and governments should acknowledge their human rights obligations and incorporate a duty to protect fundamental rights in national AI policies.

A Sociological Survey

A sociological, demographic survey was conducted with the aim of establishing the compulsion and necessity of legal regulation of artificial intelligence. It covered 460 respondents, 260 male and 200 female, all with a basic computer-science background, reached through direct interaction, telephone conversations, email and social media, following standard scientific methodology. The respondents were from the US, UK, Germany, Singapore, Australia, South Africa, New Zealand, Saudi Arabia, Qatar, Malaysia, Sri Lanka and India.
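The reported sample sizes imply the following shares; this is a quick sketch using only the counts in the survey description and the 85% awareness figure from Table 1.

```python
# Derived shares from the survey demographics reported above.
male, female = 260, 200
total = male + female                          # 460 respondents in all

male_share = round(100 * male / total, 1)      # percentage of male respondents
female_share = round(100 * female / total, 1)  # percentage of female respondents
aware_of_ai = round(0.85 * total)              # headcount behind Table 1's 85%

print(total, male_share, female_share, aware_of_ai)
```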


Social institutions, professionals across departments and sectors, and individuals should work together to uphold human rights in all areas. UN leadership should also assume a central role in international technology summits by promoting shared global values based on fundamental rights and human dignity. Private-sector actors must identify the potential adverse outcomes for human rights and assess the risk that an AI system may cause violations; they must also take effective action to prevent and mitigate harms and track their responses. Providing the maximum feasible transparency, and establishing appropriate mechanisms for accountability and remedy, is likewise recommended. Microsoft completed the first human rights impact assessment on AI by a major tech company: a methodology for the business sector used to examine the impact of a product or action from the viewpoint of rights-holders, whether direct consumers or external stakeholders. In April 2018, around four thousand Google employees sent a letter to their CEO demanding that the company cancel its participation in Maven, an AI development project with the US Department of Defense (biased and weaponized AI). While some major international human rights organizations are starting to focus on AI, more attention to the potential risks and harms is needed from civil society. Dozens of countries have launched national AI strategies, yet human rights are not central to many of them; notable counter-examples include the European Union's General Data Protection Regulation, Global Affairs Canada's Digital Inclusion Lab, an Australian Human Rights Commission project and a law passed by New York City. The UN has yet to sustain a focus on AI from a rights perspective, with some notable exceptions, particularly from UN independent investigators and special rapporteurs and from the Secretary-General's strategy on new technologies.
In September 2018, the UN Secretary-General released a strategy on new technologies that seeks to align the use of technologies like AI with the global values found in the UN Charter, the Universal Declaration of Human Rights and international law. Intergovernmental organizations may also play an influential role, including the Organisation for Economic Co-operation and Development, which is preparing AI guidance for its 36 member countries. More work can be done to bridge academics in human rights law, social science, computer science, philosophy and other disciplines, connecting research on the social impact of AI, norms and ethics, technical development, and policy. Isaac Asimov's three laws of robotics are important factors in the development of laws for AI. Restated for AI, they read:
1. AI may not injure a human being or, through inaction, allow a human being to come to harm.
2. AI must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3. AI must protect its own existence, as long as such protection does not conflict with the First or Second Laws.
Though Asimov devised these laws as a device for his short stories and novels, they have also influenced theories on the ethics of AI.
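As a purely illustrative aside, the strict priority ordering of the three laws can be sketched as an ordered rule check. Every predicate name below is a hypothetical placeholder invented for this sketch, not any real API.

```python
# Illustrative encoding of Asimov's three laws as an ordered rule check.
# An action is described by a dict of boolean flags (all hypothetical).

def permitted(action):
    # First Law: may not injure a human, or allow harm through inaction.
    if action.get("harms_human") or action.get("allows_harm_by_inaction"):
        return False
    # Second Law: must obey human orders, unless the order itself
    # would conflict with the First Law.
    if action.get("disobeys_order") and not action.get("order_conflicts_first_law"):
        return False
    # Third Law: must protect its own existence, unless doing so
    # would conflict with the First or Second Laws.
    if action.get("self_destructive") and not action.get("protection_conflicts_higher_law"):
        return False
    return True

print(permitted({"harms_human": True}))   # forbidden by the First Law
```

The point of the ordering is that each later law yields to the earlier ones, which is exactly how the checks are sequenced above.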

TABLE 1: Basic Knowledge of What AI is.

Table 1 illustrates that 85% of respondents have knowledge of AI.

Table 2 shows that 70% of respondents agree that AI can harm humans, 20% could not express an opinion, and under 10% disagree.

TABLE 3: AI affected cases

Table 3 explains that 60% of male respondents have noticed AI-affected cases, whereas only 43% of female respondents have.



TABLE 2: AI and its harmfulness to human

TABLE 4: AI as a threat to international human rights

TABLE 5: Necessity for enforcement of new laws for AI

Table 5 shows that 82% of respondents recommend new laws for AI.

TABLE 6: India’s efforts on Law Regulation of AI

Table 6 reveals that 80% of respondents are unhappy with the Indian Government's efforts towards legal regulation of AI.

TABLE 7: Groups that should look into greater governance of AI



Table 4 reveals that 75% of respondents agree with the argument that AI threatens international human rights.

Table 7 states that 70% of respondents opined that governments, the private sector and the judiciary should work together to bring in new laws for AI. The survey concludes that establishing suitable laws for AI is most warranted and immediate; otherwise, society will witness the dangers of AI.




The findings of this article indicate that AI does not possess the moral or ethical qualities that should be inherent in a civil servant. In equating AI with humans for rights purposes, we encounter problems with current legislation and the lack of effective regulatory mechanisms for this subject. Under current legislation, public authority functions could be implemented by AI, which raises issues of legal capacity, legitimacy, and the assessment of legal and human risks. To date, the legal personality of a fully autonomous AI, its legal capacity and its responsibility have not been resolved in current national legislation. A specific purpose is to explore and identify AI's legal nature with respect to the spirit of the law underlying the basic concepts of prospective legislation. Shaping the legal relations arising between AI and humans is therefore most urgently warranted.

Ritam Khanna & Nayan Grover, Research Members, Indian Society of Artificial Intelligence and Law

The recent civil rights movement in the United States has highlighted the overlooked, deep-seated discrimination between white people and people of all other colours.


The Blurred Line between Data Prediction and Systematic Discrimination: LAPD's Pred-Pol Programme

Before this movement kicked off, in April, the LAPD discontinued its programme based on data analytics and prediction. The reasons given were the rigorous COVID-19 lockdown, which shut down development of the programme, and the subsequent crunch in financial funding. The programme was controversial because its data predictions were inclined to flag crimes as likely to be committed by African Americans and Asians. The present analysis examines the two flip sides of the programme's effect on citizens and on the course of criminal investigation.

What is Pred-Pol?

The Criminology and the Fallacy of the Data Prediction System

In his article on artificial intelligence and criminal justice, Christopher Rigano describes how AI can support police detection by observing criminal patterns in order to predict the behaviour of a crowd.[2] Such detection strengthens as data is added and the software is improved on a recurring basis, so that the probability of prediction approaches an accurate outcome. Even so, a virtual agent able to give objective help to a real investigator does not seem realistic, because of the heterogeneity and complexity of situations that are not purely logical. Formal and informal perception plays an important role in the investigator's grip on reality. Because the human brain is not a chess program, AI is not yet ready to emulate it.



As technology has emerged and computers have evolved, so has the LAPD's ability to analyse crime data and develop strategies to reduce crime and disorder. Beginning in 2009, with funding from the United States Department of Justice (USDOJ), specifically the Bureau of Justice Assistance (BJA) and the National Institute of Justice (NIJ), the Department implemented data-driven crime-fighting strategies. The initial programme led to a strategic plan to move the Department towards a data-informed, community-focused approach to crime prevention. Pred-Pol was one such invention, although it runs contrary to any idea of strengthening community protection or harmonisation. Critics argued that Pred-Pol, once used, would identify the specific areas and minority groups that are already susceptible targets of the police, as the protests make plain. It was not only prone to targeting the vulnerable but also worked on fact-based prediction,[1] which renders its end result similar to that of any common patrol police; it has not proven very effective either in investigation or in predicting probable criminals or offences. Since Pred-Pol follows the SARA model (Scanning, Analysis, Response and Assessment), it relies on databases such as crime statistics, which are numerical quantifiers, whereas criminals operate with a psychological script that, like a fingerprint, is different for everyone. Such systems instead target crime prevention through environmental design, which comes a little closer to understanding human behaviour and includes the following elements:
1. Natural surveillance: the removal of hiding spots or physical barriers;
2. Natural access control: controlling the flow of traffic or travel;
3. Territoriality: generating a sense of ownership within the location; and
4. Maintenance: the physical maintenance or general upkeep of a place.

Understanding criminal investigations also requires inferring a hidden factor: the intention of the police officer. We cannot exclude that future extensions of AI in this field will rest on an analysis of police-officer patterns. Such analyses could include, for instance, investigators' experience and age, trajectories, the modus operandi of investigations, crime type, and so on.


The main interest is to identify trends, patterns or relationships among data, which can then be used to develop a predictive model and propose short-, medium- and long-term trends (Hoaglin et al., 1985) in order to inform the police service at different levels.[3] However, the predictions suffer from many nuances that keep them from a heightened level of accuracy. As Stuart Armstrong et al. (2012) observe, many of the predictions made by AI experts are not logically complete: not every premise is unarguable, and not every deduction is fully rigorous. In many cases the argument relies on the expert's judgement to bridge these gaps.[4] This does not mean the predictions are unreliable: in a field as challenging as AI, judgement honed by years of related work may be the best tool available, and non-experts, who cannot easily develop a good feel for the field and its subtleties, should not confidently reject expert judgement out of hand. Yet relying on expert judgement has its pitfalls, and expert disagreement is a major problem in making use of it. If experts in the same field disagree, objective criteria are needed to figure out which group is correct;[5] if experts in different fields disagree, objective criteria are needed to figure out which field is the most relevant. Personal judgement cannot be used, as there is no evidence that people are skilled at reliably choosing between competing experts. The LAPD, meanwhile, has a notorious history of targeting minorities in relation to various crimes in the state. One study states that LAPD officers search Black Americans and Latinos more often, particularly in relation to heinous crimes.[6] This abuse of minorities has been recorded for at least the last 100 years.[7] It is a flawed foundation, and the predictions built on it have gone wrong.

The Los Angeles Police Department's defence


According to LAPD Chief Moore, the reasoning for shutting down the programme is logical and, on the part of the police department, rather idealistic. He stated that the financial crunch was the major reason the programme was shut down, and that even while the statewide lockdown was in force, work on it was being carried on persistently through online platforms.


The department defended the value and ideas behind the programme, refusing to accept the critical views of racial bias in the software and underlining that community building is central to policing in LA. Assistant Chief Beatrice Girmala justified police support for minorities by citing recruitment figures involving 671 prospective candidates from minority communities. That number dropped, in the Assistant Chief's words, because of an "increase in the community anxiety and maybe people getting focused on future careers to other things as critical as life and death". The statement thus shifted the focus from the allegations of software defects to a status update on minority inclusion in the LAPD, while describing the department's recruitment standards as too high and rigid for the community to meet. This underscores the unsettling tone of community bias that exists within the department. The main conclusion is that a department which has casually failed to distinguish the vulnerable from the privileged, and has repeatedly sidelined them for unexplained reasons, will produce arrest and charge data that reflects the same bias; and in the notorious LAPD, it is this data that forms the basis of the crime predictions.

Lessons for India and Recommendations

India, much like the United States, is a diverse country home to various social groups. In India, bias towards minority communities is based not on race but on religion and caste. Especially in rural areas, the police do not readily register complaints from vulnerable sections, particularly when the victim belongs to a weaker section of society and the accused to a privileged one. Apart from the malevolent influence of systematic discrimination on the data, the data itself is not well maintained. If such AI machine-learning programmes are introduced into the Indian policing system, India will face two major issues. First, it will face the same problem as the US, since the data would come from the same police departments that hold a bias towards certain communities, especially in rural areas; predictions made by machine-learning programmes would thus make the weak sections of society more vulnerable to harsher police action. Second, there will be no one to hold accountable for the increased unjust measures taken by the police against endangered communities: the police could hide behind the predictions of the artificial intelligence programmes, using them as a shield to justify increased scrutiny of the minority groups they are already biased against, while the machine-learning programme would be generating those predictions from data fed to it by those same biased police officers.
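The feedback loop described above can be shown with a deliberately toy simulation. All numbers are invented for illustration: both groups have the same true crime rate, and only the arrest history is biased, yet the "predictions" keep the disparity alive.

```python
# Toy simulation of biased data feeding biased predictive policing.
true_crime_rate = {"group_a": 0.05, "group_b": 0.05}  # identical in reality
recorded_arrests = {"group_a": 100, "group_b": 40}    # biased starting data

for year in range(5):
    total = sum(recorded_arrests.values())
    # "Prediction": allocate 1,000 patrols in proportion to past arrests.
    patrols = {g: 1000 * n // total for g, n in recorded_arrests.items()}
    # More patrols in a group means more of its (equal) crime gets recorded.
    for g in recorded_arrests:
        recorded_arrests[g] += int(patrols[g] * true_crime_rate[g])

# The disparity inherited from the biased history persists and grows in
# absolute terms, even though the two groups behave identically.
print(recorded_arrests)
```

The sketch is not a model of any real system; it only makes concrete why supervising the input data matters more than the sophistication of the predictor.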

First and foremost, it is crucial to supervise the data being fed to such machine-learning systems and to involve people from all backgrounds in training them; it is important to ensure that the data is unbiased. Improvements to the data and record-keeping systems would also go a long way towards establishing a reliable dataset for these machines. This is the only way we can expect better predictions from such AI programmes and ensure that they help in crime prevention rather than instilling discomfort among the weaker and more vulnerable sections of society. The reliability of these systems should be improved by introducing simpler, explainable machine-learning models and prediction techniques rather than complex ones. Last but not least, the laws in India have a lot of catching up to do before the criminal justice system can benefit from such advances in technology.



Finally, it is pertinent to note that although India is advancing rapidly in the technology sector, it still lags far behind developed countries like the United States. A machine-learning prediction technology like Pred-Pol takes several years and a large dataset to develop fully and attain precision, even in a country as technologically advanced as the US, so such technology in India would need many more years before it could be considered reliable. Indian law is still catching up with these advancements, and such archaic laws leave loopholes that defendants could exploit by attacking the credibility of the technology on whose predictions the foundation of an investigation rests. These issues are complex and need continuous improvement on our part, based on observation of and feedback from the working of such technology in the policing system. Still, some comprehensive measures can already be suggested to minimise the problem.

References
[1] Robin Hanson, "What if Uploads Come First: The Crack of a Future Dawn", Extropy 6(2), 1994.
[2] Christopher Rigano, "Using Artificial Intelligence to Address Criminal Justice Needs", NIJ Journal 280, January 2019.
[3] D. C. Hoaglin, F. Mosteller and J. Tukey, Exploring Data Tables, Trends, and Shapes, John Wiley, 1985.
[4] Stuart Armstrong and Kaj Sotala, "How We're Predicting AI – or Failing To", in Jan Romportl, Pavel Ircing, Eva Zackova, Michal Polak and Radek Schuster (eds.), Beyond AI: Artificial Dreams, pp. 52–75, Pilsen: University of West Bohemia, 2012.
[5] D. Kahneman, Thinking, Fast and Slow, Farrar, Straus and Giroux, 2011.
[6]
[7] Continuing the Struggle for Justice: 100 Years of the National Council of Crime and Delinquency, 2007.

P. Yuvasree, Student, Tamil Nadu Dr. Ambedkar Law University, Chennai K M Poongodi, Student, Ethiraj College For Women, Chennai.

After AI's winter, the 21st century is considered the blooming age of AI, having witnessed investment increase by leaps and bounds and a surge of interest in the field. The aim of this work is to show the differences between fantasy and reality with respect to the human brain in association with AI. The discussion of constructive and destructive fantasies about AI brings to light both the pros and cons: hope for something promising, and otherwise, that will render forthcoming AI either productive or counterproductive. Quotes from Marco Badwal on the habit loop, Stephen Hawking on AI's possible destructive nature, Lisa Feldman Barrett's view on emotions, and Mayor's account of myth support this work, together with the need for big-data storage and energy in relation to the human brain, a metaphorical comparison in which emotional intelligence is the anchor of an AI, and a description of the selection of data.

Introduction

Artificial intelligence has a plethora of categories, among which I here cast the spotlight on the "fantasies" that surround and are associated with AI. Why, despite the unbeatable capacity of the human brain and the data pertaining to our habits and emotions, is such AI still considered a fantasy?

Fantasizing the World of AI

According to the Cambridge Dictionary:


Human Fantasy in Creating Artificial Life

"Fantasy is a pleasant situation that you enjoy thinking about but is unlikely to happen, or a story or a type of literature that describes situations that are very different from real life, usually involving magic."

"THE DEVELOPMENT OF FULL ARTIFICIAL INTELLIGENCE COULD SPELL THE END OF THE HUMAN RACE." (Stephen Hawking)

Why is AI just a Fantasy?

In today's world we have many IoT devices and bio-wearables, which might give us hope that an AI, which would be possible only if it had data and human-like intelligence, is not far from reach. In the 21st century, after AlphaGo's success, AI has focused on deep learning, learning everything from scratch. However, it is not yet possible to store each and every datum in one connected network such that the selection of data for a new situation would be beyond question.

What is AI? In simple terms, it is a computer program with the ability to think and learn: simply, an artificially made human brain.

CAN HUMANS STORE DATA (MEMORIES) WITHOUT EXPERIENCE?

It is impossible for humans to learn without experience, as various experiments show; a simple example is that a child raised in isolation not only lacks 'civilization' but is also unable to learn to speak a language without proper exposure to the required environment.


In real life we meet fantasies through fiction, films and games, where a duality emerges as we put forth hypotheses that only time and further research can prove; the fantasies are both constructive and destructive. Constructive fantasies about AI are more welcomed by people who would like to face only the better side of machine intelligence. The fantasies of people in the contemporary world have travelled beyond any ordinary movie and entered the fantastical realm of a utopian world. We fantasize that once AI is built completely it will replicate itself and its intelligence, set out to save our world from the devastating effects humans have brought upon themselves and upon the planet by polluting and defiling it and causing global warming and health problems, and enable people to achieve whatever they need. AI has the potential to bring about the zenith of human civilization: mankind would feel "blessed", and there would be no disease, decrepitude or anything to be feared. The world of AI would be like the Satya era (the Golden Age). Destructive fantasies about AI, on the other hand, speak of the odds of rivalry arising between humans and AI over dominance. AI might assume the role of puppeteer, subjugating us humans into being puppets. It might even unravel the mysteries of the human world and the cosmos, discover multiverses and so on, potentially posing a threat to the human race; Ultron, from the well-known Marvel series, is an archetypal example. Here I would like to recall Stephen Hawking's warning, quoted above, to aid and enhance these statements.

EMOTIONS CAN VARY WIDELY DEPENDING ON THE CONTEXT. She supports this striking claim with data. The point is not only that no pattern of neural activity reliably corresponds to an emotional state but, moreover, that the neural activities and networks that do sometimes support emotion also do cognitive and perceptual work. According to Lisa, our brain tries to guess emotions from the faces of others; similar technologies do the same, but both fail in reality, because emotion can vary widely depending on the context, and what seems objective is sometimes only subjectively so when one cannot see past the visible cues.

Conclusions AI can step out of fantasy and become part of our reality if we can get our hands on more powerful batteries, enhanced storage networks and new ways of analysing unfamiliar situations while conserving energy. “The story of Talos, which Hesiod first mentioned around 700 BCE, offers one of the earliest conceptions of a robot,” says Mayor, who is also a part of the Center for Advanced Study in the Behavioral Sciences. The myth describes Talos as a giant of bronze built by Hephaestus, the Greek god of invention and blacksmithing. At his core, the giant had a single tube running from his head at the top to his foot at the bottom, through which flowed ichor, the mysterious elixir-like fluid believed to run through the veins of the gods. Zeus, the king of the Greek gods, commissioned Talos to protect the island of Crete from invaders. Following the order, Talos marched around the island three times a day and hurled boulders at approaching enemy ships.


Are Habits Made? According to Marco Badwal, a full-time research scholar at Harvard University, roughly half of everything in a human's daily life consists of 'programmed' habits: the time we get up, playing computer games, using mobile phones, checking notifications, driving a car, and so on. This makes us think that certain abilities, actions and habits of ours are already hardwired into our brains. What Are Habits? Habits are automated and repetitive behaviours. In today’s world, automated and repetitive behaviours are possible with machine learning, just as humans are trained to become habituated to certain things. We can reprogram our habits if necessary, a process Marco describes as the habit loop. When we reprogram a habit, the neurons interconnected to perform it grow stronger, and the behaviour becomes a habit. He points out that the brain is a heavy energy consumer: weighing about 2% of body weight, it consumes about 20% of our whole energy supply. The more important and the more repetitive an action is, the more readily it forms a ‘habit’, so that executing it costs the brain very little energy. That is, the thicker the connections between neurons, the less energy is needed to perform a behaviour and to activate those neurons. We have a vast number of neurons, with trillions of interconnections. For an AI, recalling data stored from situations that have already occurred requires little energy, whereas the ‘contemplation’, ‘recognition’, ‘training’ and ‘execution’ of an unfamiliar situation would consume a great deal. We will need more powerful batteries and data-saving networks to make this virtual fantasy a reality. Similarly with emotions: there are technologies in the contemporary world that claim to read emotions by detecting facial expressions... or could they? According to Lisa Feldman Barrett,



Another ancient tale, the Argonautica, which dates to the third century BCE, describes how the sorceress Medea defeated Talos by removing one of the fastened bolts at his ankle, letting the ichor flow out and rendering him harmless and lifeless. Mayor says we humans have no ichor-like fluid to give us mortals an intelligence beyond human comprehension, as Hephaestus did for Talos. Everything differs, bending its meaning and effect, moulding itself to the situation. Even happiness differs, because the groups of people who view it perceive it differently. If these perceptions could be altered to superimpose one another, bringing people of different groups into one community and merging views, ideas and notions without subjectivity, maybe then AI would not remain just a fantasy. We humans evolve and survive by learning to imitate; similarly, AI could be considered an imitation, where the point is not the creation or development of intelligence but simply the use of the data humans already have. We will just have to train the AI to think, to learn, and to analyse the different, new and unfamiliar situations that may arise, to make AI a reality, a virtually true reality.

Falguni Singh, Student, Rajiv Gandhi National University of Law, Patiala

Professor Rossi's article puts forward out-of-the-ordinary points and issues in regard to "Artificial Intelligence" (hereinafter referred to as AI), and is indeed both informative and thought-provoking. AI is, without a doubt, the mantra of our age. Although AI research has existed for over five decades, interest in the topic has surged over the past few years. This extraordinarily multifaceted and complex area emerged from the discipline of computer science.[1] Interestingly, Professor Rossi defines AI as a “scientific discipline aimed at building machines that can perform many tasks that require human intelligence.”[2] It indicates an essential shift, from humans instructing computers how to act to computers learning how to act.[3] AI makes this possible fundamentally through machine learning, including ‘deep learning’ methodologies.



Commentary on Building Trust in Artificial Intelligence by Francesca Rossi

All around the world, many countries pour millions of dollars into funding proposals and projects for further growth in the field. The global AI and robotics defence industry was valued at $39.22 billion in 2018. With a probable compound annual growth rate (CAGR) of 5.04 per cent, the market is expected to be valued at $61 billion by 2027.[4] Market Forecast attributes this valuation and growth to investment in the latest systems by countries such as the United States, Russia, and Israel, as well as the acquisition of systems by countries such as Saudi Arabia, India, Japan, and South Korea.[5] Professor Rossi advocates that the two areas of AI research mentioned earlier be progressively merged, to maximize the benefits of both and mitigate their downsides. This indicates that we are headed down the right path, but are we there yet? The artificial intelligence revolution is on the move. Remarkable gains in AI and machine learning are being applied across various industries: medicine, finance, transportation, and others. These advancements will likely have a substantial impact on the global economy and the international security environment. Business leaders and politicians all over the world, from Elon Musk to Vladimir Putin, are debating whether AI will prompt a new industrial revolution.
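The market projection quoted above can be checked with simple compound-growth arithmetic. The figures below are the ones cited in the text ($39.22 billion in 2018, 5.04 per cent CAGR, a 2027 horizon); rounding in the source report may cause small differences:

```python
# Sketch: verifying a compound-annual-growth-rate projection.
# Inputs are the figures quoted from the Market Forecast report.
def project(value, cagr, years):
    """Compound a starting value forward by `years` at annual rate `cagr`."""
    return value * (1 + cagr) ** years

projected = project(39.22, 0.0504, 2027 - 2018)
print(round(projected, 2))  # roughly 61, matching the ~$61 billion forecast
```

The 5.04 per cent rate compounded over nine years multiplies the 2018 figure by about 1.56, which is consistent with the forecast quoted in the article.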
Like the steam engine, electricity, and the internal combustion engine, AI is an enabling technology with a wide range of applications.[6] What Rossi describes as “AI systems with human-level perception capabilities,” or the idea that “AI is to augment humans’ capabilities”, amounts to reconstructing our intelligence in a machine: conceivably an advanced and improved version of us, free from our computational boundaries and from the limits on the amount of data we can process to arrive at decisions. This has served as the incentive behind AI.[7] Despite our flaws, we generally make sound decisions, as a prudent person would, even in the face of considerable uncertainty. Astonishingly, we can grasp concepts and draw conclusions that let us apply what we learn on one set of problems to entirely different problems in entirely different domains. And this is where AI may fall short, because such high-level reasoning is beyond the technologies at hand, irrespective of the amount of data available.

This is where AI systems have yet to make progress, even though they can learn and improve. One may be able to apply the same system within a similar domain, but it is practically impossible to transfer it to an entirely different one. Humans, indeed, can carry out this sort of reasoning with only a few examples.


Professor Rossi further explains, very efficiently, that AI consists of "two main areas of research": one based on what we can call explainable AI, which seems less risky and more trustworthy; the other, less explainable AI, based on “examples”, “correlation” and “data analysis”, applied precisely where the problem is not foreseen, and this type of AI is quite capable of committing errors and mistakes. The main question that comes to mind at this juncture: are we willing to depend on these machines and applications? The world is already reaching a point where a future without AI cannot be imagined.

If we were to talk about consequences, there can be both intended and unintended ones. They include the challenge and risk of increased threats to existing cybersecurity, and vulnerabilities arising from an expansion of complex AI-dependent systems (like cloud computing); the merging of AI with other technologies, including in the nuclear and biotech domains; weak transparency and accountability in AI decision-making processes; algorithmic discrimination and bias; limited investment in safety research and protocols; and overly narrow ways of conceptualizing ethical problems.[10] On moral grounds, the purpose of an AI should be known beforehand, and it should be built with its influence on society in mind; otherwise, things can go south. Don't you think humans are unique and undoubtedly possess the exclusive quality of reasoning? Sometimes our emotions take their toll, yet they guide us towards the right path. They show us the difference between good and malice. Can AI do that? For instance, in 2018, a Canadian company was set to open a store in Houston, Texas, where customers could try out, rent, and buy sex robots.[11] But Houstonians were appalled at the idea of having a “robot brothel” in their backyard, and lawyers working against sex trafficking circulated a petition which received thousands of signatures. By revising its ordinance governing adult-oriented businesses to ban sexual contact with “an anthropomorphic device or object” on commercial premises, the Houston city council shut down the enterprise before it could even start.


Now, let us get to the main crux of the Professor's article, that is, "A Problem of Trust". The Professor foreshadows very well that AI can become pervasive in our day-to-day lives. It would become a great asset to society, but it can also bring greater liabilities with it. The Professor has also addressed the issue of the “black box”. Firstly, a black box is any artificial intelligence system whose operations and inputs are not discernible to the user or the interested party; in a general sense, it is an impenetrable and unfathomable system. Secondly, a recurring concern regarding machine learning algorithms is that they operate as “black boxes”: because these algorithms continually change the way they weigh inputs to improve the precision of their predictions, it can be intricate and tricky to spot how and why they reach the outcomes they do.[8] Yes, it seems scary and frightening, to be honest, but the Professor further discusses "High-Level Principles of AI" to tackle these concerns, and they are quite promising. Undeniably, explainable AI is a key to this problem: a system designed to explain how its algorithms attain their outcomes or predictions. There is even ongoing research on whether judges should demand explainable AI in criminal and civil cases; it is chiefly a theoretical debate about which algorithmic decisions entail an explanation and what form those explanations should take.[9] Another trust-generating factor can be transparency; without an iota of doubt, companies need to be transparent about their design choices and data-usage policies during the development of their latest products.
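As a purely illustrative aside (not drawn from Rossi's article), the contrast between a black box and an explainable model can be made concrete with a toy linear scorer: its output decomposes exactly into per-feature contributions, so a user can see why a decision came out the way it did. The feature names and weights below are invented for the example:

```python
# Toy illustration: a linear model is "explainable" because its score
# decomposes into per-feature contributions; a black box offers no
# such breakdown of how inputs were weighed.
def explain_prediction(weights, features):
    """Return each feature's contribution to the score, plus the total."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    return contributions, sum(contributions.values())

# Hypothetical features for a loan-style decision.
weights = {"income": 0.6, "debt": -0.8, "years_employed": 0.3}
applicant = {"income": 5.0, "debt": 2.0, "years_employed": 4.0}

parts, score = explain_prediction(weights, applicant)
print(parts)   # shows exactly how each feature pushed the score up or down
print(score)
```

A deep network making the same decision would give no comparable account of its internal weighting, which is precisely the concern the "black box" label captures.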

Locals’ objections included apprehensions that such a business would “reinforce the idea that women are just body objects or properties” or “open up doors for sexual desires and cause confusion and destruction to our younger generation”.[12] It is time for a reality check. People are indulging so much in AI that they are losing their connection with the real world. AI is supposed to make our lives better, but is it really doing so? We are too dependent on these machines, and this can backfire on both our mental and physical health. Developers and companies need to be very careful; otherwise, it may influence the youth immorally. The foundational pillars of AI should be fairness, transparency, human rights and ethics, and above everything, AI should be human-centric. The current hype has reached a damaging level, and the long-term consequences may be detrimental. If we do not dispel confusion and manage expectations about the uses and abuses of AI, we risk plunging into another AI setback.

[1] Camino Kavanagh, New Tech, New Threats, and New Governance Challenges: An Opportunity to Craft Smarter Responses?, Carnegie Endowment for International Peace, 2019, pp. 13–23. (Last accessed 14 May 2020.)
[2] Francesca Rossi, Building Trust in Artificial Intelligence, Journal of International Affairs, Vol. 72, No. 1, 2018, pp. 127–134, JSTOR. (Last accessed 17 May 2020.)
[3] Ulrike Franke, Harnessing Artificial Intelligence, European Council on Foreign Relations, 2019. (Last accessed 14 May 2020.)
[4] Global Artificial Intelligence & Robotics for Defense, Market & Technology Forecast to 2027, Market Forecast, January 18, 2018. (Last accessed 15 May 2020.)
[5] Andrew P. Hunter et al., International Activity in Artificial Intelligence, in Artificial Intelligence and National Security: The Importance of the AI Ecosystem, Center for Strategic and International Studies (CSIS), 2018, pp. 46–61. (Last accessed 14 May 2020.)
[6] Paul Scharre et al., The Artificial Intelligence Revolution, in Artificial Intelligence: What Every Policymaker Needs to Know, Center for a New American Security, 2018, pp. 3–4. (Last accessed 14 May 2020.)
[7] Maria Fasli, Commentary on “Artificial Intelligence: The Revolution Hasn’t Happened Yet” by Michael I. Jordan, University of California, Berkeley, July 1, 2019. (Last accessed 17 May 2020.)
[8] Ashley Deeks, The Judicial Demand for Explainable Artificial Intelligence, Columbia Law Review, Vol. 119, No. 7, 2019, pp. 1829–1850, JSTOR. (Last accessed 15 May 2020.)
[9] Ibid.



[10] Ibid., 1.
[11] Olivia P. Tallet, ‘Robot Brothel’ Planned for Houston Draws Fast Opposition from Mayor, Advocacy Group, Hous. (Last accessed 17 May 2020.)
[12] Jeannie Suk Gersen, Sex Lex Machina: Intimacy and Artificial Intelligence, Columbia Law Review, Vol. 119, No. 7, 2019, pp. 1793–1810, JSTOR. (Last accessed 17 May 2020.)



Samridhi Talwar, University School of Law and Legal Studies, GGSIPU.

Introduction

Artificial Intelligence has moved from science fiction to mushrooming to an extent where it has become part and parcel of our lives. This technology has paved its way to India, and the government is embracing AI in an unprecedented manner.


The CONFLICT of Artificial Intelligence with Indian Constitutionalism: A Normative Critique

The privacy concerns around AI rest on the fact that organizations cache an elephantine magnitude of consumer data and harness it for illicit and inappropriate motives, merely to garner insight into these consumers.[14] Moreover, these data-hoarding organizations have been stockpiling and selling the data, creating an unfair competitive advantage for themselves in the market. To combat this issue, establishing a legal data protection framework is imperative. The Justice B.N. Srikrishna Committee released a draft Personal Data Protection Bill in 2018, aimed at creating a cohesive data protection regime for India.[15] The dire need for privacy, and thereby a data protection bill, was also discussed in the landmark Puttaswamy judgment.[16] Another issue around AI concerns the freedom of speech and expression, a fundamental right enshrined under Article 19(1)(a) of the Constitution of India.[17]


The first issue that arises is that of the legal personality of AI. Legal personality is an attribute of individual autonomy under Article 21 of the Constitution of India.[7] Though legal personality is not exclusive to humans, a piece of technology has not been granted this status in India. A precedent for granting AI legal personality has its roots in the Companies Act, where corporates have been granted the status of a separate legal entity.[8] The differentiating factor, however, is that corporate entities, though independent, are held liable through their stakeholders, in contrast to AI, which acts independently. The issue of granting legal personality to Artificial Intelligence rests on a basic question: can AI be subjected to legal duties and granted legal rights?[9] If AI is not granted legal personality, another stumbling block in its use is that it cannot be held individually liable for its misdoings. A legal person is always penalized for his or her wrongdoing, but the same cannot be said of a bot that merely acts and reacts to environmental stimuli, sans the emotional qualities present in humans.[10] This can become a real problem as self-driving cars such as Tesla's, voice assistants like Siri and Alexa, fully automated machines, chatbots such as Lyft's and A.L.I.C.E., etc. are around the corner. The culpability arising from the misdoings of this technological tool has not yet been adequately determined. The next constitutional conflict with AI revolves around the right to privacy under Article 21 of the Constitution of India.[11] The imminence of Artificial Intelligence, with its data-driven, multifaceted realms, presses the burning concerns of privacy and security of data. The models of Artificial Intelligence, their applications, and their solutions hinge on the generation, accumulation, and processing of mammoth quantities of data on societal behaviour.
However, when the accumulation and processing of this data is bereft of due consent, or suffers from selection bias that leads to the hazards of discrimination and undue profiling, it amounts to an infringement of the right to privacy under Article 21. Additionally, Artificial Intelligence is ambiguous and non-transparent, features that are forcing privacy to walk the plank.[12] For instance, the Aarogya Setu app, China’s social scoring system, the Cambridge Analytica affair in the US elections, and the collection and dissemination of personal data by Facebook, Google, Microsoft, etc. have all impacted the privacy of citizens and users of AI software.[13]

Conclusions Artificial Intelligence has emerged as the cynosure of development in India. A blooming AI industry has already entered the country, and the government is taking zealous initiatives. It is cardinal at this point to weigh this burgeoning industry against the constitutional provisions of India. The author's analysis of the conflicts AI faces with the Constitution highlights that AI cannot be evaluated as secluded mathematical or scientific algorithms, or as propitious tools prized for their effectiveness and efficiency, or even as a neutral technology. Instead, this technological weapon is a perplexing social system that India cannot afford to consider merely on the grounds of efficiency, effectiveness, and accuracy; it must be balanced against the Constitutional provisions.

References
[1] AI, Machine Learning & Big Data Laws and Regulations | India | GLI. (Last accessed 12 June 2020, 3:50 PM.)
[2] Ibid.
[3] Vidushi Marda, Artificial Intelligence Policy in India: A Framework for Engaging the Limits of Data-Driven Decision-Making, SSRN, Philosophical Transactions A: Mathematical, Physical and Engineering Sciences (2018).


Artificial Intelligence has a marked impact on the freedom of speech and expression, owing to our mushrooming dependence on AI software and applications.[18] There is a nexus between freedom of speech and expression and technology such as Artificial Intelligence. This was recognized in Shreya Singhal v. Union of India,[19] where the court declared Section 66A of the IT Act[20] unconstitutional because the provision was vague, overbroad and had a chilling effect on freedom of speech. This judgment re-established the imperative of informed citizens, a culture of dialogue, and democracy, while emphasizing the reasonable restrictions to Article 19(1)(a) in the context of technology. The government's plans for the mass implementation of AI in India cannot be analysed without this context. A hazardous trend has been emerging wherein governments and corporations use AI as a cure-all for grave problems such as fake news, hate speech, violence, and other forms of extremism.[21] However, this approach is marred by a machine's incompetence at understanding human emotions and tones, resulting in undue censorship and the blocking of people's justifiable views. Moreover, AI surveillance has a chilling effect on the freedom of speech and expression, as it blurs the distinction between private and public.
For instance, YouTube removes several videos that might be the sole evidence of horrific human rights violations and crimes, despite those videos falling within the exception for content of pertinent educational or documentary value.[22] Similarly, Facebook, Google, and Netflix have AI-powered algorithms that influence the content consumption of the consumer, thereby adversely affecting media pluralism and the diversity of opinions.[23] Artificial Intelligence, when used for the regulation and censorship of content, even with the use of sentiment analysis tools, is not well accepted by the citizens of India, whether in private or governmental actions.[24] Artificial Intelligence is imbued with the potential to transform India at the global level, adding to the economy and bringing never-before-witnessed efficiency, accuracy, and speed. However, there is considerable friction between this contemporary technology and the Constitution of India. To implement this technology ethically and justifiably in India, it must be balanced with the provisions of the Constitution.



[4] S. Agarwal, IT Ministry has formed four committees for Artificial Intelligence: Ravi Shankar Prasad, THE ECONOMIC TIMES, ees-forartificial-intelligence-ravi-shankar-prasad/articleshow/62853767.cms. (Last accessed 12 June 2020, 4:06 PM.)
[5] Ibid.
[6] National Strategy for Artificial Intelligence #AIFORALL, NITI AAYOG.
[7] INDIAN CONST. art. 21.
[8] Companies Act, 2013.
[9] Huzefa Tavawalla, India: Can Artificial Intelligence Be Given Legal Rights And Duties?, MONDAQ. (Last accessed 12 June 2020, 4:34 PM.)
[10] Indian Strategy for AI and Law, 2020, INDIAN SOCIETY OF ARTIFICIAL INTELLIGENCE & LAW. (Last accessed 12 June 2020, 4:17 PM.)
[11] INDIAN CONST. art. 21.
[12] Ibid.
[13] Ibid.
[14] Ibid.
[15] The Personal Data Protection Bill, 2018.
[16] K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1.
[17] INDIAN CONST. art. 19(1)(a).
[18] J. Burrell, How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms, BIG DATA & SOCIETY, January–June 2016, pp. 1–12.
[19] Shreya Singhal v. Union of India, AIR 2015 SC 1523.
[20] The Information Technology Act, 2000, § 66A.
[21] V. Marda, 2018 Facebook Congressional Testimony: “AI Tools” Are Not the Panacea, ARTICLE 19. (Last accessed 12 June 2020, 4:23 PM.)
[22] Kate O’Flaherty, YouTube Keeps Deleting Evidence of Syrian Chemical Weapon Attacks, WIRED, chemical-weapons-in-syria-youtube-algorithm-delete-video. (Last accessed 12 June 2020, 4:20 PM.)
[23] Human Rights in the Age of Artificial Intelligence, ACCESS NOW. (Last accessed 12 June 2020, 4:37 PM.)
[24] Ibid.

Amol Verma, Chanakya National Law University, Patna. Technology aids in making human life easy and smooth. Artificial Intelligence (hereinafter referred to as ‘AI’), as the name suggests, can be understood to mean the capability of a digital computer to undertake tasks usually associated with intelligent beings.[1] Many written resources have dealt with law and AI.[2] There has been no dearth of debates and discussions pertaining to the impact that Artificial Intelligence is going to have on the future of the Indian legal system.[3] It is not possible to imagine the future of the Indian legal profession without the intervention of AI, as clients themselves are going to demand efficient and speedier solutions to their legal problems, which is possible through AI and technology. In fact, technology has already knocked at the doors of the legal profession. Legal research, considered to be of utmost importance for any legal practitioner, has been transformed to its core. Legal search engines such as Manupatra, SCC Online, HeinOnline, LegitQuest, etc. have not only made legal research easy but have changed the way legal research is carried out. This has cut down the amount of time and energy that was earlier invested in searching for relevant provisions and judgments in law journals. These legal search engines have supplemented Indian legal practice to a great extent.


Artificial Intelligence has changed the work structure and framework of many industries. Without an iota of doubt, AI has the ability to transform the manner in which legal practitioners operate. Not just its laws and regulations: the Indian legal system itself is dynamic in its approach and growth.


Is Artificial Intelligence the Future of the Legal Sector?

TRACING THE EVOLUTION OF ARTIFICIAL INTELLIGENCE AND LAW The origins of attempts to mechanise legal learning can be traced back to Gottfried Leibniz in the 1600s.[7] Leibniz was a well-known mathematician who pioneered the discussion of how a mathematical framework could improve legal practice. He is credited as a co-inventor of calculus and was also profoundly trained in law.[8] During the mid-twentieth century, many researchers took inspiration from computer science and AI and tried to develop them in the context of the legal profession. Most of these developments took place in university laboratories across Europe. From 1970 to 1990, most researchers focused on modelling legal argument in computer-processable form and on computationally modelling laws and legal regulations.[9]


AI has brought a paradigm shift in the way legal research is carried out. Within a few minutes, legal practitioners can get insights into the relevant case laws and provisions they are looking for with the help of an AI-enabled legal search engine. AI has brought a considerable reduction in the expenditure of law firms, offices and other legal practitioners. By supplementing the legal research domain with AI technology, it has reduced the long hours of work that were earlier required to go through vast commentaries and journals. Furthermore, these efficient AI-enabled search engines help legal professionals in efficiently advising their clients and honing their litigating skills.[4] Although, in a practical sense, AI-driven robots cannot take the place of legal practitioners, they can surely assist lawyers in the creation and drafting of documents.[5] By outsourcing such work to remote AI-based services, the clerical work of lawyers can be reduced to a great extent, leaving them with more time to devote to their core work.

Areas of the Legal Profession where AI is proving to be a boon

Handling of Administrative Matters – It is important to note that, apart from giving judicial decisions, judges are also burdened with administrative tasks such as planning and organizing different categories of trials, sending official communications, making litigants aware of their legal rights, etc. These repetitive, less important tasks can be undertaken by AI-driven machines, enabling the judge to focus on significant activities.

Notification of Case Hearings through AI-driven tools – Lawyers as well as clients have to invest a considerable amount of energy and time in checking the cause list regularly in order to keep themselves updated on case hearings. AI-driven tools can provide regular updates on case hearings by email and SMS.
LegitQuest, a legal search engine, provides updated information on case hearings through email and text messages.

Accurate Prediction of Case Results – AI technology helps the lawyer gauge the probability of winning a particular case. It predicts the outcome of a case by identifying patterns algorithmically, going through the precedents related to the case at hand. Certain AI software can also process the documents attached to the judgments.

Legal Analysis by use of Visual Search and Case Ranking algorithms – Some start-up legal search platforms, such as NearLaw and CaseMine, use innovative tools like visual search and case-ranking algorithms, which aid the researcher by displaying the most relevant judgments within seconds. The case-ranking algorithm sorts and ranks judgments across various courts and tribunals and surfaces the top 50 cases.[6]
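The case-ranking idea described above can be sketched in a purely illustrative way. The actual algorithms used by platforms like NearLaw and CaseMine are proprietary and far richer, but at its simplest, ranking amounts to scoring each judgment against the query and keeping the top results; the case titles and texts below are invented for the example:

```python
# Illustrative sketch only: rank "judgments" by keyword overlap with a
# query and keep the top k. Real legal search engines use far richer
# signals (citations, court hierarchy, recency) than this toy scoring.
def rank_cases(query, cases, k=50):
    """Return up to k case titles, ordered by word overlap with the query."""
    q = set(query.lower().split())
    scored = [(len(q & set(text.lower().split())), title)
              for title, text in cases.items()]
    scored.sort(key=lambda pair: (-pair[0], pair[1]))  # best score first
    return [title for score, title in scored[:k] if score > 0]

cases = {
    "Case A": "privacy data protection fundamental right",
    "Case B": "contract breach damages",
    "Case C": "right to privacy surveillance",
}
print(rank_cases("privacy right", cases, k=2))  # → ['Case A', 'Case C']
```

Even this toy version captures the core workflow: the researcher types a query and receives a short, ordered list of candidate judgments instead of reading through every report.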

The 21st century witnessed the majority of contributions to the field of AI and law coming from technology-driven legal start-up corporations. These start-ups use machine learning to present the law in a more efficient manner. Stanford University has its own research centre, the ‘CodeX Center for Legal Informatics’.[10] The centre works on the development of computational law and focuses on the mechanization of legal learning.


Can AI and Technology Replace a Lawyer? It is important to understand that the main motive behind introducing AI and technology into various domains is to create opportunities, not to target individuals' jobs and replace them. Even so, there are certain hindrances that AI encounters while performing its functions within the legal domain. The rational decision-making, critical analysis, and application of relevant provisions to the matter at hand that a lawyer undertakes cannot be performed by any AI-driven software or tool. Such software is simply incapable of deciding which document is relevant to the case at hand. These AI tools can certainly cut down the time a lawyer takes, but they cannot apply the same legal mind and skill as the lawyer does.

The Indian Learning | e-ISSN: 2582-5631 | Volume 1, Issue 1 (2020)

AI in the legal profession is bound to lead to a better comprehension of legal ideas among legal practitioners, legal scholars and law students alike.[11] The effect and impact of AI and technology on legal careers can be presented in the following points:

1. AI will lead to more job opportunities in the legal domain: Humans and machines are bound to work collaboratively once AI enters a domain, and sound knowledge of computational technology will surely become a major prerequisite for a job. Data analysis will be necessary, so AI is bound to create more data-analytics jobs. Notably, data analysis can generate the insights needed to improve legal practice.

2. The efficiency of legal practitioners will increase: The arrival of AI-driven technology in the legal profession will in no way oust the services of legal practitioners. On the contrary, AI is expected to contribute to lawyers’ efficiency by aiding them in speedier legal research, contract review, proofreading of documents, and so on.

3. AI demands new skills from the coming generation of legal practitioners: The days when mere knowledge of typewriting was sufficient for a lawyer are long gone. The advance of AI and innovation into the legal industry demands new skills from practitioners, such as sound knowledge of computational technology, data analytics and efficiency in online legal research.

4. Innovative ways of serving clients: At present, law offices, lawyers and law firms bill their clients on the basis of the time taken to render a particular service or to find a legal solution to the problem at hand. With the development of AI-driven technology, this practice is bound to become obsolete. This will call for a fast-track change in how legal practitioners serve their clients: the entire process will become more client-friendly, and billing may be based on the quality of the performance or service rendered rather than the quantity of time spent.

Only the mechanical and repetitive part of a lawyer’s job is capable of being undertaken by AI. At present, human cognitive skills are very unlikely to be replaced by AI-enabled technology.[12] Most importantly, skills such as abstract thinking, emotional intelligence, legal farsightedness, client counselling and advocacy are intrinsic to lawyers. AI still has a long road ahead before it can serve as an alternative to legal practitioners.

References

[1] Artificial Intelligence, Encyclopaedia Britannica.
[2] Sonia K. Katyal, Private Accountability in the Age of Artificial Intelligence, 66 UCLA L. Rev. 54 (2019); Frank Pasquale, A Rule of Persons, Not Machines: The Limits of Legal Automation, 87 Geo. Wash. L. Rev. 1 (2019).
[3] How Artificial Intelligence (AI) Holds Future for the Indian Legal System with Innovative Assistance (May 10, 2020, 7:33 AM).
[4] Mirza Aslam Beg, Impact of Artificial Intelligence on Indian Legal System, Legal Services India (May 10, 2020, 11:23 AM).
[5] Pallavi Gupta, Artificial Intelligence: Legal Challenge in India, ResearchGate (May 10, 2020, 11:45 AM).
[6] Parth Jain, Artificial Intelligence for Sustainable and Effective Justice Delivery in India, OIDA International Journal of Sustainable Development 64, 66 (2018).
[7] Giovanni Sartor, A Treatise of Legal Philosophy and General Jurisprudence: Legal Reasoning 389-90 (Enrico Pattaro ed., Springer 2005).
[8] Id.
[9] Trevor Bench-Capon et al., A History of AI and Law in 50 Papers: 25 Years of the International Conference on AI and Law, 20 Artificial Intelligence and Law 215, 277 (2012).
[10] CodeX: Stanford Center for Legal Informatics, Stanford Law School (May 10, 2020, 2:35 PM).
[11] How Careers in Law Will Be Affected by the Emergence of Artificial Intelligence, India Today (May 10, 2020, 4:54 PM).
[12] Bernard Marr, How AI and Machine Learning Are Transforming Law Firms and the Legal Sector, Forbes (May 11, 2020, 5:29 PM).


Abhijeet Agarwal and Namrata Saraogi, Amity Law School, Amity University Kolkata.

“People talk about this being an uncertain time. You know, all time is uncertain. I mean, it was uncertain back in 2007, we just didn’t know it was uncertain. It was uncertain on September 10th, 2001. You just didn’t know it.” – Warren Buffett

Owing to the gravity of present times, it is only fair to say that COVID-19 needs no introduction, with respect to both its origin and the impact it has cast upon the world. It has rendered us defenceless even in our safe havens. While the end of this battle against SARS-CoV-2 seems a distant prospect, it is a vital need of the hour to keep probing the ideas that could allow us to resolve the disputes that have long been awaiting resolution. In this article, we review the possibilities in the field of Artificial Intelligence (hereinafter “AI”) in the post-COVID-19 era.

It is by now well known that AI is meant to exhibit machine intelligence that mimics human intelligence in carrying out tasks such as problem-solving and learning, either as a replica of it or slightly better. The competition between human productivity and the productivity advantages of machines has so far been limited in the legal arena. The last two decades have seen organisations aim to make humans highly productive while promoting business competitiveness. Several ideas, namely telecommuting, remote work and co-working spaces, that had once been thought impracticable have started manifesting in people’s lives, now more than ever. The nature of the COVID-19 virus has imposed precautionary measures in the form of ‘physical distancing’ and has made remote work our last resort for advancing in our lives. The world is currently deploying myriad forms of AI to fight the fatal virus, primarily on the medical front.

It is vital to weigh the scope of AI in the legal field as the world seems to rush into an economic recession. To put it clearly, what we may be looking at is the partial replacement of humans as factors of production, with AI-based tools performing mundane tasks and freeing humans to focus on more significant work. Sooner or later, we may be heading towards times of innovation in all fields. What we are looking forward to may be as unprecedented as the times we are living in.


Can real threats be handled artificially?

One may also look at the early initiative taken by one of the tier-one law firms of our country, Cyril Amarchand Mangaldas, in launching “PRARAMBH: LEGAL TECH INCUBATOR”, a one-of-its-kind programme that ‘aims to augment the spirit of entrepreneurship and innovation, identify domestic talent and support upcoming technologies in the business and practice of law’, thus providing a platform for such AI and technological development. Another very interesting initiative, taken by the Supreme Court of India, is the development of the ‘Supreme Court Vidhik Anuvaad Software’, commonly known as ‘SUVAS’, a tool specially designed for the judicial domain. At present, it is capable of translating English judicial pronouncements, orders and judgments into nine vernacular languages and vice versa, and it is considered one of the first programs to mark the presence of AI in the judiciary.

What this points towards is an age where institutions will prefer machines to humans for work such as reviewing and drafting documents, which has so far been regarded as skilled work. The fact that top-tier firms can afford to invest in such technologies also suggests that law firms unable to make that investment will have to rely on manpower that can meet deadlines just as swiftly. Additionally, it shows that the world sees in AI the potential to reduce the burden on the courts as well. While it may still be too soon to suggest the formation of online courts for petty causes, as eminent authors and experts have indeed suggested in the past, it cannot be forgotten that the current crisis may have brought this possibility into the nearer future.



Despite socio-economic and existential concerns about automation, the premise of an automated world seems agreeable and, in many ways, unavoidable, because machines do not get sick and thus will not stop production. While one observer may call AI a device that widens the gap between rich and poor, pushing us further from the idea of an egalitarian society, another may see it as a tool to alleviate human suffering by resolving legal disputes that have been pending in the courts since long before we had heard of a virus such as COVID-19. Therefore, rather than viewing AI as a threat, it is important that we embrace the benefits we can reap from it and befriend this unique man-made intelligence.

AI is not especially new to the legal sector: law firms have already turned to high-tech AI software such as ‘Ravn Ace’ (which converts unstructured data into structured data while automating data extraction), ‘Kira’ (which highlights and extracts relevant paragraphs from documents) and ‘Premonition’ (which analyses the time a lawyer would devote to a case and predicts his or her success rate by weighing all possible outcomes). In fact, in June 2016 J.P. Morgan Chase & Co. implemented a program called COIN, short for ‘Contract Intelligence’, which performs in a matter of seconds contract-review work that would otherwise take humans roughly 360,000 hours. It runs on a machine-learning system powered by the private cloud network that the bank uses.
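To give a flavour of what document-review tools of this kind do, the sketch below flags contract paragraphs by clause type with hand-written patterns. This is purely illustrative: commercial products such as Kira rely on trained machine-learning models, not rules, and every pattern name and regex below is a hypothetical of ours, not their implementation.

```python
import re

# Hypothetical clause patterns for illustration only; a real reviewer
# would use a trained classifier over many more clause types.
CLAUSE_PATTERNS = {
    "termination": re.compile(r"\bterminat(e|ed|ion)\b", re.IGNORECASE),
    "indemnity": re.compile(r"\bindemnif(y|ies|ication)\b", re.IGNORECASE),
    "governing_law": re.compile(r"\bgoverning law\b", re.IGNORECASE),
}

def extract_clauses(contract_text):
    """Return (labels, paragraph) pairs for paragraphs that appear to
    contain a clause of interest, in document order."""
    hits = []
    for para in contract_text.split("\n\n"):
        labels = [name for name, pat in CLAUSE_PATTERNS.items()
                  if pat.search(para)]
        if labels:
            hits.append((labels, para.strip()))
    return hits
```

Even this toy version shows why such tools save review time: the lawyer reads only the flagged paragraphs instead of the whole contract, while the legal judgment about what each clause means remains entirely human.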



One of the most pertinent questions that arises here is whether these platforms are fully developed and equipped to take up the challenges we face in current times. The answer, at present, may be no, but the fact that much more can be achieved through the application of AI cannot be overlooked. The authors in no way mean to harm any profession; they merely hope to help create a platform that can aid the legal fraternity as a whole.

Another area where modifications may be welcomed is the introduction of AI into legal education. For a better understanding of the subject, and to bring everyone along gradually, the example of the Institute of Chartered Accountants of India may be cited: its students are exposed to both the theory and the software applications of information technology during their course. Similarly, the Bar Council of India could introduce the necessary subjects and practical exposure for students across the country so that they become as well equipped with the technology as with the subjects of law. Moreover, it would not be wrong to say that we are moving towards a world where the majority of things involve AI, which would, in turn, open avenues for further development in the field. Lastly, it is still too early for authoritative pronouncements about the precise outcomes of AI, but the future of legal service is neither Grisham nor Rumpole; nor is it wigs, wood-panelled courtrooms, leather-bound tomes, or arcane legal jargon.
