

No. 66 · II - 2019


Magazine of the European Law Students' Association


Demystify Artificial Intelligence

Challenges of the Digital Era in the Field of Democracy and Information

New Technologies and Legal Order: The Case of Blockchain

An analysis of Artificial Intelligence by Charlotte Altenhöner-Dion

Aleksandra Zuchowska comments on the challenges the Digital Era brings

A closer look into Blockchain by Kyrill Ryabtsev

GET AN INSIDER’S VIEWPOINT ON THE HOTTEST LEGAL TRENDS with the UIA-LexisNexis Publications Collection

Current Trends in Start-Ups and Crowd Financing
A useful reference for corporate and commercial lawyers dealing with legal issues relating to establishing or financing a start-up in Europe or the United States.

Natural Resources Exploitation: Business and Human Rights
For legal practitioners in the fields of project development and finance, energy and natural resources, and environmental law, and for anyone interested in the direction that human rights and environmental law are taking in the area of natural resources exploitation.

Compliance - Challenges and Opportunities for the Legal Profession
A useful reference for legal practitioners dealing with day-to-day compliance issues in the fields of corporate, commercial, banking & finance, and antitrust law.

Legal Aspects of Artificial Intelligence
For legal practitioners and academics interested in learning about the future of the practice of law and courtroom decisions.

Recognition and Enforcement of Judgments and Arbitral Awards
For dispute resolution lawyers with international clients as well as academics specialising in the field of international dispute resolution.


store.lexisnexis.co.uk boutique.lexisnexis.fr 2 | SYNERGY Magazine


ELSA International Phone: +32 2 646 26 26 Web: www.elsa.org E-mail: elsa@elsa.org

The Association

The European Law Students' Association, ELSA, is an international, independent, non-political and not-for-profit organisation comprised of and run by and for law students and young lawyers. Founded in 1981 by law students from Austria, Hungary, Poland and West Germany, ELSA is today the world’s largest independent law students’ association.

Synergy Magazine

Synergy Magazine is ELSA's members' magazine, which is printed in 10,000 copies and distributed all over the ELSA Network. The articles are contributions from students, young and experienced lawyers as well as academics.


Human Rights Partner



ELSA’s Members

General Education Partners

ELSA’s members are internationally minded individuals who have an interest in foreign legal systems and practices. Through our activities, such as seminars, conferences, law schools, moot court competitions, legal writing, legal research and the Student Trainee Exchange Programme, our members acquire a broader cultural understanding and legal expertise.

Our Special Status

ELSA has gained a special status with several international institutions. In 2000, ELSA was granted Participatory Status with the Council of Europe. ELSA has Consultative Status with several United Nations bodies: UN ECOSOC, UNCITRAL, UNESCO & WIPO.

General Partners

ELSA is present in 44 countries: Albania, Armenia, Austria, Azerbaijan, Belarus, Belgium, Bosnia and Herzegovina, Bulgaria, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Georgia, Germany, Greece, Hungary, Iceland, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, Montenegro, the Netherlands, Norway, Poland, Portugal, Republic of North Macedonia, Republic of Moldova, Romania, Russia, Serbia, Slovak Republic, Slovenia, Spain, Sweden, Switzerland, Turkey, Ukraine and the United Kingdom.

SYNERGY Magazine



Editor-in-chief: Irem Ozener
Design: Irem Ozener
Proofreading: Pavel Vevera and the ELSA International Proofreading Team

Would you like to contribute with articles or pictures for the Magazine? Please contact ELSA International for further information and guidelines. Contact: synergy@elsa.org

Would you like to advertise your products? Please do not hesitate to contact ELSA International. Contact: marketing@elsa.org

EDITORIAL

As we step into the month of December, leaving behind almost half of the term, it is high time to present to you the first edition of Synergy Magazine of our term and the second edition of the year 2019.

Irem Ozener
Vice President in charge of Marketing
ELSA International 2019/20

I would like to take this chance to thank all my board members for being 100% supportive throughout the process, from the very beginning until now: seven unique personalities with the same ambition, which is to take ELSA one step further in the light of Human Rights. Additionally, my special thanks go to my dear Assistant for Publishing, Pavel Vevera, for giving me all the technical support I needed, and of course to the amazing members of the Proofreading Team for their remarkable work.

As of today, we celebrate the 27th year of ELSA's vision and Philosophy Statement, thriving through the aim of raising awareness of Human Rights across the 44 Member Groups of ELSA and all across the world. In the spirit of this aim, I hope the following pages of this edition of Synergy Magazine bring you another perspective on this topic, the impact of Artificial Intelligence development on Human Rights, the Rule of Law and Democracy, as well as an insight into the ELSA Network. I wish you all a good read. Happy Holidays!



International Board 2019/2020


Demystify Artificial Intelligence


08 Demystify Artificial Intelligence
11 Heading for the Future: The Council of Europe at 70
14 The Challenges of AI and the Practice of Law
17 AI as a Generator of Social Change
20 Artificial Intelligence (AI) and the Future of Law
22 Making it in Manhattan
24 Webinars in Higher Education
26 Innovation and the Law - Where Passion meets Reason
28 Attending EWLA's Congress on the Fourth Industrial Revolution

Impacts of Artificial Intelligence on Human Rights and Contemporary Problems

32 STEP to Unknown
34 INTA's 2019 Europe Conference: Embracing Change
39 Artificial Intelligence in the Judicial System: Arguments For and Against
44 Challenges of the Digital Era in the Field of Democracy and Information
48 AI: Friend or Foe in Protecting Children's Rights?
51 New Technologies and Legal Order: The Case of Blockchain
56 AI Use in Warfare

44 Countries, 1 United Network.

INTERNATIONAL BOARD 2019/2020 As ELSA International of 2019/2020, we proudly present ourselves to our Network and our Synergy readers.

Diomidis Afentoulis, the President of ELSA International, had to travel a long way to the ELSA House. Born and raised in Thessaloniki, Greece, his biggest challenges in the city of Brussels were initially the weather and the lack of homemade food. With music and chocolate constantly on his desk from the early morning hours, he mainly works on the strategic management, the external relations and the representation of the Association. Being very much of a talker, he might ask you more than twice a day "How are you?", while his biggest pride and achievement are the cultural Sundays that he and his Board spend in the ELSA House.

Irem Ozener, the Vice President in charge of Marketing of ELSA International, comes from Turkey and is a Bachelor's student at Istanbul University. This is her fourth year in ELSA as a Marketeer, as she has always been passionate about expressing herself in artistic ways and finds great joy in creating materials and designing with Adobe programmes. Her responsibilities include taking care of the Corporate Identity of the Association, creating a marketing strategy for every project and dealing with external relations. Even though her busy schedule keeps her at the office almost all the time, in her free time she likes to cook Turkish food for her board members and tries to explore Brussels whenever she can find time to leave the ELSA House.

Nana Gegia, the Secretary General of ELSA International, is from Georgia, where she obtained her Bachelor's degree and is currently enrolled in a two-year LL.M. programme at Tbilisi State University. Nana has been a member of ELSA since the autumn of 2013. She thinks that it is an amazing opportunity to be working for the Network with seven different individuals of six different cultural backgrounds. Nana is in charge of the Internal Management, ensuring constant development and cohesion within the ELSA Network. Her goal for this year is a quality evaluation of the current projects and all the sub-areas under Internal Management, such as Statutory Meetings, Area Meetings, ELSA Training, Human Resources and Network communication.

Sotiris Vergidis, the Treasurer of ELSA International, originates from Greece but claims to feel more northern. From the very beginning of his life, he was undecided about major choices such as sweet or salty, hot or cold, north or south. The same was true of his studies, as the final choice of whether to study Finance or Law was an unresolved matter. That is why he decided to do both. His favourite quote is ‘I am on a budget!’, so if you want to get his attention, start with words such as sustainability, liquidity, asset, cash flow or balance sheet. When he is not doing accounting, he loves adventure, travelling and especially one-way tickets. But don't make him decide on the next destination; he will be undecided.


Sarah Ikast Kristoffersen, the Vice President in charge of Academic Activities of ELSA International, is originally from a small town in Denmark but has studied in Copenhagen, Paris and London. Prior to joining the International Board, Sarah served in the National Board of ELSA United Kingdom and in the National Directors Team of ELSA Denmark. Sarah’s responsibilities include organising ELSA’s human rights moot court competition, coordinating the academic competitions of the Network, facilitating legal research through comparative research reports and the ELSA Law Review, as well as coordinating ELSA’s human rights campaigns.

Jakub Kacerek, the Vice President in charge of ELSA Moot Court Competitions of ELSA International, was born and raised in the Slovak Republic but has been living and studying in the beautiful city of Brno, Czech Republic, for the past four years, where he also began his ELSA career. His responsibilities include the organisation of one of the biggest moot courts in the entire world, the John H. Jackson Moot Court Competition on WTO Law, and the supervision and coordination of the National and Local Moot Court Competitions. In his free time, Jakub likes to read, educate himself on today's geopolitical matters and hang out with his friends. He describes ELSA as a hobby and an activity that he enjoys from the bottom of his heart. Because of this, he wouldn't trade his decision to become a member of this amazing association for anything in the world.

Aleksandra Zuchowska (aka Lexie), the Vice President for Seminars and Conferences of ELSA International, comes from Warsaw, Poland. She is a recent law graduate who decided to dedicate her time right after graduation to developing the Association as an executive board member of ELSA. In her daily work, she is in charge of, i.a., the ELSA Law Schools, an international academic project organised annually in around 60 European destinations; ELSA Delegations, through which representatives of ELSA participate in the sessions of international organisations and institutions; ELSA Webinars; and many more. When in Brussels, Aleksandra enjoys long walks in the city as well as exploring the local cuisine. Who does not love fries and waffles, in the end?

Pictured: Nana Gegia, Sotiris Vergidis, Aleksandra Zuchowska, Diomidis Afentoulis, Sarah Ikast Kristoffersen, Jakub Kacerek, Irem Ozener, Meeri Aurora Toivanen.

Meeri Aurora Toivanen, the Vice President for the Student Trainee Exchange Programme of ELSA International, was born in Finland but has lived abroad for the past six years in eight different countries, studying law and applying the acquired skills in working life. Growing up in the countryside did not only teach her survival skills in case she gets lost in the woods or encounters a wild animal; most importantly, it shaped her passion for environmental conservation and social responsibility from a young age. Following three years as an active ELSA member in Tallinn and Maastricht, she was elected as the Vice President in charge of STEP at ELSA United Kingdom for two years in a row. During those two years, she also became the founding President of ELSA LSE SU and joined the ELSA International Team as a STEP Coach. She would not change one second of the countless hours dedicated to ELSA and is happy to continue working for our Association and the members it represents!




Charlotte Altenhöner-Dion

Head of Internet Governance Unit
Secretary of MSI-AUT (Expert Committee on Human Rights Dimensions of Automated Data Processing and AI)
Media and Internet Division, Information Society Department, Council of Europe

What is artificial intelligence? It is curious that we all speak so much about a term that is not clearly defined. In fact, it is safe to assume that we all speak about something different when referring to ‘AI’. There are multiple attempts to define AI, most involving three machine properties: a) the ability to sense the environment, b) the ability to select a specific response to it, and c) the ability to act on that selection. This closely resembles an 1882 definition of animal intelligence as “the capacity to adjust behaviour in accordance with changing conditions”.1 Yet in a machine, this behavioural adjustment can be very powerful. A thermostat can sense the temperature in a room, select the set response to it, and then act upon it automatically by turning itself on or off, or staying put. In line with the above criteria, a thermostat acts in an artificially intelligent manner.

In combination with two important elements that are available in today’s environment, massive amounts of data and unprecedented processing power, AI promises enormous efficiency and effectiveness gains in a wide range of fields, including industrial productivity, health care, transportation and logistics. Many of us reap the benefits of technological advancement. In fact, our use of modern tools for communication, news consumption, education, entertainment, commercial transactions and multiple other facets of everyday life has fundamentally transformed the way our societies interact, are structured and are governed.

But there is also growing concern about the broader implications of the use, and possible abuse, of automated data processing and mathematical modelling for individuals, for communities, and for society at large. Can computational data analytics replace the reasoning of a trained judge when applying the law to a specific context? How does algorithmic decision-making affect the delivery of essential public services or our recruitment and employment conditions? Can individuals remain visible as independent agents in societies that are shaped by automated optimisation processes? And finally: how does the increasing reliance on mainly privately developed and run technology square with the rule of law and the fundamental principle of democratic societies that all power must be accountable before the law?

It is good that we are asking these questions. Economic growth constitutes an important public policy objective, and innovation remains one of its key pillars. Yet innovation today develops at an ever faster pace and prompts societal changes, sometimes unintended or unwanted, at an extraordinary and fundamental level. So far, this has not been accompanied by the type of firm policy and regulatory response that has historically been triggered by innovative processes. The argument put forward is usually that regulation would stifle innovation and could prevent Europe’s ability to compete against the U.S. and China.2 The same arguments were used to prevent the introduction of seat belts in cars. Yet innovation continued, and the introduction of seat belts has not only saved millions of human lives but has also paved the way for more profitable innovation related to safety standards since.

We need to stop thinking of AI development with such awe and feebleness. Mathematical models and statistics have facilitated pattern recognition for centuries. The fact that we now have enormous amounts of data to feed into our calculation systems does not necessarily make their outputs more accurate. If you have more data, you will usually find more correlations. But often, you will also produce more errors, in the form of both false positives and false negatives.3 “If a machine is expected to be infallible, it cannot also be intelligent.”4 We need to demystify AI, integrate the inevitability of machine errors into our planning, and develop contingencies.

It appears from the current state of investigations into the two recent crashes of a Boeing 737 MAX in Ethiopia and Indonesia that the most effective means to avoid an accident would have been for the pilots to deactivate the automated Maneuvering Characteristics Augmentation System (MCAS). But in an environment that fuels the common assumption that machines are smarter than humans and that the likelihood of human error is much higher than that of a machine, which professional is going to trust him- or herself in a situation of stress, when the decision to override a machine may trigger direct responsibility (and personal liability) for the lives of others? All too often, expectations and fears of AI are utopian and vastly exaggerated.

We have to overcome the sense of being overwhelmed by technological advancement and reflect on what kind of society we want to live in. To that end, we need to develop clear legal frameworks to allocate duties and responsibilities to the various actors involved in the design, development and use of AI tools, and we need humans to be not just “in the loop” but always in control. Human control does not only mean that software programmers ensure that new applications are diligently tested for technical errors; it must mean democratically legitimated control and oversight.

There are certain decisions that we should take. Do we think that the further development of lethal autonomous weapons should be legal? Do we think that we should own the data that we enter into the various devices we use? Who do we ultimately want to be in control of decisions that carry significant weight or legal consequences for individuals – or indeed large parts of the population: our governments, or automated software that is usually designed and managed by private actors?

All of these are important questions, and in order to respond to them properly, we first of all need more information. ‘We’ means all of us, irrespective of age, gender, race, language ability or socio-economic background. Too many of us define ourselves as “illiterate” when it comes to the potential impacts of AI development (be they positive or negative) on our lives, our communities, our environment and the way resources are distributed. But how can we go about it?

In February 2019, the Committee of Ministers of the Council of Europe called on states to initiate open, informed and inclusive public debates on the question of where to draw the line between permissible forms of data-driven persuasion and unacceptable manipulation, and to make this a high-priority concern for governments. It equally called on them to consider the need for additional protective frameworks that go beyond current notions of personal data protection and address the significant impacts of the targeted use of data on societies and on the exercise of human rights more broadly.6

Following the example of its previous initiatives towards the adoption of legally binding treaties in fields where new ethical and legal issues emerge from technological advances,7 the Council of Europe is maintaining inclusive and interdisciplinary discussions with policymakers, civil society, independent researchers, the technological community and the private sector to identify AI-related risks that could directly compromise member States’ obligations under the European Convention and other binding instruments, including the European Social Charter. At the same time, the organisation is actively exploring the direct and indirect effects of emerging technologies, including AI, on the exercise of human rights, on democratic societies and on the viability of existing institutional frameworks, and is helping member states to formulate adequate policy responses.

It is not obvious how the continuously evolving challenges in the many fields where data-driven technologies and services are deployed, whether in the health sector, the criminal justice system, or in relation to communication networks, can most effectively be addressed. Building on its existing standards,8 the Council of Europe is also supporting the adoption of adequate legislative and non-legislative measures at the national level with sector-specific recommendations, guidelines and codes of conduct that provide advice through a common and human-centric approach. For impacts that are pervasive and conceivably irreversible, clear, binding and enforceable rules must be formulated and legitimated through democratic processes. This will help us govern AI throughout all stages of its design, development and deployment in a manner that ensures the “primacy of the human being” at all times.

Human-centricity must remain at the heart of all our efforts. More AI by itself is not going to solve our problems, nor can it sustain everlasting economic growth. Without duly respecting the values of democratically governed societies, the economic benefits deriving from AI cannot be realised. We must demystify AI and start governing it as we have governed previous innovative processes – with clarity, purpose and speed. Only then can we reap the benefits properly, sustainably and collectively.

1 George Romanes, Animal Intelligence, 1882 (cited in Alan H. Fielding, Machine Learning Methods for Environmental Applications (2012) 227).
2 See Daniel Castro, Vice President of the Information Technology and Innovation Foundation (ITIF), in Politico, “Europe’s silver bullet in global AI battle: Ethics”.
3 See also “Beware Spurious Correlations”, Harvard Business Review, June 2015.
4 Alan Turing in his lecture to the London Mathematical Society on 20 February 1947.
6 See the Declaration on the manipulative capabilities of algorithmic processes, Decl(13/02/2019)1.
7 See the Council of Europe Convention on Human Rights and Biomedicine (Oviedo Convention, ETS 164) and the Additional Protocol on the Prohibition of Cloning Human Beings (ETS 168).
8 See notably the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (CETS No. 108), as modernised by the Amending Protocol (CETS 223).

Partners' and Externals' Perspective


Cathie Burton

Senior Editorial Adviser Directorate of Communications

Seventy years ago, Europe was a desert. Millions had died, cities lay in rubble and economies in ruins. But out of tragedy grew opportunity - the chance for countries to get together and overcome centuries of conflict by forging a common path forward. Europe needed something it had never seen before - a way to confine horrific human rights abuses to history; to build democracies based on diversity, respect and free elections; and to ensure everyone was treated equally. Those three principles - human rights, democracy and the rule of law - became the bedrock of something new: the Council of Europe.

The idea was born in wartime, as Europe’s leaders puzzled over how to build peace. Winston Churchill coined the name ‘Council of Europe’ as early as 1942 and pushed his vision in 1946, calling for ‘a kind of United States of Europe’. The idea was popular, promoted by the emerging European Movement, whose 1948 conference in The Hague provided the impetus. By May 1949, foreign ministers from ten European countries were signing their commitment to the new institution at St James’ Palace in London.

The post-war years were hard for everyone. Countries struggled to rebuild smashed economies; people struggled to rebuild smashed lives. The new Council of Europe had a full agenda, and one that was coloured by geopolitics. With a confident Soviet Union exerting

power over the countries of Eastern Europe, there was no guarantee that another war was not on the horizon. The Council of Europe gave western countries a way to bond as a political unit, promoting popular democracy as an insurance policy against communism. In parallel, governments were taking the first steps in building the human rights system, with its backbone in the European Human Rights Convention. Taking its inspiration from the United Nations Charter launched just a few years before, the Convention came into force in 1953. By 1959, the Human Rights Court - the body that would judge human rights violations - was sitting for the first time in Strasbourg.

By the 1960s, it was clear that guaranteeing freedom, prosperity and safety for all Europeans demanded both vigilance and forward thinking. Events were testing the promises made a decade before, and as Greece fell to military rule and withdrew from the Council, many people were still trying to build personal security at home and in the workplace. The time had come to look at how to help people in their day-to-day lives - with protection at work, through social security and with medical insurance. The European Social Charter was designed to do just that - match the guarantee of human rights protection with the assurance of better living conditions. Progress on big political issues proved the worth of

approaching common problems together. When the European Directorate for the Quality of Medicines was founded as part of the Council in 1965, it brought in a safety net that ensured all medical products were safe and harmonised throughout the Council region. And as the 60s gave way to the 70s, young people started to demand their own voice in building the future, and the Council of Europe’s youth work began.

The 70s began on a wave of turbulence. British troops patrolled the streets of Northern Ireland and conflict broke out in the Mediterranean. Few countries were spared, with terrorists resorting to violence in pursuit of political aims in Germany and Italy. It was the decade that saw the first inter-State case come before the Human Rights Court, as Ireland accused the UK of practising torture. It also saw the first joint attempt to stop terrorists in their tracks as the Convention on the Suppression of Terrorism came into being. But there were grounds for optimism - Spain became a democracy and joined the Council of Europe family. The Council of Europe turned its attention to new areas - growing international co-operation on education, sport and natural heritage with the innovative Bern Convention to protect the continent’s flora and fauna.

By the 80s, international travel and business were commonplace. Gradually, problems that had once demanded national solutions became cross-border issues, and joint action seemed the best response. The Pompidou Group began its work towards effective drug policies, and new conventions were drawn up to tackle the movement of firearms and to protect privacy in data transfer. International tragedies such as the Heysel stadium disaster, where fan violence led to the deaths of over thirty people, brought international action in the form of new treaties. At the same time, the Council of Europe was taking a tougher stand towards human rights abuses, with a ban on the death penalty in wartime, and an Anti-Torture Convention to protect people in places of detention.

But the greatest changes came at the end of the decade, as the Berlin Wall fell and the countries of Eastern Europe turned their backs on communism. A new Europe was taking shape. In November 1990, Hungary became the first of these countries to join the Council, and by the first years of the 21st century the organisation covered most of geographical Europe. The 90s was a decade of transformation, with the Venice Commission established to guide new democracies in how best to set up their national institutions. Yet wars in former Yugoslavia and in Chechnya showed that hate was alive and well. The commitment to human rights was renewed, with the first Human Rights Commissioner appointed, a reformed Court that gave everyone direct access, new treaties to protect minorities, a ban on any form of discrimination and the creation of a Commission against racism - ECRI. By the turn of the century, all Council of Europe members had agreed to ban the death penalty in all circumstances and had brought in new rights for children involved in court proceedings.


The fall of the Twin Towers in 2001 and the advent of the internet age heralded a world where crime was no

longer confined behind borders. The Council of Europe had anticipated the changes with its Cybercrime Convention of 2001; later, it brought in conventions to tackle terrorism, terrorism financing and corruption, and set up mechanisms for closer international co-operation. Closer to home, there were moves to stop long-standing forms of abuse, such as domestic violence, to tackle modern forms of old crimes, such as human trafficking, and to combat new crimes such as the counterfeiting of medical products. Conventions on these themes brought countries together in innovative and effective ways. Even in these last years, war and conflict have not been far away. Whilst new countries are born in peace, old enmities re-emerge and tensions resurge. People are still tortured. They are still exploited and sold as

slaves, executed and treated unfairly. A technological revolution has brought us speed and comfort, but threatened our personal privacy and the certitude that we can vote freely and live in a democracy safe from manipulation. New forms of extremism resurrect the ghosts of a violent past we thought we had overcome. What of the future? How will we adapt to changes in society, in technology? Could artificial intelligence prove a boon or a threat? Could discrimination be overcome, or might racism, sexism, and other forms of hate mushroom and grow? If the past 70 years have taught us anything, it is to be vigilant and to work together to defend the gains we have made and to face the challenges of the future. As we start our eighth decade, the Council of Europe is needed more than ever.


Partners' and Externals' Perspective


Marc Gallardo Meseguer

Partner, RSM
UIA Deputy Director for Digital Strategy
Barcelona, Spain

The challenges that the legal sector is facing with the so-called "digital transformation" are numerous. Emerging technologies such as machine learning, artificial intelligence (AI) and blockchain are already impacting most areas of the economy and society, and will undoubtedly continue to do so to a greater extent in the coming years. This will affect the work of all the legal professions (including judges, lawyers, registrars and notaries). The change seems inevitable and raises many issues, in particular that of whether AI will, at least in part, replace legal professionals in the next decade.

Although the rise of AI appears to be unavoidable, numerous aspects of its impact on the legal profession have yet to be determined. What will the changes caused by the use of these new technologies look like? To what extent will they affect the way we work on a daily basis, for example in drafting legal documents, searching for case law or offering new legal services linked to the nascent phenomenon of Legal Tech?

Will we be able to anticipate new trends that have yet to emerge? As some well-known authors have pointed out, our sector is likely to change more drastically over the next two decades than it has done in the last two centuries. Richard Susskind goes so far as to state that "traditional lawyers will be largely and in the long-term replaced by advanced systems, or by less expensive workers supported by standard technology or processes, or even by laypersons armed with online self-help tools." We do not necessarily need to adopt this somewhat apocalyptic view of the legal profession; however, the truth is that tasks which can be automated by algorithms will be more affected by AI than those that require typically human skills, such as creativity and the detection and management of emotions. Given the current state of technology, it is very difficult to replicate such skills using machines.

It may also be that this horizon is temporary and that innovations will one day challenge our current hypotheses. Only time will tell. For the moment, as long as the tasks of each of the legal professions cannot be automated, flesh-and-blood lawyers and judges will continue to exist. By contrast, repetitive tasks with low added value, such as document management, are highly likely to be automated and industrialized using the Legal Tech tools that are already available. Other tasks could be partially automated in a short time, such as drafting contracts, due diligence reviews, e-discovery, or even drafting judgments (although France has recently passed a law forbidding the use of predictive justice tools, with penalties of up to five years in prison for anyone who publishes statistical information on judges' decisions and patterns of conduct regarding the sentences they hand down). However, these innovations should not necessarily be seen as a threat to the survival of some legal professions, but as an opportunity to use new technological tools to support many of the tasks that we perform daily. This will allow us to focus on activities that require greater analysis and creativity, and where legal professionals can add considerable value. This man-machine collaboration will turn out to be positive and sustainable if we take the necessary preparatory steps to understand the technologies and how they can serve us. This will ensure effective results in our work, even in an increasingly competitive environment. It will be necessary to acquire new skills and knowledge. For example, we will need a basic understanding of computer programming and of the methods and techniques underlying the technologies we use to provide our services. Universities and law faculties will have to adapt their courses in order to train their students in Legal Tech and in the potential legal consequences of using emerging technologies.
One of the aspects taking on increasing importance in the debate on the regulation of AI is the set of ethical rules that must govern its use. The European Ethical Charter on the use of AI in judicial systems has identified the following core principles to be applied in the field of AI and justice:
1) Principle of respect for fundamental rights: ensuring that the design and implementation of artificial intelligence tools and services are compatible with fundamental rights;
2) Principle of non-discrimination: specifically preventing the development or intensification of any discrimination between individuals or groups of individuals;
3) Principle of quality and security: with regard to the processing of judicial decisions and data;
4) Principle of transparency, impartiality and fairness: making data processing methods accessible and understandable, and authorizing external audits;
5) Principle "under user control": precluding a prescriptive approach and ensuring that users are informed actors and in control of their choices.
There are other similar recent initiatives promoted by the European Union concerning the ethical aspects of AI. The European Commission's Communication of 8 April 2019 on Building Trust in Human-Centric Artificial Intelligence is one of the most prominent proposals so far. In this Communication, the Commission recalls that in order to achieve 'trustworthy AI', three components are necessary: (1) it should comply with the law, (2) it should fulfil ethical principles and (3) it should be robust. Finally, another aspect that is being widely discussed is civil liability in the use of AI applications. Several case studies will be devoted to this subject during the session on 9 November. The "innocence" or "guilt" of AI will be examined in a simulated trial, with the parties defending their respective positions, and a human judge will render the final decision. We therefore invite you to participate in this interesting debate on the paradigm shift that has already started in all the legal professions.




Second Vice-President at the European Communities Trade Mark Association

AI is upon us. Many see the emergence of artificial intelligence as a unique and unforeseeable threat. If we step back, we can actually distinguish at least two reasons why AI is perceived as a threat. The first and most widespread one stems primarily from the general human difficulty of embracing the new and the unknown, especially when the novelty is of such magnitude. This is certainly compounded by the intuitive fear that this particular new technology brings a very deep change that can, potentially, eliminate many jobs or even entire professions. In the case of lawyers, this threat is rather unprecedented, as the profession has been developing uninterruptedly for centuries, with an ever-growing number of professionals and a rising reputation and standing in the societies they served. These fears are not unfounded: lawyers, as an educated part of the population, quickly grasped the full potential of the emerging technology, which does indeed bring the potential of automating large segments of traditional legal work. The survival instinct of the entire profession kicked in, and lawyers turned their sights away from the imminent developments for a couple of decades. The second ground for perceiving the emergence of AI as a threat is much more elaborate, in the sense that it stems from an understanding of the inherent traits of AI. Examples of such threats are the issues of "explainability" and "unlearnability". The role of the rule of law has been rising in importance for centuries, to the point of becoming the very central axis of social organization. Feudalism was replaced by the rule of law and civil democratic societies as


the forces of social reorganization were unleashed by the industrial revolution, aided by the way we began using law to regulate individual rights. There can be very little doubt that law and the legal system, followed by the rising role of the intangible assets protected by law, have brought unprecedented levels of development to a number of societies on the planet. It could even be asserted that this potent combination of judicial and legal systems, in conjunction with intellectual property in particular, has produced the highest rates of development in the entire human history. Law has served us as a potent booster for development like no other social institution.

While it is clear that law will not disappear in future societies, it is becoming increasingly certain that its role will shift and its form will morph, in accordance with the so-called "principle of layering" in technological development. To those who have noticed that human innovations tend to survive in layers, it is clear that print, radio and television all survived the onset of each new technology, and that they have all continued to thrive after the emergence of the internet, albeit with different roles and functions. An analogy can be made with developments in the social realm, where the forms of human invention also tend to persist over the centuries without disappearing, unfortunately including slavery, which likewise morphed and shifted to the social margins.

Feudalism may ultimately have been replaced by civil society and the rule of law as the central axis of organization of modern societies; yet while it lost its dominant role, it did not disappear. While we were proudly and rightfully touting this achievement as progress of the human race, feudalism morphed, shifted and transformed itself into a different quality, and has persistently remained part of the social organization. It is impossible to overlook its morphed remnants in the very visible role of constitutional monarchies, where royal sovereigns still play an important part. Not least, in many countries, republics and monarchies alike, there are associations composed of individuals who are heirs to formerly noble families and who keep alive the memory of the feudal social structure. It is possible to imagine the legal system morphing and shifting into its future role in a similar manner.

Therefore, the emergence of AI does not mean an end of the rule of law. Indeed, moving beyond the rule of law requires its strong functioning in the background. It does, however, likely signify a radical shift in the role of the rule of law. If we try to read the future, we can first spot a chance for AI as a potential savior of the legal system. As their inconsistencies increase under ever-growing complexity, legal systems are ever less suited to serve as guidance to their societies, which they held as one of their most important traditional roles. Bringing consistency to complex legal systems in the conditions of ever more populous, complex and diverging societies is something legal systems were not designed to do at such a massive scale. The doubling of the planet's population alone would have seriously endangered the cohesion of legal systems, but the growth of social freedoms and the multiplication of interactions enabled by our dependence on the internet sealed the fate of the traditional role of legal systems for good.

I would argue that the answer lies in abandoning skills as the core foundation of the legal profession. If we accept properly structured AI as a replacement for basic legal skills, what remains? It is true that this means the loss of traditional jobs for many legal assistants, trainees and associates, as well as for many attorneys. However, in addressing this threat it already becomes clear that the morphing of the legal profession into a creative profession, rather than a skills-based one, will be needed for its survival. In order to bridge the huge gap between where we stand and where we need to be to adequately address the needs of contemporary and future societies, the legal profession needs to employ traits such as imagination and creativity. In other words, the legal profession needs to morph into a creative and imaginative profession, rather than remain a repetitive, skills-based craft.

This means that innovation in the legal profession cannot remain limited to the introduction of new products and services and the invention of new marketing methods in the societies where they are permitted. In societies that have realized that innovation is a condition of survival, such efforts do not amount to a sufficient guarantee of the profession's justification and survival. By utilizing design thinking and imagination, the lawyers of today and tomorrow will aim to shape the solutions we need to reshape the status quo and realign social forces for greater efficiency. In that sense, as lawyers, we need to pay more attention to technological and social developments, and must not shy away from profusely and abundantly innovating the solutions our societies need. To achieve this, we need not only an understanding of new technologies, but also a different legal education and the ambition to contribute to our society in innovative ways. If AI pushes us in this direction, it might indeed prove to be the saviour of our profession.



It's an exciting time for those entering the legal profession, especially if the creation of new laws is something that sparks your interest. With the rapid development of AI, the need to revamp the industry and create new laws keeps growing. Current legal professionals tend to look at these advancements with fear; however, students and those who are newly qualified are ready to embrace the changes. AI in legislation will be the focus of this article.


Artificial Intelligence in Legislation

You are part of the generation that could be involved in the process of creating new laws on AI. Yes, this may seem daunting, but when it's broken down it becomes much more manageable. Over the past few years, our firm's legislative specialist has written frameworks on the different approaches to legislating AI and has concluded that there are three ways to make this process easier. The first approach is to use existing law to address the issues of AI rather than reinventing the wheel. For example, in 2017 MEPs called for 'urgent' new laws on AI and robotics, focusing on liability issues with self-driving cars. In this instance, existing law on product and manufacturer liability would be ideal. The second approach is the adaptation of existing laws, which is commonly seen in the area of autonomous, self-learning AI. If we evaluate this in the same way we evaluate other sentient, thinking beings, we can adapt laws to address how we handle each stage of AI. For example, dogs are unable to act autonomously but do learn from their environment. If a dog harms another, the law may require it to be destroyed. If AI has the same level of autonomy as a dog, the same would apply. The final approach is creating new laws. However, even in this circumstance the wheel does not need to be completely reinvented. We can look at complex issues which have been successfully legislated in the past and draw from those examples. The best example is legislating for the internet. It may seem impossible to create law for something that seemingly has no borders; however, a long-standing body of law already addresses a similar problem: maritime law. Even experts are surprised at how applicable it is. You can now see why I believe this process is more manageable than you may have originally thought. I look forward to seeing which approaches are adopted in the future.


I'll never forget it. It was in that moment that I knew it was all worth it.


Aioffe Moore Kavanagh

Associate Attorney, Maroney O'Connor LLP

Following on from the brilliant news that I had passed the New York Bar Exam, I channeled all my excited energy into the next obstacle I had to overcome: getting a job in New York City. There is no quick fix for landing an attorney role in the Big Apple. It's fiercely competitive and hugely oversaturated, and as a non-U.S. citizen, there is the whole visa process to consider. I spent five months applying for jobs on every jobs website I could find. When I ran out of vacancies to apply for, I googled "litigation firms in New York City" and started sending my resume and cover letter to every single firm on the list, from A to Z. I was offered an associate position quite early on in my search, which I decided to turn down after much deliberation. I wanted to be a lawyer in New York so badly it felt wrong not to take the offer, but I knew I wouldn't have been happy there. The firm was located outside of the city and I really wanted to be in Manhattan. If I had ignored my gut feeling I wouldn't have ended up where


I am today! I probably sent over 1,000 applications and had around 12 interviews before I finally landed my dream job. Currently, I am a practising associate at the renowned insurance defense firm of Maroney O'Connor LLP in Manhattan's Wall Street district. I spend most of my days in the courtroom, engaging in settlement deliberations with judges and my adversaries, arguing motions and attending conferences. I have an amazing case list that is varied and keeps my work exciting: from construction accidents to property damage, product liability to motor vehicle accidents, I handle it all. I am lucky to be a member of a firm that gives me free rein on my cases and allows me to steer them completely independently, from the initial stages of discovery right through to settlement. I am looking forward to future accomplishments at the firm, particularly trying my first ever case.


How to explain complex topics using online courses


Magdalena Klimko

PR & Communication Specialist at ClickMeeting

With the technological leap that's happened over the last 20-30 years, online studying has become a standard in education. Most universities now offer e-learning courses or subjects partially held online. How can webinars fit into this space? According to ClickMeeting's State of Webinars 2019 report, 34% of all events held on the platform were online courses. This means there is a significant need for e-learning experiences. Students from all around the world can enroll in numerous courses run by even the most remote universities or lecturers. Let's also keep in mind that many online courses are created by experts without an extensive academic background. Take Arturo Tedeschi as an example. This world-class architect and computational designer runs courses on the ClickMeeting platform and shares his professional experience in a new kind of algorithmic-based design. This way he can reach a wide audience, which wouldn't be possible with traditional education. Arturo, equipped with tools such as screen sharing, a whiteboard, or presentations, can easily explain even the most complex topics to his audience. He receives very positive feedback after his online classes, and webinars have now become part of his professional career as a teacher.



The interesting thing about webinars is that they are no longer limited to live events. Any teacher can prepare a full e-learning experience in advance using automated webinar features. The whole flow is thoroughly explained in many of ClickMeeting's online resources. The main idea behind it is gathering materials, such as pre-recorded webinars, presentations, CTA (call-to-action) button ideas, PDFs to upload for participants, tests, etc. This can be especially useful for lecturers who want to automate their teaching process and for students who wish to participate despite not physically being in the lecture hall or classroom. Webinars have been on the rise for a couple of years now and have proven to be an effective way to gain knowledge even on the most complex topics, whether it's AI, computational design, or physics. With the right set of tools, they can all be explained through online courses and lectures. ClickMeeting is proudly partnering with ELSA International to provide the best webinar and online meeting experience.

worknights, hours of study and group meetings - had paid off. I made it into the law school of my choice. I had never been more excited to be accepted into a school. I attended the Admitted Student Day program held on campus and had the opportunity to hear the experiences of alumni, current students, professors and the Dean. It was on that day that I felt, for the first time, what it was like to become part of a school community, one that not only cares that students finish the program successfully but also gives them the tools to succeed after graduation. After beginning my studies, I realized that my intuition was correct. With the small size of the LL.M. class and the unique environment that comes from a multicultural student body, I have been able to connect with my LL.M. and J.D. classmates, build relationships, hear from alumni, become part of organizations and volunteer in my community. After reading some of the history of the school, walking the halls, and speaking to professors and alumni, I am aware of how special my school is. Every day, as I step into the courtyard, I feel proud to call Brooklyn Law School my home.




Diomidis Afentoulis

President ELSA International 2019/2020

Are the lawyers of today prepared for the legal reality of tomorrow? Do new technologies affect litigation? Are judges losing their work to robots? What should the role of digital advocacy be? The Annual Congress of the International Association of Lawyers (UIA) put a lot of important yet difficult questions on the table. And surprisingly, it answered most of them. Technology research and advisory company Gartner forecasts that around one-third of all current jobs will be automated by 2025, yet only 2% of all legal department budgets are spent on technology. Jean-Pierre Buyle, President of the French- and German-speaking Bar Association of Belgium, stresses how


important it is that the legal sector invests more in technology and in the training of lawyers, with the goal of preparing them for the change to come. Artificial intelligence calls for "an augmented lawyer", a lawyer with a different kind of intelligence, training and skills; it will be the key to everything from the management of future law firms to the modernisation of legal counselling and the widening of access to justice. "There won't be fewer, but different tasks," says Ian McDougall, Vice President and General Counsel of LexisNexis. "AI and lawyers should and will complement each other. Legal technology is a huge opportunity; what a lawyer called professional expertise in the past will now be information and data." His only advice? "Adapt and change! The legal sector shouldn't fight the

change that technology brings, but embrace it." Most, if not all, standard and repetitive processes are likely to be taken over by automation in the legal industry. US law firms invested $1.5 billion in RPA (Robotic Process Automation) in legal sector offices over the past 24 months, according to this year's Legal Tech Sector Landscape Report by Tracxn. At the same time, only four of the top hundred AI companies are situated in Europe. "The reason? The more you restrict data, the more you restrict the information flow, the more you restrict AI development," stresses Mr McDougall. However, as much as we should be prepared for the change, we should also be ready to protect individual and collective rights. The future regulator should know and understand technology. Education on legal technology should be part of university curricula all over the world. Speaking out and raising awareness about the fundamental changes to come and the need for their legal regulation should be part of our role as law students, young graduates and lawyers. Robotic process automation, law hackathons, virtual law firms and legal counselling via AI: they are all slowly preparing us for roles inside law firms that we cannot even envisage right now. They prepare us for a different kind of lawyer. "At the end of the day," as a speaker at the UIA Congress said, "gazing at the future of the lawyer shouldn't cause fear and pessimism. The legal sector should adapt and embrace the changes that RPA brings to legal practice management. Because at the end of the day, technology has made it even cooler to be a lawyer."


International Focus

Other than being a very educational experience, the Congress was also a great way for ELSA to show support to our partners.


Vice President in charge of Academic Activities ELSA International 2019/2020

From 21-22 November 2019, I had the honour of representing ELSA at the Congress of ELSA's partner, the European Women Lawyers Association (EWLA), in Madrid. The Congress tackled the issues presented to society by the Fourth Industrial Revolution and was aimed at ensuring that future technological developments further inclusivity and are made with a human aspect in mind. Over two days, we discussed three pillars of the Fourth Industrial Revolution: education, business and ethics. The Congress was co-organised by the Council of Europe, which is ELSA's human rights partner. It was opened by two very interesting keynote speakers, Professor Gina Rippon and Dr Geneviève Tanguay respectively, from whom we learned that female and male brains are in fact not different, but that gender biases are still embedded in emerging technologies, causing moral dilemmas, widening inequality and creating risks for data protection. With those concerns in mind, we set out to discuss the means by which we can create an inclusive and secure future through technology.



As regards education, we learned that only one third of the world's population has access to the internet and that women gain access to the internet more slowly than men. We further learned that half of the European population is digitally unskilled. This led to a discussion about artificial intelligence, the challenges and opportunities offered by emerging technologies, and how the rise of AI means that we need to look differently at the future of work. During the second day of the Congress, the focus moved to how businesses may contribute to creating an inclusive future. A round table of female leaders brought us on the journey of establishing the W20 and showed how their work advances gender equality in G20 negotiations. Of particular interest to me, as a representative of ELSA, was the address by Dr Verica Trstenjak, who explained the function of human rights in the digital era and concluded that we need to adapt the current human rights protection rather than create new digital human rights. We further learned why we need women in digital leadership, how sextortion is a threat to women across the globe and how we can

increase our impact by becoming change agents. The second pillar was concluded by a round table on female-led start-ups, where it became apparent that Europe is behind the rest of the world due to a fear of failing. The final pillar concerned ethics in development, research and business conduct. We discovered how the Congress co-organiser, Path2Integrity, works to further integrity in research and spreads its message through educational institutions across Europe. This was especially interesting for me, because I am in charge of the legal research work conducted by ELSA, and I am always looking for ways to improve the quality of our research. Other than being a very educational experience, the Congress was also a great way for ELSA to show support to our partners. It is important for us to find partners with whom we share values, and EWLA stands for fundamental rights and gender inclusivity. I am therefore very pleased to have participated and encourage all ELSA members to follow the initiatives of EWLA!








Location: ELSA House, Brussels, Belgium Contact: elsa@elsa.org

Location: ELSA House, Brussels, Belgium Contact: elsa@elsa.org







Website: legalresearch.elsa.org


Location: Ljubljana, Slovenia Website: elsa.org/elsa-day

Location: Strasbourg, France Working Language: English




/NOVEMBER Location: Brno, Czech Republic






Location: Bangkok, Thailand


Location: Nairobi, Kenya


Location: Malta Website: icmlviv.com



Website: lawschools.elsa.org











Location: Vilnius, Lithuania


Location: Göttingen, Germany

Location: Munich, Germany









Location: ELSA House, Brussels, Belgium Contact: seminarsconferences@elsa.org



Location: Geneva, Switzerland


Location: Puerto Vallarta, Mexico


Website: step.elsa.org





Is democracy in danger in the information age?


Vice President in charge of Seminars & Conferences ELSA International 2019/2020

"Is democracy in danger in the information age?" is the question that over 100 speakers and 700 participants of the World Forum for Democracy, an event organised by the Council of Europe that took place at the beginning of November in Strasbourg, France, tried to answer. It is also the question that 15 representatives of ELSA attending the event were trying to find an answer to. But why was this question even posed, and why was such importance given to it? With the development of technology, different aspects of our lives are naturally subject to change. Although society in general is always a bit sceptical when it comes to change, fearing what is new, we have happily welcomed advanced technology into our lives and enjoy it. From shopping or dating online to drafting and negotiating contracts on online platforms (all of these, by the way, very handy for young lawyers like myself), technology simply saves effort and time. And as time is one of the things that sadly cannot be recycled, this proves once again what a precious gift technology is. However, as there are always two sides to every story, the other side, in the case of digitalisation, is far more interesting. And indeed, that story, with regard to the implications of advanced technology for information and democracy in particular, was one of the topics discussed at the World Forum.

In the 21st century, the easiest way to access information is probably the internet, and social media in particular. Again, this is a gift from a caring mother, technology, which saves us time and adds comfort. Fun fact: according to a report by the European Broadcasting Union,1 social networks and the internet are also the least trusted media nowadays. Why? To start with, let us first visualise a daily situation that at least half of us find ourselves in when it comes to social media and information. When scrolling through Facebook or LinkedIn over a morning coffee, sometimes, even if we do not want to, we encounter news or interesting information posted by, say, a friend from elementary school we no longer even speak to. This can sometimes affect our point of view on certain matters, be they political ones or those concerning baking a fluffy cheesecake (both close to my heart). At this point, it does not really make a difference whether it is about the perfect ingredient for a cake or what a politician promises to do if elected. It is all about seeking and finding reliable information online. Hence, the question to be asked at this point is: do we fully trust what we see or read on social media? According to the aforementioned research, the answer would be negative; however, from a personal perspective, in various circumstances we sometimes choose to believe at least part of it. Why wouldn't we, though? Here, what is important to recall is the growth and
1 https://www.ebu.ch/publications/mis/login_only/trust-in-media


expansion of phenomena such as fake news, which have found a safe haven especially online. Fake news is a major problem nowadays, affecting our perception of the media on the one hand, and affecting democratic processes on the other. This is why, in order to restore trust in the media and guarantee fair participation in democracy, the participants of the World Forum were also trying to find a solution to fake news. Interestingly, in the view of some speakers at the event, we cannot really find a solution to fake news, as any solution would always limit our freedoms to a greater or lesser extent, in this case especially freedom of expression. While Article 10 of the European Convention on Human Rights,2 which guarantees that freedom, already sets out some restrictions on its exercise, in the opinion of some speakers, restrictions on exercising freedom of expression in the online environment would probably have to be more severe and more difficult to define than those already found in the text of Article 10. On the other hand, other speakers suggested that establishing some body, e.g. a commission responsible for "online information governance", might be a good solution. It is difficult to agree or disagree with any of those views, as we can easily find both pros and cons for each. In the end, whether to trust and believe what we read or see on social media seems to be a matter of our own judgement, doesn't it? Another problem connected to the issue of the reliability of
2 https://www.echr.coe.int/Documents/Convention_ENG.pdf

the information found online, which was tackled during the World Forum, is the implications of artificial intelligence for the dissemination of information. As a conclusion of the discussions held on this matter, the need was highlighted for creating a set of basic principles which would aim to facilitate the development of technology and, at the same time, prevent any possible interference with democratic processes. Furthermore, the best way to achieve that goal is for humans to develop an understanding of how these technologies work. This, according to some of the speakers and participants, would also enable us to take full advantage of artificial intelligence and make positive use of it. I believe this contemplation on the relationship between information, democracy and technology (maybe quite long, but hopefully interesting for you, dear reader) should end with answering the question asked in the very first line of this article, namely: "Is democracy in danger in the information age?". My answer would be that it likely is. Technology can be, and probably is, one of the greatest and worst things that have happened to mankind. It is, however, in our hands to decide what direction we will take in order to make it affect our lives positively (for a recap on this matter, take a look at paragraph two of this article). Can we avoid the negative effects? Probably not; however, what we can do is learn to treat and deal with them as normal side effects of any medicine that serves as a means to cure and improve certain conditions.

International Focus

Exploring the dimensions of your own comfort zone

STEP TO UNKNOWN Meeri Aurora Toivanen

Vice President in charge of Student Trainee Exchange Programme ELSA International 2019/2020

The human brain is so silly. According to research, the feeling of uncertainty you get in a foreign environment is associated with the feeling of strong pain. Thus, we are programmed to gravitate within the boundaries of our comfort zones, as we intuitively wish to avoid situations that make us feel uncomfortable and exposed to the unknown. The exact definition of the "comfort zone" is essentially subjective, varying from one person to another. For me personally, it is a territory where one feels confident about receiving positive feedback for one's ability to navigate certain situations well, whether that is engaging in a discussion in one's native language, giving a presentation on a subject of one's specialisation at school, or following the conventional and socially accepted path of professional development. And this notion of the comfort zone I


have always hated. Exploring the dimensions of your own comfort zone and challenging those limits is an optimal way to discover your full potential and unlock unprecedented growth, both as a person and as an aspiring professional. To that end, the Student Trainee Exchange Programme (STEP) offers, twice a year, several excellent opportunities to test these limits with ELSA's unfaltering support every step of the way. The things I am saying here are not motivated by my current position as the Vice President in charge of STEP of ELSA International. Rather, my conviction is rooted in my experiences as a STEP Trainee myself. I did my first STEP Traineeship in Izmir, Turkey, at an international law firm the summer I finished my undergraduate studies. My second STEP Traineeship happened two years later. By then I had already completed my postgraduate

studies in London when I was offered the chance to go to Geneva, Switzerland, to work in my field of specialisation at a trade law consultancy. Happening at very different stages of life, both of these experiences contributed in unique ways to my personal victories of expanding my worldview and understanding my strengths. First, STEP Traineeships immerse you in a different cultural setting. By working from Monday to Friday and returning to your quarters in the evenings, joining the buzzing mass of other workers, you truly experience daily life in your city. Whether it is feeling overwhelmed at a Turkish bazaar or wondering why the largest French-speaking city of Switzerland goes to sleep so early, each STEP Traineeship challenges your notion of 'normal'. And before you even realise it, you no longer shrug at all the quirky particularities of your host environment but smoothly bargain the best deals for your weekly shopping and know where the locals spend their evenings. Second, the cultural differences are reflected even inside the office of the STEP Traineeship. If you thought that legal work is the same wherever you are and whatever you do, STEP will prove you wrong. For someone coming from a Nordic country, starting

the workday with the team getting together to drink Turkish coffee and share updates about everyone's private life is something quite particular. In contrast to this open and hospitable office culture of a Mediterranean city, working in an environment governed by Swiss punctuality and personal space required quite some adjustment of the mindset again. Finally, the cultivation of a social network happens almost organically during your STEP Traineeship. In addition to the team with whom you work during your STEP placement, the Hosting ELSA Group is there for you as a STEP Trainee. This network functions as your safety net through thick and thin, whether you are an independent globetrotter or not. From grabbing a drink after work to going on weekend trips, your local network certainly helps you to push the limits of the infamous comfort zone. I recognise how easy it is to accept, from the first year of law school, what is deemed the conventional path towards a professional qualification. It is equally easy to apply for STEP Traineeships and go see for yourself which doors a STEP Traineeship (or a few) opens for you.




Assistant for Teams in the EHRMCC ELSA International Team

There are two prospects: one in which artificial intelligence (AI) drives inequality and is even used to deny human rights, and an alternative future in which the ability of AI to propose solutions to increasingly complex problems becomes the source of the realization of all human rights. The first rule in shifting AI's potential toward people's welfare is analyzing its impacts from every perspective. Human rights are universal and binding and are codified in a body of international law. The rights that AI affects are largely reflected in the documents that form the basis of international human rights law. For each human right discussed below, the focus is on how AI violates or risks violating that right, and on the prospects ahead. Using AI is tempting for governments, as it eases the burden on authorities in many ways. The increasing use of AI in the criminal justice system can hamper personal liberty.1 One example is the recidivism risk-scoring software used in the U.S. criminal justice system. Since risk-scoring systems are not prescribed by law and use data arbitrarily, they may lead to unlawful and arbitrary detentions. Moreover,


1  Universal Declaration of Human Rights (hereinafter UDHR), UN General Assembly, 10 December 1948, Articles 3, 6, 7, 8, and 10; International Covenant on Civil and Political Rights (hereinafter ICCPR), UN General Assembly, 16 December 1966, Articles 9 and 14.

broadly deployed facial recognition software within law enforcement raises the risk of inaccuracies due to misidentification. AI-based algorithms are increasingly used in the context of the civil and criminal justice systems to support or replace decision-making by human judges. Therefore, great care must be taken to evaluate what such systems can deliver and on what conditions they may be used, so as not to undermine the right to a fair trial. AI systems are often trained through access to and analysis of big data sets. Data protection primarily concerns the protection of any personal data related to you and is closely linked to the right to privacy within the UN human rights system. Data is also collected to create feedback mechanisms and to provide for calibration and ongoing refinement. This data collection raises issues concerning the rights to privacy and data protection.2 Internet companies use AI to flag posts that violate their terms of service. For example, YouTube removed over 100,000 videos documenting atrocities in Syria after they had been flagged.3 Governments exerting formal and informal pressure on companies to solve the
2  Article 12 of the UDHR; Article 17 of the ICCPR.
3  Kate O'Flaherty, "YouTube keeps deleting evidence of Syrian chemical weapon attacks," Wired, June 26, 2018, http://www.wired.co.uk/article/chemicalweapons-in-syria-youtube-algorithm-delete-video

problem of alleged terrorist content, hate speech, and so-called "fake news," but without clear standards or definitions, has led to the wider use of automated systems.4 Since AI is imperfect and companies are pressured to remove questionable content quickly, much content is deleted by mistake. Authoritarian governments can use similar technologies to strengthen censorship. Thus, AI may have explicit and implicit impacts on the right to freedom of expression.5 One of the emerging direct threats to free expression is bot-enabled online harassment. This kind of severe online harassment has a chilling effect on freedom of expression, especially for members of marginalized populations. AI could also be used to identify and destroy religious content. This would constitute a direct violation of freedom of religion6 if people were not able to display religious symbols, pray, or teach about their religion online. Finally, AI-enabled censorship can be used to restrict the freedom of association7 by removing groups, pages, and content that make it easier to organize gatherings and collaboration. Given the important role of social media in organizing protest movements around the world, the use of AI could have a pervasive effect that interferes with freedom of assembly worldwide.
4  The Rise of Digital Authoritarianism, October 2018, Freedom House; https://freedomhouse.org/report/freedomnet/freedom-net-2018/rise-digital-authoritarianism
5  Article 19 of the ICCPR.
6  Article 18 of the ICCPR and Article 18 of the UDHR.
7  Articles 21 and 22 of the ICCPR.

AI models are designed to sort and filter, by ranking search results or classifying people into groups, in ways that have a significant impact on the rights to equality and non-discrimination.8 The use of AI in some systems can reinforce historical injustice in everything from prison sentencing to loan applications. In 2015, researchers at Carnegie Mellon found that Google displayed far fewer high-paying job postings to women. Google's personalized ad algorithms are powered by AI, and they are taught to learn from user behavior.9 The more people click, search, and use the internet in racist or sexist ways, the more the algorithm translates that into ads. The role of AI in creating and disseminating disinformation casts doubt on the notion of fair elections and threatens the right to political participation and self-determination.10 The 2016 U.S. presidential election showed how a foreign power can use bots and social media algorithms to increase access to false information and potentially influence voters. Just as people can use AI-powered technology to help spread disinformation or influence public debate, they can use it to create and distribute content designed to incite war, discrimination, hostility, or violence in violation of the prohibition of propaganda.11 Governments around the world have deployed "troll armies" to stoke conflicts for political ends. Soon, they could use chatbots to incite racial and ethnic violence in regions that are already full of tension, or use fakes to simulate world leaders declaring war or instigating armed conflict. Although the right to work and to an adequate standard of living12 are not absolute and unconditional rights, they do require states to work toward achieving full employment. The role of AI in workplace automation can pose a real threat to the right to work; it may prevent some people from accessing the labor market in the first place. Automation has resulted in job losses in certain sectors, and AI is projected to accelerate this trend. Although there is considerable controversy regarding the degree to which work automation will be achieved, there is no doubt that AI will lead to changes in the labor market, both through job creation and job destruction. AI's impact on the right to health13 includes some of the most promising and effective applications of AI in healthcare, from helping doctors diagnose disease more accurately to providing more individualized patient treatment and increasing the availability of medical advice for patients. However, there are also ways in which AI could jeopardize the right to health in cases of discrimination. In the context of the well-known tension between international humanitarian law (IHL) and international human rights law, the right to life is of crucial importance in both legal regimes; it can be limited due to the
8  Articles 3 and 26 of the ICCPR; International Covenant on Economic, Social and Cultural Rights (hereinafter ICESCR), UN General Assembly, 16 December 1966, Article 3.
9  Julia Carpenter, "Google's algorithm shows prestigious job ads to men, but not to women. Here's why that should worry you." The Washington Post, July 6, 2015; https://www.washingtonpost.com/news/theintersect/wp/2015/07/06/googles-algorithm-shows-prestigious-job-ads-to-men-but-not-to-women-heres-why-that-should-worry-you/?noredirect=on
10  Article 25 of the ICCPR.
11  Article 20 of the ICCPR.
12  Articles 23 and 25 of the UDHR; Articles 6, 7, and 11 of the ICESCR.
13  Article 12 of the ICESCR.

application of IHL in armed conflict, though only in accordance with the principles of IHL, which are of a customary nature. Fully autonomous weapons systems are currently under development in many countries, and they are likely to suffer from AI's inability to deal with nuance or unexpected events. In a conflict situation, this could result in the death or injury of innocent civilians that a human operator might have been able to avoid. The international community is already actively pushing to ban fully autonomous weapons through the Campaign to Stop Killer Robots. In conclusion, I would like to recall the open letter to the UN from Elon Musk and 116 experts calling for an outright ban on killer robots, and their words: "We do not have long to act. Once this Pandora's box is opened, it will be hard to close";14 and the touching speech of the Council of Europe Commissioner for Human Rights in Helsinki in 2019, with her clear message: "if a state actor harms me, I can bring the state to court, but I cannot sue an algorithm for harming me."15 I completely agree with these narratives and encourage lawyers to actively participate in the analysis of AI from a legal point of view in order to steer these processes for good.

14  Samuel Gibbs, "Elon Musk leads 116 experts calling for outright ban of killer robots," 20 August 2017, The Guardian; https://www.theguardian.com/technology/2017/aug/20/elon-musk-killer-robots-experts-outrightban-lethal-autonomous-weapons-war
15  Council of Europe Commissioner for Human Rights, Speech, High-level conference "Governing the Game Changer - Impacts of artificial intelligence development on human rights, democracy and the rule of law", 26-27 February 2019; https://rm.coe.int/hlchelsinki-feb-2019-commhr-interventionfinal/16809331b8


Arising Challenges in the Evolving Reality

INTA’S 2019 EUROPE CONFERENCE: EMBRACING CHANGE Anna Wojciechoska Director for ELSA Delegations ELSA International Team

Having this unique opportunity, a group of five ELSA Members participated in the 2019 Europe Conference, themed 'Embracing Change', organised by the International Trademark Association (INTA). The event took place in the beautiful scenery of Paris, at The Westin Paris-Vendôme, on the 18th and 19th of February. Founded in 1878, INTA is a global association of trademark owners and professionals dedicated to supporting trademarks and related intellectual property, advocating the protection of consumers, and promoting fair values in the innovative world. While its main headquarters is located in New York City, other offices are situated in Brussels, Santiago, Shanghai, Singapore and Washington D.C., and representatives sit in Geneva and New Delhi. The Association, besides uniting big corporations, gives a chance to small- and medium-sized enterprises, law firms, and non-profits, and also offers membership to government agencies, individual professors and students. This year's conference focused on 'Embracing Change' by introducing topics connected to the future of intellectual property: inter alia, promoting brands, IP reforms, challenges in the digital world, the role of artificial intelligence, new opportunities to protect IP rights, and solutions for IP outside of IP law. During the two-day event more than 10 themes were discussed

by top speakers, brand owners and specialists in the respective fields, providing a deeper overview of the arising challenges in an evolving reality. The participants had the chance to further educate themselves on these hot new topics and key ideas. Thanks to the warm welcome from the organising team, being part of this big forum was an excellent opportunity to broaden one's IP knowledge in relation to the development of technology, especially for the ELSA Delegates. This event was both interesting and inspiring, as it included sessions like 'The Role of AI', which addressed the ways AI could be used to protect trademarks, for example through conducting statistical analysis, and sessions on blockchain from a legal and practical perspective, namely how this technology could be used to strengthen IP rights and how it could be applied to IP law. There was also the possibility to debate the latest court decisions in Europe, such as Louboutin v van Haren, Prada v EUIPO and Mondelez v EUIPO. Furthermore, the breaks between the sessions created amazing opportunities for networking. Through a precisely calculated time schedule, we were given the perfect chance to personally meet and interact with speakers as well as experienced practitioners from various countries all around the world. The event definitely furthered an exchange of ideas and outside-the-box thinking.

Partners' and Externals' Perspective

Student Membership from £20

The International Bar Association (IBA) was established in 1947 and is the world's leading international organisation of legal practitioners, bar associations and law societies. Student Membership – from just £20 – will give you: • access to a vast online library of substantive legal information, including newsletters, practice-area-specific journals and magazines, webinars and the IBA's bi-monthly magazine, IBA Global Insight; • internship opportunities at our London office, including positions working with the Human Rights Institute, the Legal Policy and Research Unit and the Executive Director, with other internships available at our Hague and Washington offices; • opportunities such as the International Human Rights Law E-Learning Course, the Young Lawyers' Training Course, and the ICC Moot Court Competition; and • the opportunity to participate in cutting-edge research, writing and editing in specialised legal practice areas.

If you think your university could benefit from Group Membership please see this webpage for more information: https://tinyurl.com/IBA-Student-GroupMembership

Student Membership is only available online at www.ibanet.org/Membership/ Student-Membership.aspx. To get the most out of your membership, login to your MyIBA to access any journals or online content. If you have any questions please contact our membership team at member@int-bar.org.



Are our legal systems prepared for artificial intelligence?


Director for Academic Competitions ELSA International Team

Artificial intelligence systems are becoming increasingly common and will undoubtedly play an important role in our future. Our economy, health care systems, and private lives will be supported by and filled with decisions made by artificial intelligence.

However, with all this progress and advancement, we should not forget the legal implications that arise from allowing machines and robots to make decisions on behalf of humans. We have to ask whether our legal systems are prepared for the ever-continuing development and improvement of AI systems. Who is liable for accidents that happen due to an AI mistake? What kinds of decisions do we allow AI to make?

One of the most pressing issues is liability for the damage caused by autonomous cars. Or, simply put: who pays for the damage of a crash caused by an autonomous self-driving car?

In 2017 the European Parliament suggested applying "electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently".1 This suggestion is a long-term perspective, but it certainly shows the path we may take in the future.

We currently have to answer whether our existing legal systems are able to solve the issues that come with the implementation of autonomously operating systems.

In Germany, the UK and other European countries, the owner of a car can, to varying degrees, be held liable if their car is involved in a crash. This means that there is sufficient coverage of damages through insurance if the autonomous driving system of a car makes a mistake or malfunctions. The development of AI, however, will not stop at self-driving cars but will continue into other

1  http://www.europarl.europa.eu/doceo/document/TA-8-20170051_EN.html?redirect#title1 Nr. 59 Lit. f

aspects of our daily life. Many of the problems that will arise are not yet foreseeable, and eventually our legal systems may be unable to solve these issues fairly. The idea of applying the concept of car-owner liability to every other AI-operated machine is a captivating thought. Every owner of an AI-operated system would have to take out insurance to cover all damages that a mistake by the AI system might cause. Lawmakers will have to keep an eye on the progress AI systems make and the impact they will have on our day-to-day life. Eventually, our legal systems will have to adapt to the changes. However, there does not seem to be an immediate need to rush towards new legislation. With the current state of the art, the idea of creating electronic personalities for AI systems seems too far-fetched and unreasonable. In general, unification and the setting of guidelines for handling liability caused by AI, as suggested by the Parliament,2 is definitely something that needs to be discussed and pushed further in the future. Besides the legal implications that AI might have in the future, we also have to discuss the ethical issues that come with handing over decision-making to AI. We will have to ask where we want to draw the line and what kinds of decisions we will not allow AI to make. The idea of 'Robocop', a machine patrolling the streets and 'eliminating' people who pose a threat, once was science fiction. With the development of AI,
2  http://www.europarl.europa.eu/doceo/document/TA-8-20170051_EN.html?redirect#title1 Nr. 51.


facial recognition and robots, the idea of a 'robot police' is no longer science fiction but a possible reality. Currently, in the military and the police, whenever a shot is fired and a human is injured or even killed, the final decision is made by another human being. Whether the action was justified or not, there is always a human who can be held accountable for the killing. Once AI machines get to decide whether to fire a potentially deadly shot, the question remains: who will be held accountable for that action? Is it justifiable to take humans out of the decision-making and let machines decide over life and death? The killing of a human being by a machine that made the final decision limits the possibility of obtaining justice afterwards. The machine cannot be sent to court and tried. At the same time, it feels like a stretch to put on trial the human who decided to use the AI for the decision-making behind that action. There will always be situations where it might be necessary to kill a human being to protect others. However, do we really want to allow machines governed by programming and algorithms to make these decisions for us? Or do we want a human to have the final say? A human being with emotions, empathy, a nuanced understanding of human behaviour and knowledge of what measures have to be taken? All in all, maybe we should not be discussing who could be held accountable if a machine kills a human by mistake, but rather whether we want machines to make these decisions at all.


Creatively destructive or destructively creative?



Andriy Yakubuv

Vice President in charge of Academic Activities ELSA-Valencia 2017-2019

Since Aristotle, and through the medieval rise of rationalism, we have carried an entire heritage of thinking, reframed by Saint Thomas Aquinas: "An agent does not move except out of intention for an end". According to the contemporary scholar Nassim Taleb, this spell, the idea that we are supposed to act knowing where we are going, is, he says, a big teleological (from telos, "based on the end") fallacy.

As it happens, when speaking about artificial intelligence, it is hard to know, and actually to want, all the consequences of the progress we are making. This is why we must take advantage of good fiction.

People around the world are still talking about the series 'Years and Years', released this summer by the BBC and later by HBO. With an original screenplay by Russell T. Davies, it is so far the best projection we have of the possible disruptive impacts of AI on human rights (down to a new meaning of being trans: not transgender, but "transhumanist", an amalgamation of artificial and human intelligence), as well as on democracy and the rule of law (with extreme technological capacities, from faking news and manipulating our personal devices to a military precision so destructive that deterrence theory may no longer apply).

Above and beyond the political topics and family drama it presents, what this TV production also shows is that people do not know what they want until someone provides them with it. In the behaviour of the heroes of 'Years and Years' we can identify, in a sense, Nassim Taleb's concept of optionality: there are always options to switch to a course of action and benefit from the positive side of uncertainty, without corresponding serious harm from the negative side.

AI development is a disruptor. Taleb would say: let us become antifragile. But, as even the series suggests, we easily forget until the very last moment about our civilisational values, sustained by ethics and exercised through self-restraint. And that means bringing something from Saint Thomas Aquinas into the equation.

…In the 2030s, when things have gone totally wrong in 'Years and Years', Stephen asks his old mother Muriel: "How am I getting blamed for the entire world?". And she answers: "We can blame other people (...) and these vast sweeping tides of history like they're out of our control; like we're so helpless and tiny and small. But it's still our fault, and do you know why?"…


Think Global, Act Local


LLM Graduate, Public Law Paralegal

From smart toys to smart surveillance, artificial intelligence is quickly becoming part of children's everyday life. AI has made important advancements in the protection of children's rights, as it is successfully used to aid the inclusion of children with disabilities, to develop personalised learning and to enable quicker processing of data in health and education. However, the reconciliation of AI with human rights is still disputed, and the concerns are heightened when children are targeted. The activity, emotions and reactions of children are monitored from birth to adulthood by public authorities and private companies through automated surveillance. The advancement of accessible smart-home technology aids parents in monitoring the behaviour of children. However, vast amounts of sensitive personal data, such as family relationships, health conditions, feelings and reactions, are collected with little to no consent from the parents, who also lack the ability to control how that data is used. Automated surveillance systems have been implemented in schools to improve security or analyse class participation. However, this type of surveillance breaches international children's rights and raises ethical and regulatory concerns. Studies show that continuous surveillance can negatively impact children's mental well-being during the period in which they develop their personality and identity, and can have life-long consequences.


As they grow, children use technology in all aspects of their life: social, educational and entertainment; and, without fully understanding the long-term impact, they consent to personal data being collected in order to be able to use digital technology. Companies use this bargaining chip to collect sensitive data which might indicate sexual orientation or religious or political views. Worryingly, these characteristics can be influenced through AI. While AI algorithms create a personalised online experience, they hinder access to contradictory ideas, which reinforces biases and discrimination, often against vulnerable groups. Furthermore, algorithms can manipulate access to information and restrain free speech, with grave human rights consequences. States and private entities may, whether intentionally or not, block or censor digital content to an extent that violates democratic values and fundamental rights, such as the right to protest and freedom of expression. Bearing in mind the numerous benefits of AI in enhancing children's rights, it is important to support the advancement of, and universal access to, the technology. In doing so, governments should implement binding measures to ensure the compliance of AI with the fundamental principles of democracy, the rule of law and human rights.



According to a widely accepted viewpoint, human rights are the basic rights and freedoms that belong to every person in the world, from birth until death. The main idea is that they are based on shared values like dignity, equality and independence and serve as a foundation for all other types of rights; they apply regardless of who you are or where you are from, and they can never be taken away. Nowadays, in the era of artificial intelligence development, one of the most vital questions is whether or not the use of artificial intelligence infringes on human rights. Before puzzling this question out, we first need to understand what artificial intelligence means and how it is used, as a lot of people simply do not know exactly, or confuse it with something else.


Foremost, artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. It is used more and more in everyday life, and seems neutral only at first blush. Take, as examples, such simple situations as when a navigation system lets you avoid a traffic jam or when a person receives targeted advertising. There are clear advantages in these examples, but let us think about the ethical and legal aspects of the accumulation and analysis of personal information, such as location and private preferences. Artificial intelligence, including such areas as machine learning and deep learning, is neutral only at first glance. A closer examination reveals that it can greatly affect the interests of people in specific cases. In many

areas of life, making decisions based on mathematical calculations offers huge advantages. However, if AI begins to play too great a role in human life, which involves identifying repetitive behavioural patterns, it can turn against users and lead to injustice and to restrictions on human rights. Interference with the right to privacy and the right to equality is the first and main point of this article. The right to privacy is a fundamental human right necessary for a safe and dignified life. However, in the digital space, including when we use applications and social networks, a huge amount of personal data is collected (with or without our knowledge) that can be used to profile us and predict our behaviour. We provide information about our health, political beliefs and family life without even knowing who will use this data, for what purpose and how. Data leakage scandals are no longer shocking news, the most prominent being the case of the British analytics firm Cambridge Analytica, which collaborated with the campaign of US President Donald Trump and illegally collected the data of 87 million users to analyse the political preferences of voters. The conflict between technology and human rights is also evident in the field of facial recognition. While a powerful tool for tracing alleged terrorists, it can also become a means of controlling people. It has now become especially easy for states to follow you, invade your privacy, and restrict freedom of assembly, freedom of movement and freedom of the press. Another right that is at risk is the right to freedom of expression. A recent Council of Europe publication, Algorithms and Human Rights, notes in particular that some of the most popular mass media have filtering mechanisms to identify extremist content that calls for violence.
However, there is no information about what procedures and criteria are used to determine such content, or which videos contain “clearly illegal content”. Although the initiative to suppress the distribution of such materials is in itself commendable, the opacity of content moderation is alarming because it carries the risk of restricting the lawful exercise of freedom of speech and expression. Similar concerns were expressed regarding the automatic filtering of user content upon upload due to alleged violations of intellectual property rights, which came to the fore in the proposed EU Copyright Directive. In some cases, the use of automated technologies for disseminating information can also significantly affect the rights to freedom of expression and privacy, when bots, troll armies, targeted advertising or spam are distributed as part of an algorithm that determines the content shown to specific users. Nevertheless, AI today is an indispensable tool of modern life, the basis of many systems in different spheres. The advantages of using such technology are considerable, but what should be done to protect fundamental human rights and freedoms? At first glance, the direct participation of the state in this matter is necessary. This means that states must ensure that private companies involved in the development and implementation of AI comply with human rights standards. The creators of AI technology that can improve our lives must act in accordance with generally recognised norms and principles of international law. Decision-making processes that use algorithms should become more transparent, so that their logic becomes clear, responsibility for the use of such algorithms is established, and decisions made on the basis of automated data collection and processing can be effectively challenged. Already today there are a number of standards to build on. For example, the case law of the European Court of Human Rights clearly establishes how the rights to privacy, liberty and security must be respected. It also emphasises the obligation of states to provide effective remedies against encroachments on privacy and unlawful surveillance.
In addition, an updated Council of Europe Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, addressing the privacy threats that arise from new information and communication technologies, was adopted this year. We all need to keep in mind how important it is to maintain a balance between the use of artificial intelligence, which improves and simplifies our lives, and the fundamental rights that make our lives real and full.


Does the creative operation of text mining algorithms supported by artificial intelligence, conducted for tax reasons, fall under the right to freedom of artistic and scientific creativity?


Wiktor Gawlowicz, Tax Consultant, KPMG

According to Article 13 of the Charter of Fundamental Rights of the European Union, "The arts and scientific research shall be free of constraint. Academic freedom shall be respected." Following the doctrine of human rights, this is an inherent right of the human person. Looking from the perspective of new technologies, in particular artificial intelligence, one should question the universality of this thesis. Does the creative operation of text mining algorithms supported by artificial intelligence also fall under this right? The Action Plan on Base Erosion and Profit Shifting, published on 19 July 2013, brought us a new generation of tax instruments, the so-called IP Box regimes. They make a tax benefit conditional on conducting R&D activities and producing qualified intellectual property rights as a result, and they aim to stimulate innovative activity in enterprises. One manifestation of this is the increasing amount of work undertaken in recent years on the development of artificial intelligence, including the particularly interesting "text mining" algorithms used to analyse and combine large amounts of knowledge, available for example in scientific publications. Along with their development, they are ever more effectively used in enterprises' R&D work. Apart from tax benefits, such activity is also one of the most common grounds for applying for public aid from European regional and structural funds. Thus, the natural consequence of the European tax system will be an increasing market share for activities based on registered intellectual property rights, often to a much broader extent than would be justified by non-tax reasons. The resulting excessive registration of intellectual property rights is further supported by the development of artificial intelligence programmed to produce intellectual property eligible for tax exemptions. This will not be without significance for the right to freedom of artistic and scientific creativity. The decreasing future importance of human creativity raises the question of whether we are not experiencing an actual erosion of the right to freedom of artistic and scientific creativity. Will a tax privilege stimulating market expansion finally lead to a depreciation of the creative human factor in the whole process? In time, it may turn out that intellectual property law, which was supposed to protect creative activity, will in practice leave no room for creativity. The growing market importance of entities holding larger amounts of intellectual property may make it impossible to compete with them, in particular to the extent that their intellectual property, as a result of creative activity, could be adapted to new and different applications.
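At their simplest, the "text mining" algorithms the article refers to look for terms that recur across documents as a crude signal that the texts cover related knowledge. The following toy sketch illustrates only that basic idea; the documents and terms are invented for illustration, and real systems are far more sophisticated (stemming, embeddings, citation graphs, and so on).

```python
# Toy sketch of a text-mining step: find terms shared by two documents.
# This is an illustration only, not a description of any production system.
from collections import Counter

def term_counts(text: str) -> Counter:
    """Lower-case the text and count word occurrences."""
    return Counter(text.lower().split())

doc_a = "artificial intelligence supports research and development"
doc_b = "research on artificial intelligence drives innovation"

# Multiset intersection: the terms both documents contain.
shared = term_counts(doc_a) & term_counts(doc_b)
print(sorted(shared))  # ['artificial', 'intelligence', 'research']
```

A system that applies this kind of overlap measure across thousands of scientific publications can surface combinations of existing knowledge, which is precisely what makes its output interesting from an IP Box perspective.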



Vice President in charge of Marketing, ELSA the Netherlands

We are living in the 21st century, where technology is entering our lives, our homes and our daily routines. Nevertheless, an ordinary citizen does not usually understand how deeply technology is integrated into the modern world. Technology is not only influencing private individuals; it is also entering the public sector. One core example of such a development is blockchain, a brand-new trust-based system used in both the private and public sectors and seen by them as an alternative to traditional systems of societal relations. Blockchain remains one of the most legally controversial topics in the modern community.1 At present, there are no official legal means to control the blockchain system, as its origin and legal status cannot be explained by legal scholars.2 However, as blockchain is a widely used system, it cannot remain unregulated. This paper will elaborate on what blockchain is as an issue, on what states have already done towards its regulation, and on the benefits blockchain can bring to states and private entities. Blockchain is a new way of dealing with ordinary life issues, used by both individuals and governments on a large scale. Previously, blockchain technology was used only among private parties and was designed as an alternative way to create trust-based relations between them without governmental control. The core feature of blockchain was precisely that it was not supervised by
1  Maltaverne Bertrand, 'What Can Blockchain Do for Public Procurement?' (Publicspendforumnet, 28 August 2017) <https://www.publicspendforum.net/blogs/bertrand-maltaverne/2017/08/28/blockchain-technology-public-procurement> accessed 11 December 2018
2  Gep insight drives innovation, 'Blockchain: What to expect, now and later: Explore its potential uses in procurement and supply chain' (Gepcom, 2017) <https://www.gep.com/blockchain-procurement-supply-chain> accessed 11 December 2018

state officials.3 Moreover, the new technology became very popular due to its simplicity. For many years, while blockchain was still in the process of developing, almost none of the states considered it an important issue, and thus it was not discussed in either the legislative or the executive branches of government. However, as the success and popularity of blockchain grew, states became concerned about the topic and about a regulatory framework for the system.4 First of all, no state's legal system had laws that applied to blockchain. The issue was so different from all the problems already regulated by law that it was impossible to tackle it legally. Legal scholars could identify blockchain neither with public nor with private property. At the same time, people became more and more involved in this innovative way of making profit.5 A considerable amount of money was invested by the private sector. Moreover, blockchain platforms started being used as a “legal” possibility to

3  Lakhani Karim R and Iansiti Marco, 'The Truth About Blockchain' [2017] (January–February 2017) Harvard Business Review <https://hbr.org/2017/01/the-truth-about-blockchain> accessed 11 December 2018
4  Accenture, 'How Blockchain can bring Greater Value to Procure-to-Pay Processes' (Accenturecom, September 2016) <https://www.accenture.com/t20170103T200504Z__w__/us-en/_acnmedia/PDF-37/Accenture-How-Blockchain-Can-Bring-Greater-Value-Procure-to-Pay.pdf> accessed 11 December 2018
5  Mark White, Jason Killmeyer, Bruce Chew, 'Will blockchain transform the public sector? Blockchain basics for government' (Deloitte Insights, September 11, 2017) <https://www2.deloitte.com/insights/us/en/industry/public-sector/understanding-basics-of-blockchain-in-government.html> accessed 11 December 2018

avoid taxes.6 The anonymity and confidentiality features of blockchain became a perfect way for criminals to perform various fraudulent activities without being punished. Consequently, almost all states around the world concluded that there was an urgent need to create legal frameworks for blockchain. Secondly, at the national level, several approaches were proposed by governments and legal scholars to regulate blockchain and to make the system more comprehensible from a legal point of view. One of them was the attempt to use blockchain in the public sphere and thus to gain better control and understanding of it. By doing so, governments benefited from a core feature of blockchain: transparency.7 Consequently, the level of trust between the private and public sectors increased, and the former became more encouraged to invest in the state's economy. Furthermore, the level of illicit financial flows decreased, because contractual parties were “forced” by the trust-based nature of the new system to perform their obligations in good faith. Therefore, blockchain became not just a target for regulation, but an innovative approach that states started using in their everyday business. At the international level, however, the case of blockchain is much more complex, as there is no legal framework for it, apart from some theories developed by legal scholars.8 International law does not have proper trias politica branches, so it is difficult to enact regulations on such a specific matter as blockchain. Moreover, not all states are technologically developed enough, and some are not interested in examining this topic. For now, there are no international treaties on blockchain, nor can their formation be predicted.9 There are attempts to bring blockchain onto the international arena, mostly by the European Union.
Several EU member states are already using blockchain, and the EU
6  National league of cities, 'Could Blockchain Technology Innovate Cities and Restore Public Trust? New National League of Cities Research Explores the Future of Blockchain-Powered Cities' (CISION PR Newswire, Jun 06, 2018) <https://www.prnewswire.com/news-releases/could-blockchain-technology-innovate-cities-and-restore-public-trust-300661216.html> accessed 11 December 2018
7  Lakhani Karim R and Iansiti Marco, 'The Truth About Blockchain' [2017] (January–February 2017) Harvard Business Review <https://hbr.org/2017/01/the-truth-about-blockchain> accessed 11 December 2018
8  Patric Gabrielle, Bana Anurag, 'Rule of Law Versus Rule of Code: A Blockchain-Driven Legal World' [2017] IBA Legal Policy & Research Unit.
9  Lakhani Karim R and Iansiti Marco, 'The Truth About Blockchain' [2017] (January–February 2017) Harvard Business Review <https://hbr.org/2017/01/the-truth-about-blockchain> accessed 18 December 2018


itself has shown interest in this new technology, which for the EU could help eliminate corruption in spheres involving data transmission. However, the Court of Justice of the EU has made clear that cryptocurrencies and investments in them do not fall within the scope of EU law. It is therefore clear that the blockchain issue is not yet widely embraced by states and international organisations. In conclusion, it must be emphasised that blockchain is currently addressed at the national level. It can bring many advantages to states that manage to regulate it, and many developed countries have been happy to introduce the technology in various parts of their public sectors. Nevertheless, at the international level, almost no attempts have been made to use blockchain worldwide. Not all states are interested in blockchain, whether out of fear of hacking activities or as a result of a lack of domestic technological development. However, all states see and accept the potential benefits that blockchain can bring them. It is therefore necessary to wait and observe worldwide trends before drawing strict conclusions about the “death” of blockchain in the international arena.

Bibliography

Maltaverne Bertrand, 'What Can Blockchain Do For Public Procurement?' (Publicspendforumnet, 28 August 2017) <https://www.publicspendforum.net/blogs/bertrand-maltaverne/2017/08/28/blockchain-technology-public-procurement> accessed 11 December 2018

Gep insight drives innovation, 'Blockchain: What to expect, now and later: Explore its potential uses in procurement and supply chain' (Gepcom, 2017) <https://www.gep.com/blockchain-procurement-supply-chain> accessed 11 December 2018

Lakhani Karim R and Iansiti Marco, 'The Truth About Blockchain' [2017] (January–February 2017) Harvard Business Review <https://hbr.org/2017/01/the-truth-about-blockchain> accessed 11 December 2018

Accenture, 'How Blockchain can bring Greater Value to Procure-to-Pay Processes' (Accenturecom, September 2016) <https://www.accenture.com/t20170103T200504Z__w__/us-en/_acnmedia/PDF-37/Accenture-How-Blockchain-Can-Bring-Greater-Value-Procure-to-Pay.pdf> accessed 11 December 2018

Mark White, Jason Killmeyer, Bruce Chew, 'Will blockchain transform the public sector? Blockchain basics for government' (Deloitte Insights, September 11, 2017) <https://www2.deloitte.com/insights/us/en/industry/public-sector/understanding-basics-of-blockchain-in-government.html> accessed 11 December 2018

National league of cities, 'Could Blockchain Technology Innovate Cities and Restore Public Trust? New National League of Cities Research Explores the Future of Blockchain-Powered Cities' (CISION PR Newswire, Jun 06, 2018) <https://www.prnewswire.com/news-releases/could-blockchain-technology-innovate-cities-and-restore-public-trust-300661216.html> accessed 11 December 2018

Patric Gabrielle, Bana Anurag, 'Rule of Law Versus Rule of Code: A Blockchain-Driven Legal World' [2017] IBA Legal Policy & Research Unit.


ARTIFICIAL INTELLIGENCE IN THE JUDICIAL SYSTEM: ARGUMENTS FOR AND AGAINST

Zhemerov Vladislav Vladimirovich, Student, Russian State University of Justice

This article is devoted to the idea of introducing artificial intelligence into the judicial system. The arguments of supporters and opponents of this judicial reform are analysed. Key words: artificial intelligence, judicial system, digital judge, law. The idea of introducing artificial intelligence into judicial activity finds approval not only among the public but also among judges.1 Considerable money has already been allocated for the technology's implementation. Proponents of judicial artificial intelligence put forward the following arguments in support of their position: artificial intelligence will optimise proceedings, reduce the cost of maintaining judicial bodies and, most importantly, exclude the human factor from the decision in a particular case. Indeed, artificial intelligence, which works on the basis of computational processes, is not subject to any emotional influence from the participants in the process. It fully complies with the principle of impartiality stated in Article 9 of the Code of Judicial Ethics, i.e. the absence of "preferences, prejudices or biases".2 However, despite all the positive arguments of the
1  Sentences in court will be made by robots // Komsomolskaya Pravda. URL: https://www.spb.kp.ru/daily/26816/3853172/ (accessed: 17.03.2019).
2  Code of Judicial Ethics (approved by the VIII all-Russian Congress of judges 19.12.2012) // SPS ConsultantPlus. URL: http://www.consultant.ru/document/cons_doc_LAW_139928/ (accessed: 17.03.2019).

defenders of this innovation, there is an ethical question about the application of artificial intelligence in proceedings. To what extent will it be able to reliably analyse and understand the information obtained during the trial? It is important to understand that the essence of artificial intelligence is an automated system in the form of code with particular features. It is worth noting that artificial intelligence is inherently limited to the range of possible solutions provided by that code.3 Artificial intelligence cannot consider and evaluate a situation from the human perspective, because a person takes into account not only rational but also irrational factors in another person's behaviour in a given situation. Artificial intelligence is deprived of the ability to understand the motives of human behaviour. For example, if a hungry child steals some bread, such behaviour will be regarded by an artificial intelligence system as socially dangerous, and as behaviour that
3  Akhmedzhanova R.R.: Can artificial intelligence replace a human judge? // Jurisprudence 2.0: a new perspective on law. — M.: PFUR, 2017. — S. 461-467.

encroaches on the significant values of society. When choosing a punishment, artificial intelligence will not be able to make the right decision taking into account all the factors that influenced the commission of the act, because it has no understanding of the principle of humanism proclaimed by Article 7 of the Criminal Code of the Russian Federation. Another example is the accidental, innocent infliction of harm, for which a person should not be convicted under criminal law. The computer cannot fully assess all the factors that could have influenced the decision of the person who committed the criminal act. Supporters of the introduction of artificial intelligence in judicial activity refute this thesis. They believe that the task of finding all possible outcomes of the trial and analysing the causes of human behaviour is equivalent to the action of a judicial body. But this substitutes and equates the decision of a judge with the decision of a system created by a human with the code put into it. In other words, it is impossible to assert the identity of the decisions of a human and a machine, since electronic computing cannot fully reflect human thinking as a mental process: the machine is unaware of subjective reality and does not operate with the meanings of things and events.4 Artificial intelligence cannot fully take into account the internal state of a person or understand their mental perception of the actions committed. As an independent subject, artificial intelligence cannot "independently consider and disassemble court cases, as it is not able to take into account all the nuances, as well as ethical norms and the human factor."5 The normative basis upon which artificial intelligence decides is constantly being modified through amendments and clarifications made by humans, dictated by ever-changing social relations.
One should also not forget the vulnerability of artificial intelligence: the impact of newly added code, its maintenance, updates to the operating system, as well as the possibility of influence from the personnel servicing the technology
4  Ableyev S.R.: Simulation of consciousness and artificial intelligence: the limits of possibilities // Bulletin of economic security. 2015. No. 3. URL: https://cyberleninka.ru/article/n/modelirovanie-soznaniya-i-iskusstvennyy-intellekt-predely-vozmozhnostey (date accessed: 17.03.2019).
5  The judge, the COP predicted the future of robots in law // Lenta.ru. URL: https://lenta.ru/news/2017/05/15/robojudge/ (accessed: 17.03.2019).


who can directly control the digital mechanism. Another argument of the supporters of artificial intelligence is the cost-effectiveness of the technology, which allows legal proceedings to be optimised. An example is the launch of a "robot lawyer" by Sberbank PJSC, which increased the efficiency of its legal service while reducing the number of lawyers to an optimal level.6 However, in February 2019, the head of Sberbank, German Gref, accused artificial intelligence of a "high cost of operation, resulting in large companies suffering losses of billions of dollars".7 He also pointed to the danger of "a small inaccuracy in the algorithm", which leads to a huge number of errors. If such a situation were to occur in the judicial system, it is worth considering the significance of even one such mistake for the future life of a convicted person. Thus, artificial intelligence will not be able to completely replace the judge. Consequently, it should be seen not as a substitute for a person, but as a tool, an assistant to the judge in the administration of justice.8 It is impossible to shift the responsibility for the future
6  Morkhat P. M.: The intellectual-legal paradoxes of the creation by artificial intelligence of artistic works and inventions // Intellectual property. Copyright and related rights. — 2018. — No. 11. — S. 5-15.
7  Sberbank lost billions of rubles due to artificial intelligence // MB Finance. URL: https://mbfinance.ru/finansovye-novosti/sberbank-poteryal-milliardy-rublej-iz-za-iskusstvennogo-intellekta/ (accessed: 17.03.2019).
8  Ashley K.D. Artificial Intelligence and Legal Analytics: New Tools for Law Practice in the Digital Age. Cambridge: Cambridge University Press, 2017.

fate of a particular person onto computer technology. Instead, it is necessary to create a system that would eliminate temporal and bureaucratic difficulties in the work not only of the judge, but of the judiciary as a whole. It is also worth analysing foreign experience with such technologies in judicial activities. For example, in China, the Netherlands, Australia, the UAE and the UK, similar technologies are already operating or being tested, but "digital judges" consider only certain categories of cases, such as family disputes, division of property and intellectual property.9 In Russia, there was also an attempt to pit a robot lawyer against a human in a court debate at the St. Petersburg International Legal Forum in 2018.10 After the courtroom duel, Roman Bevzenko was declared the winner, beating the robot by a large margin on points: 243 to 178. However, the judges noted the high level of normativity and specificity of the arguments presented by the robot. This demonstrates the great potential of artificial intelligence, given proper development, in the future. Still, science fiction's horrific visions of an imminent war between humans and robots are not borne out in practice.
9  Ramalho A. Will Robots Rule the (Artistic) World? A Proposed Model for the Legal Status of Creations by Artificial Intelligence Systems // <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2987757>. - 20 p. - P. 4-5.
10  Legal battle: the robot from MegaFon vs Roman Bevzenko // SPS Right.ru. URL: https://pravo.ru/lf/story/202675/ (accessed: 17.03.2019).



Graduate of the Law Faculty, Sofia University

The first thing I learned at law school was that laws continuously develop as a reflection of the constant change of life. Therefore, to keep up with the challenges of the legal industry, a jurist needs to accumulate new information every day. While the definition of “information” as facts and details about someone or something has remained stable over time, the means of receiving it have changed greatly in recent years, as the Internet has turned into a major source of knowledge. The enormous amount of information the Internet provides gives us a feeling of freedom. But are we really free, or are we unconsciously steered by Artificial Intelligence (AI) in what to know and what not to know? Pondering AI and its impact on human rights, I researched the matter online. Unfortunately, the first, and supposedly most reliable, results in the web browser were either paid advertisements or websites

offering AI services. As I could not find academic information online, I searched for it elsewhere: I researched in books and talked to specialists in the area of AI, who explained to me how it impacts web search results and, I would add, human rights. There are a few factors that determine the first-page web results one gets out of the thousands of other sources available on the Internet about a specific subject. One of the key determining factors is HTTP cookies, invented by Lou Montulli in 1994. A “cookie” is a small piece of data sent from a website and stored on the user's computer by the web browser. Cookies provide information to websites about the user's activity, which makes them a powerful tool in the hands of corporate advertisers today. The use of cookies by websites is subject to informed consent given by the user before the first visit to the website.
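The mechanics are simple enough to show concretely. The sketch below uses Python's standard-library http.cookies module to illustrate what a cookie looks like on the wire; the cookie name and value are invented for illustration and do not come from any real website.

```python
# Minimal illustration of how an HTTP cookie is set and read back,
# using Python's standard-library http.cookies module.
# "session_id" and "abc123" are made-up example values.
from http.cookies import SimpleCookie

# What a server might send in a "Set-Cookie" response header:
server_cookie = SimpleCookie()
server_cookie["session_id"] = "abc123"
server_cookie["session_id"]["path"] = "/"
header_line = server_cookie.output(header="Set-Cookie:")
print(header_line)

# What the browser later stores and sends back with each request,
# which is how a site can recognise a returning user across visits:
browser_cookie = SimpleCookie()
browser_cookie.load("session_id=abc123")
print(browser_cookie["session_id"].value)  # abc123
```

The privacy question the article raises flows directly from this second half: once the browser returns the same identifier on every visit, the site can link all of a user's activity into a single profile.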

However, it is sometimes questionable how informed that consent is, as some websites intentionally provide extra-long terms and conditions, assuming that users will not read them end to end just to visit a particular website. Consequently, most of the time the “agree” button is hit without much meaning attached to it. Moreover, another mechanism that guides users through their online experience is Search Engine Optimisation (SEO). It allows web pages to appear among the first results of a web search if they meet specific requirements, such as keywords and links. Most searches return thousands to millions of sources, while the user usually gets to know less than 1% of them, those appearing on the first page. The idea of SEO is to provide personalised internet browsing to each user. But there is much more behind the noble idea of providing good services to the user: it enables ever more accurate advertisement targeting by corporations. While searching online, users are not able to trace the process run by the SEO mechanisms that produces exactly those specific results. Therefore, if a user relies solely on information found online, his opinion is manipulated for the purposes of marketing and advertising. Finally, how does this process impact human rights? The first step in analysing a human rights issue is consulting the European Convention on Human Rights (ECHR), which provides detailed definitions to safeguard human rights. Article 10 of the ECHR describes

the freedom of expression, which includes the “freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers.” The ECHR imposes an obligation upon the State not to interfere in the flow of information towards citizens. This obligation could be viewed from a wider perspective as assuring that no one at all interferes with the right of users to freely receive information. Such interference is exercised by corporations that have the resources and the power to build their websites to suit SEO mechanisms. In addition, they use the information about users, gathered and stored by web browsers, to advertise more effectively. This form of hidden manipulation limits and infringes the freedom of the user to be fully informed and to form an opinion individually. In conclusion, as legally engaged minds we should consider the seriousness and depth of this issue, which will only grow in the near future. Law reflects the constant change of the times, and we should take on the responsibility of living in a fast-changing world where laws must be adapted quickly to provide cybersecurity and protection of human rights. P.S. A challenge for the reader: work in a team with a colleague or a friend and search online for “Artificial Intelligence”. Compare the results and consider what your next step will be in developing the modern legal framework of Artificial Intelligence.




LLM Public International Law Candidate at QMUL


AI in Warfare

Artificial intelligence (AI) is both a promising technological achievement and a challenge for the law. Perhaps more than any other area, AI demands the inclusion of sufficient legal safeguards to prevent greater harm in the future. Moreover, these safeguards need to be formed carefully in order to capture any future technological development that might otherwise slip through an unexpected loophole, which, in the case of AI, can reasonably be expected to have terrible consequences. While the presence of AI in banking systems, self-driving cars and other areas of life is interesting and well worth researching, this article takes a more specialised approach and focuses on the use of AI in warfare. It begins with an introduction to the use of AI for war purposes and continues with an analysis of its impact on human rights, democracy and the rule of law.

AI seems to be an attractive tool for war purposes. Autonomous weapons are systems that rely, to some degree, on AI. Klare defines them as lethal devices capable of analysing their surroundings and identifying potential enemy targets. The key element of autonomous weapons lies in the second part of this definition: their ability to independently choose to attack a target they have identified. An example of an autonomous weapon already in use is the Sea Hunter, launched by the United States (US) Navy and the Defense Advanced Research Projects Agency (DARPA) in 2016. The Sea Hunter is a trimaran with zero crew. It serves as a test case that, if proven effective, could lead to more such Anti-Submarine Warfare Continuous Trail Unmanned Vehicles (ACTUV) being deployed worldwide. Algorithms may enable some ACTUV to



attack submarines independently. Similar efforts have been made by the US Air Force and Army, as well as in Russia, China and other countries.1 Another example of AI use in warfare is the Joint Enterprise Defense Infrastructure (JEDI) project. The idea behind JEDI is to connect military data in a cloud platform and use machine learning to analyse it, which would ultimately allow the Pentagon to weaponise artificial intelligence (also referred to as algorithmic warfare). The main use of artificial intelligence in warfare is, however, not aimed at killing; indeed, no such advanced technology is needed for that purpose. What artificial intelligence should bring is easier identification of targets.2 The project will be carried out by a single private company, which will be awarded $10 billion over 10 years.3

Human Rights

According to a report by the International Committee of the Red Cross (ICRC), the full potential of AI is not yet known and its implications are not yet fully understood. However, this does not change the demand for all new technology to be in compliance with international humanitarian law, as the law applicable to armed conflict. When autonomous weapons are used within a law enforcement context, international human rights law applies.4 The dangers of AI use in warfare have been stressed by the UN Secretary-General António Guterres, who called for a ban on the use, and a restriction on the development, of lethal autonomous weapons.5 Concerns have also been expressed by Human Rights Watch in its "Campaign to Stop Killer Robots".6 A step towards the protection of human rights in relation to AI systems has been made with the Toronto Declaration.7 In 2019, the European Commission published the Ethics Guidelines for Trustworthy AI.8 However, as pointed out by Berthet,9 most initiatives aimed at governing AI approach it from the perspective of ethics. While a possible concern is not to limit technological development with the law, this approach is also less effective and lacks the authority that the law brings.

1  Michael T. Klare, 'Autonomous Weapons Systems and the Laws of War' (Arms Control Association, March 2019) <https://www.armscontrol.org/act/2019-03/features/autonomous-weapons-systems-laws-war> accessed 6 August 2019.
2  Ben Tarnoff, 'Weaponised AI is coming. Are algorithmic forever wars our future?' (The Guardian, October 2018) <https://www.theguardian.com/commentisfree/2018/oct/11/war-jedi-algorithmic-warfare-us-military> accessed 6 August 2019.
3  'An Update on the DOD's JEDI Project' (AgileIT, March 2019) <https://www.agileit.com/news/update-dods-jedi-project/> accessed 6 August 2019.
4  Christof Heyns, 'Human Rights and the Use of Autonomous Weapons Systems (AWS) during Domestic Law Enforcement' (2016) 38 Hum Rts Q 350.
5  'Autonomous weapons that kill must be banned, insists UN chief' (UN News, March 2019) <https://news.un.org/en/story/2019/03/1035381> accessed 6 August 2019.
6  See the 'Campaign to Stop Killer Robots' website: <https://www.stopkillerrobots.org> accessed 6 August 2019.
7  'The Toronto Declaration: Protecting the rights to equality and non-discrimination in machine learning systems' (AccessNow, May 2018).
8  See: <https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai>.
9  Alison Berthet, 'Why do emerging AI guidelines emphasize "ethics" over human rights?' (OpenGlobalRights, July 2019) <https://www.openglobalrights.org/why-do-emerging-ai-guidelines-emphasize-ethics-over-human-rights/> accessed 6 August 2019.


A Challenge for Democracy

AI in war challenges states from two different angles. Firstly, states, as the makers of both domestic and international law, are already in a race of efficient law-making with private-sector actors, which transnational law has recognised as possible law-makers. Will AI turn out to be an area that is better governed by the market? Given the threats to human rights and the dangers that come with AI discussed above, allowing AI laws to be created by private actors would arguably breach the state's due diligence obligation to ensure compliance with international law and to respond to breaches with appropriate sanctions. The second challenge is how the law should ensure an effective system of oversight and control over AI. There are two sides to this issue. The first is the actual oversight of how these systems function. Should the law demand human oversight of AI systems, and to what extent is that possible? Should AI be used as an oversight

mechanism?10 The second is the legal sanctions that would follow a breach of the law governing AI. If the system is independent in its decision-making, at whom would the sanctions be aimed? The engineer who created the AI system, the private company that deployed it (since states often rely on private companies for the creation and use of these types of weapons), or the state itself, because it allowed the use of AI systems in war or failed to effectively ban them? The governance of AI is an issue in itself, as it challenges core democratic values and freedoms. How can the state effectively ban the creation of lethal autonomous weapons without limiting the individual's right to privacy, or even freedom of thought? A simple example that shows why this could be a problem is that of an individual with sufficient technological knowledge and resources to build a lethal autonomous weapon, or another AI-based system, in the privacy of their own home.

10  See: Amitai Etzioni and Oren Etzioni, 'Keeping AI Legal' (2016) 19 Vand J Ent & Tech L 133.



Conclusion

AI development has a strong impact on human rights, democracy and the law. Creating an effective system of governance represents perhaps one of the biggest challenges for states today. The consequences of a lack of governance, or of unwisely chosen legal terminology, could quickly lead to great and irreversible harm for humanity. Even a quick overview of the issue shows that creating a legal framework on AI will demand cooperation between states, experts and the international community as a whole. The nature of AI does not allow for divergent state regulations, as a looser system of laws in one state could easily have a worldwide (and most likely negative) impact. The question is whether states will be willing to be as blind as Lady Justice to their differences, or whether AI will be yet another attempt at peace and unity led to failure by disagreements.





66th Edition of the Synergy Magazine  

