
Robot Tortfeasors and the Law:
BCLI'S ARTIFICIAL INTELLIGENCE AND CIVIL LIABILITY PROJECT
What happens when a robot commits a tort?
The British Columbia Law Institute and an interdisciplinary committee of volunteer experts are looking for an appropriate answer via the Artificial Intelligence and Civil Liability Project. We are working toward recommendations on how the rules of tort law should be adapted to reach just and fair solutions when artificial intelligence (“AI”) causes harm to persons or property.
Our Project Committee brings together the expertise in law, computer science, engineering, and medicine that the project requires. Supported by BCLI staff, the committee has been at work since late 2021.
Basic Questions We Seek to Answer in the Project
• Who is, or should be, legally responsible for decisions made by autonomous intelligent machines that lead to harm?
• On what basis?
• In what circumstances?
Tort law developed as part of the common law to provide redress to victims of wrongful human conduct and deter others from engaging in it.
Intent, fault, reasonable care, and foreseeability of harm are all central tort concepts, and each was developed with human actors in mind. When damage or loss results from nonhuman conduct or decision-making, dilemmas arise in trying to apply those rules.
Applications
Artificial intelligence, or AI, is software that allows automated systems to simulate abilities associated with human intelligence and perception. Examples of AI include natural language processing, speech recognition, computer vision, and machine translation.
AI pilots self-driving vehicles, powers digital voice assistants like Alexa and Siri, and lies behind interactive chatbots like ChatGPT that answer questions and carry on conversations.
• Can a software system be said to have “intended” the consequences of its decisions?
• How can we know whether an AI system could have “foreseen” that its output would result in collateral damage to someone or someone’s property?
• Should the designers of the system have foreseen the risk in question?
• Should the user of the system have foreseen it?
As automation expands, those questions will gain increasing importance.
AI is proving increasingly valuable in medical diagnosis. It is also being used to design new drugs, forecast the weather, and power robotics, among many other uses, and it has numerous legal and business applications.
Human programmers supply the fundamental objectives of AI systems, but many of these systems are designed to operate autonomously. The systems select the solutions and actions that have the greatest probability of achieving the objectives embedded in their programming.
Results
AI is bringing great benefits, especially in fields like medicine, pharmacology, predictive data analysis, and robotics. AI can deal with huge amounts of data in timeframes that are far beyond human abilities.
Risks
AI also brings new sources of risk. The autonomy that makes AI systems so useful in arriving at innovative solutions has a downside: it makes them less predictable than more conventional software. They have been called “unpredictable by design.”
In pursuing their programmed objectives, AI systems may make decisions or take actions with harmful effects, because they lack the knowledge of the outside world that a human being gains through lived experience.
• In one publicized example, a bored child asked a popular domestic digital voice assistant to “find a challenge to do.” The voice assistant told the child to plug a phone charger halfway into a wall outlet and touch a penny to the exposed prongs. The assistant performed the task it was assigned, but lacked the context to know that the challenge it had selected was dangerous.
• Someone who has suffered injury or loss through the operation of AI can face nearly insuperable challenges in proving who was at fault. Many different parties are usually involved in designing, programming, developing, training, testing, and deploying an artificial intelligence system. Even if they could all be identified, it might not be possible to prove what went wrong.
• The expression “black box” is often used in connection with artificial intelligence. It points to the fact that autonomous AI systems have limited explainability. Even the original programmers of an AI system may be unable to explain or reconstruct how it reached a specific output. That is especially true of AI systems that operate through machine learning, because they are not limited to executing a fully pre-encoded program. Instead, they make decisions, predictions, or recommendations on the basis of inferences drawn from data that is either supplied to them or gathered by sensors.
• There is a push for “explainable AI,” but for now and the foreseeable future, our ability to build systems that do remarkable things has outstripped our ability to determine how they accomplish them.
All of these factors mean that someone harmed by AI faces higher barriers to redress in the civil justice system than the victim of a human tortfeasor. Classic tort doctrines need to be adjusted to remove those barriers, and the need will only grow as the world becomes increasingly automated.
In late spring 2023, BCLI will issue a consultation paper with tentative recommendations on Artificial Intelligence and Civil Liability, and the public will have the opportunity to comment. The responses BCLI receives will feed into the final recommendations in a report to be issued later this year.
The consultation paper will be available, along with all our other publications, on the BCLI website at www.bcli.org.
G. Blue, KC, is a Senior Staff Lawyer for the British Columbia Law Institute.