
The European-Security and Defence Union Issue 39


“Technology does not make war more clinical – it makes it more deadly. Lethal autonomous weapons once developed will permit armed conflicts to be fought at scales greater than ever.”

Mankind has struggled to define moral values throughout history. If we cannot even agree on what makes a moral human, how could we design moral robots? Artificial intelligence researchers and ethicists need to formulate ethical values as a basis for qualified parameters, and engineers need to collect enough data on explicit ethical measures to train artificial intelligence algorithms appropriately. A debate has to be held on developing trusted autonomy in future systems and on how far to go in allowing fully autonomous weapons and platforms:

1) Should robots be regarded as moral machines or moral agents, with responsibility delegated to them directly rather than to their human designers or minders?

2) How would we design a robot to know the difference between what is legal and what is right? And how would we even begin to write down those rules ahead of time, without a human to interpret them on the battlefield?

3) Does international humanitarian law imply that humans must make every individual life-or-death decision?

4) Can we program robots with something similar to the rules of war of the Geneva Conventions, prohibiting, for example, the deliberate killing of civilians?


If the international community does not take steps to regulate the critical functions of LAWS, regulation will continue to lag behind the rapid technological advances in robotics, artificial intelligence and information technology. Countries with vested interests in the development of LAWS, such as the US, the UK, Israel, China and Russia, have shown little interest in establishing binding regulations. Weapon development should meet internationally accepted standards of ethics, attenuating an individual soldier’s ability to misuse a weapon for an immoral act.

Technology does not make war more clinical – it makes it more deadly. Lethal autonomous weapons, once developed, will permit armed conflicts to be fought at scales greater than ever, and at time scales faster than humans can comprehend. Nothing about technology or robots alters the fact that war is a human endeavour, with decidedly deadly consequences for troops and civilians once the forces of war are unleashed. A war between robots is no longer an illusion in war planning; it will become a reality in the near future, and some robots are already on the battlefield. Pandora’s box is already open, and it will be hard to close, if that is possible at all.

A significant ethical and legal dilemma emerges as a result. The concept of roboethics (also known as machine ethics) raises fundamental ethical reflection related to practical issues and moral dilemmas. Roboethics will become increasingly important as we enter an era in which artificial general intelligence (AGI) becomes an integral part of robots. The objective measure for ethics lies in how well an autonomous system performs a task compared with a human carrying out the same act. A realistic comparison between the human and the machine is therefore necessary.

Human–machine interaction is central to the juridical and ethical question of whether fully autonomous weapons are capable of abiding by the principles of international humanitarian law. Artificial intelligence developers are representatives of future humanity. But autonomous weapon systems create challenges beyond compliance with humanitarian law. Most importantly, their development and use could create military competition and cause strategic instability. We should be worried about the widening gap between the knowledge and the morality of mankind. As the world is past the point of considering whether robots should be used in war, the goal is to examine how autonomous systems can be used ethically. There is a high probability that the result will be a relationship of man and machine living and working together collaboratively.

Can robots be moral? With steady advances in computing and artificial intelligence, future systems will be capable of acting with increasing autonomy and of replicating the performance of humans in many situations. So, should we consider machines as humans, animals or inanimate objects? One question in particular demands our attention: should robots be regarded as moral machines or moral agents, with responsibility delegated to them directly rather than to their human designers or minders?


Israel Rafalovich is a journalist and analyst based in Brussels. He covers the European institutions and writes a weekly column on international relations.

