
Victor Massaro
5/11/09
REL 394
 
 
 


Robotics and the Morality of their Use in War


“All humans are endowed with a moral faculty – a capacity that enables each individual to unconsciously and automatically evaluate a limitless variety of actions” (p. 5, Arkin 2). If humans are removed from the battlefield, does this moral faculty disappear as well? Morality is a crucial aspect of both the justification and the conduct of war. According to Just War theorists, there are certain criteria that must be met in the justification and conduct of war. The use of robotics seems to conflict with, or at the very least bend, the laws of Just War theory. The present-day and future uses of robotic warfare raise the question of whether morality is a necessity in war and what ultimate effect its absence has on humans. Robots possess no emotional side, while humans are highly emotional. The question that comes to light is whether or not the use of these robots, lethal or non-lethal, is ethical if their use desensitizes people to war and ultimately makes war an easier choice to carry out. This paper will examine each of these aspects in order to bring to light the current misuses of this new technology as well as to provide a hopeful and beneficial outlook on its potential. Nonetheless, current robotic warfare desensitizes humans to the horror of war and lacks the accountability and ethicality required by traditions such as Just War theory.

 


The Laws of War and Rules of Engagement are ingrained in the human mindset in times of war. However, there is a need to transmit these concepts into autonomous robotics. The Laws of War regulate the conduct of armed forces, while the Rules of Engagement focus mainly on the initiation of combat engagement. Combine these concepts with those of Just War and there is a very clear picture of what is generally accepted on the global battlefield. Just War has two aspects, Jus ad Bellum and Jus in Bello. The former limits the initiation of conflict and raises the idea of last resort, meaning that military force must be the last possible resort. Jus in Bello defines the ethical conduct of warfare. It focuses on the protection of non-combatants, the responsibility associated with actions, and the concept of proportionality, whereby the “acts of war should not yield damage disproportionate to the ends that justify their use” (p. 2, Arkin 1). Robotic warfare needs to be in accordance with these Jus in Bello concepts, especially in fully autonomous technologies.

 


The first aspect of robotic use that needs to be addressed is whether or not their use is ethical at all in accordance with Just War thought. The major issue is that this technology is becoming readily available before the consequences or necessary restrictions of its use are fully understood. This is similar to the use of the nuclear bomb in the sense that it was used before its repercussions were fully understood, and there was no turning back. The same may come of robots. In complying with Just War theory, however, the most complicated aspect is that of combatants versus non-combatants. How do you train, or for that matter program, an autonomous robot to distinguish between a combatant and a non-combatant? This is an issue that has not yet been solved even though robotic use is becoming more and more prevalent. For instance, in 1988 during a patrol mission, the U.S.S. Vincennes, which featured a new autonomous Aegis radar system, shot down an Iranian passenger jet. The system had registered the plane as an Iranian F-14 fighter jet, which made it an “assumed enemy”. “Though the hard data were telling the human crew that the plane wasn’t a fighter jet, they trusted the computer more. Aegis was in semi-automatic mode, giving it the least amount of autonomy, but not one of the 18 sailors and officers in the command crew challenged the computer’s wisdom. They authorized it to fire”. Two hundred and ninety passengers and crew died, including sixty-six children (Singer, The New Battlefield). It is instances like these that have raised and continue to raise the need for a perfected system before the use of these robots is fully justified. These situations also raise the question: who is responsible or accountable for actions such as these?
 


Accountability is a large part of Jus in Bello. It is an easy concept to address with humans on the battlefield: if a soldier fires his weapon and kills an innocent, he and/or his commander is held responsible for doing so. However, there is a much larger gray area when it comes to robotics, especially fully autonomous technologies. Who is ultimately responsible? Is it the programmer of the robot, the overseer of the robot’s actions, the general in charge of the operation, or, worse yet, is there no accountability? “If there are recognizable war crimes, there must be recognizable criminals” (p. 76, Arkin 1). While the programming techniques for such accountability are presently lacking, it is argued that for ethical robotics to exist, “responsibility returns to those who designed, deployed and commanded the autonomous agent to act, as they are those who controlled its beliefs” (p. 76, Arkin 1). However, other thinkers on this topic, including Robert Sparrow, argue that a fully autonomous robotic system is completely unethical. He argues that, “while responsibility could ultimately vest in the commanding officer for the system’s use, it would be unfair, and hence unjust, to both that individual and any resulting casualties in the event of a violation” (p. 8, Arkin). He compares robots to child soldiers, neither of which can morally assume responsibility for their actions.

 


Lastly, in dealing with Jus in Bello concepts for a just war, proportionality must be addressed. The United States Army “prescribes the test of proportionality in a clearly utilitarian perspective” as: “The loss of life and damage to property…must not be excessive in relation to the concrete and direct military advantage expected to be gained” (p. 23, Arkin 1). Roboticist Ronald C. Arkin looks into this aspect through a programming algorithm intended to instill a sense of proportionality in a fully autonomous robot. Before acting with lethal force, the robot must first have responsibility assigned, which is granted by a human in command before the mission. Military necessity is then established through the criteria for targeting. Next, the robot must maximize discrimination, which establishes the target as a legitimate combatant. Lastly, the robot must minimize the force required to succeed, which combines the concepts of Proportionality and the Principle of Double Intention. This forces the robot to “act in a manner that minimizes collateral damage” while not taking civilian lives (p. 59, Arkin 1). The issue that arises is that these values are not yet instilled in robots even though their use on the modern battlefield is ongoing. This suggests that robotic warfare is not completely unethical; rather, certain current aspects of it are, and these can be addressed and corrected.
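To make the sequence of checks just described more concrete, below is a minimal, purely illustrative sketch in Python of such a gating logic. It is not Arkin's actual architecture; every name, data structure, and comparison here is an invented simplification of the four steps outlined above.

```python
# Hypothetical illustration only: a toy version of the four checks described
# above (responsibility, military necessity, discrimination, proportionality).
# All names and fields are invented; Arkin's real architecture is far more
# elaborate than this sketch.

from dataclasses import dataclass


@dataclass
class Target:
    is_legitimate_combatant: bool    # outcome of the discrimination step
    meets_targeting_criteria: bool   # establishes military necessity
    expected_civilian_harm: float    # estimated collateral damage
    expected_military_advantage: float


def may_engage(target: Target, responsibility_assigned: bool) -> bool:
    """Permit lethal force only if every check described above passes."""
    # 1. Responsibility must have been assigned by a human commander pre-mission.
    if not responsibility_assigned:
        return False
    # 2. Military necessity: the target must satisfy the mission's targeting criteria.
    if not target.meets_targeting_criteria:
        return False
    # 3. Discrimination: only a legitimate combatant may be engaged.
    if not target.is_legitimate_combatant:
        return False
    # 4. Proportionality / double intention: collateral damage must not be
    #    excessive relative to the expected military advantage.
    return target.expected_civilian_harm < target.expected_military_advantage
```

Even in this toy form, the ordering matters: lethal force is considered only after responsibility, necessity, and discrimination have each been established, with proportionality applied last as a final constraint on the force used.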

The one aspect that seems undeniable is that robots are more efficient, more precise, and may in fact be able to function in a more ethical manner. “They don’t get hungry,” says Gordon Johnson of the Pentagon’s Joint Forces Command. “They’re not afraid. They don’t forget their orders. They don’t care if the guy next to them has just been shot. Will they do a better job than humans? Yes” (Singer, The New Battlefield). Saint Augustine, a founder of Just War thought, noted “that emotion can clearly cloud judgment in warfare” (p. 2, Arkin 1). Arkin argues, “despite the current state of the art, that in the future autonomous robots may be able to perform better than humans” (p. 6, Arkin 1). Robots do not need a self-preservation instinct. They can also be designed without the emotions that clearly have major effects on a human soldier. Michael Walzer, another prominent Just War thinker, states that “Fear and hysteria are always latent in combat…they press us toward fearful measures and criminal behavior. Autonomous agents need not suffer similarly” (p. 6, Arkin 1). Brian J. Bill of the International and Operational Law Department at the Judge Advocate General’s School outlines ways in which to avoid war crimes. These include avoiding high friendly losses, poorly trained troops, and unclear orders that can be misinterpreted (p. 216, Bill). Robots don’t engage in rage-driven acts or react to situations of horror. Removing the human psyche from the battlefield is a great advantage. It not only lowers the loss of life in war, but it also makes for a more ethically level playing field. However, these ethics, as stated before, must be instilled in the programming of the robot and in the commander overseeing its actions. The argument does not seem to lie in whether robots would act more ethically; it is instead whether or not their use is ethical at all.

 


A major reason the use of lethal robotics is called into moral question is the effect it has on humans in war. “The ethics of war quickly reveals ambition, wickedness, courage, hatred and compassion, within an intensely emotional and human framework. Soldiers live with their enemies in the same community of fate. They also have to live with themselves and their actions for years long after the battle is over” (p. 150, Coker). By removing these emotions, and ultimately humans, from the battlefield, do we desensitize humans to the horrors of war? There are three divisions to this argument. The first focuses on the Just War concept of last resort. War should only take place as a last resort, after all other options have been exhausted, including diplomatic, economic, and all other avenues for avoiding war. If it reaches the point that there is no other option, war is the last resort and is therefore justified. With robotics, however, the human experience becomes far removed from war. Therefore, committing to acts of war before the point of last resort becomes an easier decision to make. This leads to a misconception of proportionality as well. In situations such as these, the means may far outweigh the ends because there is no longer the period of hesitation required when sending human lives into a conflict. Instead, the only thing at risk is the high price tag attached to the loss of robotic technology. Lawrence J. Korb, one of the deans of Washington’s defense policy establishment, argues that in the future there will be “more Kosovos and less Iraqs,” stating that, “As unmanned systems become more prevalent, we’ll become more likely to use force, but also see the bar raised on anything that exposes human troops to danger….[envisioning] a future in which the United States is willing to fight, but only from afar, in which it is more willing to punish by means of war but less willing to face the costs of war” (Singer, The New Battlefield).

 


Robotics makes war a fantasy rather than a reality. This desensitizing of humans has a ripple effect that originates in the military and makes its way into the public sector. First off, current and future robots record everything that they see, and combat footage has ultimately become a form of entertainment for the general public. “War becomes…a global spectator sport for those not involved in it” (Singer, The New Battlefield). There is an argument that nations tend to go to war because of overconfidence, and technology has proven to fuel this overconfidence. Combining that overconfidence with the public’s seemingly content view of war, fed by access to these “entertainments of war,” is a dangerous step in the wrong direction.


 


Directing attention back to the desensitizing of military personnel, the human experience is being reshaped, most importantly by the introduction of robotic technologies. When war is merely pushing a button or watching a TV screen, does morality make its way into a human’s train of thought as it would when faced with a decision on the battlefield? The simple answer is no. The gap between personal experience on the battlefield and a reality seen through a computer monitor is growing rapidly. A perfect example of this is the current use of Reaper drones flown in missions over Pakistan. These are the first hunter-killer unmanned aerial vehicles in use by the United States military. The difference between these and, say, an F-16 jet fighter is that the Reaper drones are flown by pilots thousands of miles away, at a military base in the middle of the Nevada desert. This creates the drastic separation from war mentioned earlier. Not only does this censor the human experience, but it also, yet again, puts morality on the back burner of the human mind. “You see Americans killed in front of your eyes,” a drone pilot told author P.W. Singer, “and then you have to go to a PTA meeting”. “You are going to war for [twelve] hours,” another pilot told Singer, “shooting weapons at targets, directing kills on enemy combatants, and then you get in the car, drive home, and within twenty minutes you are sitting at the dinner table talking to your kids about their homework” (Singer, Youtube). This is a striking account of how removed drone operators are from the realities of war that soldiers on the ground experience. This lack of understanding and experience leads down a slippery slope, desensitizing the human consciousness to war and moral action.






The question that remains is simply this: are the current uses of robotics moral, and if not, what needs to be done in order to move forward in a more ethical manner? The answer to the question of current morality is no. Current robots, primarily those that use lethal force, are not adequately programmed to carry out missions that take away human life. Even though they are still, at the moment, under the control of a human overseer or controller, there are many accounts of immoral action. Most prominently, these include the killing of non-combatants and the lack of discrimination between hostiles and innocents. Commonly, civilian deaths, especially in the case of Reaper drones, are discovered only after missions have been carried out. However, no one can deny that there are ethical benefits to the use of lethal robotics. The removal of human emotion rids warfare of revenge and brutality committed in response to its horrors. Instead, robots are given mission parameters and the mission is carried out in a prompt and efficient manner, bypassing all of the intangibles that coincide with human emotion. Nonetheless, there is a thin line between the removal of human emotion and the removal of human experience. The latter must never be replaced; it is the experience that keeps ethics in check. Arkin argues that the “primary goal remains to enforce the International Laws of War in the battlefield in a manner that is believed achievable, by creating a class of robots that not only conform to International Law but outperform human soldiers in their ethical capacity. At the very least, if [they] can reduce civilian casualties according to what the Geneva Conventions have promoted and that Just War tradition subscribes to, the result will have been a humanitarian effort” (p. 98, Arkin 1). As long as the concepts of Just War and other Laws of War are at the forefront of the minds of robotic engineers, operators, and overseers, the use of this new and constantly developing technology has a bright future in balancing war and morality.

 


In conclusion, while the current use of robotics does not adhere to the principles of Just War and other traditions of war, there is, at the very least, an attempt to correct the present course. There is no doubt that military robots have the ability to act, at the very least, in a manner ethically equal to that of their human counterparts. The tradition of human experience in war, however, is something robotics should never replace. There also needs to be a clear-cut establishment of responsibility for immoral actions such as the killing of non-combatants. These instances can no longer be chalked up to a “manufacturing error”. Instead, the conditions of war and the consequences of robotic actions must be rethought. It has never boded well for mankind when the uses of new technologies were not fully understood. In order to avoid an occurrence similar to that of the nuclear bomb in World War II, a step back must be taken, and the morality and ethics of robotic uses must be addressed and corrected.

 
 
 
 
 





Word Count: 2,732

Bibliography


Arkin, Ronald C. "Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture." GVU Technical Report GIT-GVU-0711: 1-117. 2007. College of Computing, Georgia Tech. (Arkin 1)

Arkin, Ronald C. "Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture: Part 3." (2007): 1-10. College of Computing, Georgia Institute of Technology. 31 Apr. 2009 <http://74.125.113.132/search?q=cache:ycEi2iKY4SQJ:www.cc.gatech.edu/ai/robot-lab/online-publications/techinwar-arkinfinal.pdf+Arkin+Robotic+Morality&cd=6&hl=en&ct=clnk&gl=us&client=safari>. (Arkin 2)

Bill, Brian J. "Law of War Workshop Deskbook." Comp. MAJ Geoffrey S. Corn, LT Patrick J. Gibbons, LtCol Michael C. Jordan, MAJ Michael O. Lacey, MAJ Shannon M. Morningstar, and MAJ Michael L. Smidt. International and Operational Law Department. June 2000. The Judge Advocate General's School, U.S. Army. 3 May 2009 <http://permanent.access.gpo.gov/lps34259/LOW%20Deskbook%202000.pdf>.

Coker, Christopher. Ethics and War in the 21st Century (LSE International Studies). New York: Routledge, 2008.

Lin, Patrick, George Bekey, and Keith Abney. "Autonomous Military Robotics: Risk, Ethics, and Design." 1.0.7 (2008). 20 Dec. 2008. California Polytechnic State University. 2 May 2009 <http://ethics.calpoly.edu/ONR_report.pdf>.

Singer, P.W. "Robots and War." Princeton University, Woodrow Wilson School of Public and International Affairs. 2 Mar. 2009. Youtube. 29 Apr. 2009 <http://www.youtube.com/watch?v=dEIVjXemFp0>.

Singer, P.W. "Robots at War: The New Battlefield." Woodrow Wilson International Center for Scholars. 2009. 1 May 2009 <http://www.wilsoncenter.org/index.cfm?fuseaction=wq.essay&essay_id=496613>.



