Human and Robot: A common future
This is another part of the robot series. This time we try to get into the ethical side of robotics. It is very speculative and there are plenty of 'if's in it. We talk concepts here rather than technology, so fasten your seat belt: this is going to be different. If we agree that the Singularity can happen, and that it could arrive sometime in the 2050s, we need to look at its characteristics: robots exceed human intelligence to a point where we cannot even fathom it, and where we cannot predict anything (because our intelligence is so limited compared to theirs).
If you remember the movie 'I, Robot', you may remember the Three Laws of Robotics. I have copied these from Wikipedia, and here they are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

These laws were first published by Asimov in 1942 in his short story 'Runaround'. Look up Asimov and you will find plenty of debate. Now let us turn to the laws. They should be rather clear, but there are inconsistencies. What is 'harm'? Is it only physical harm? Is it harm to lock up a human being until the human gives you the password to a bank account (or the nuclear launch codes)? And what if a robot starts to distinguish between 'a human' and 'humanity'? Then we have a situation where killing one person could be good for humanity (stopping a nuclear launch). So what is 'harm'?
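The Three Laws form an ordered precedence hierarchy: the First Law overrides the Second, and the Second overrides the Third. As a thought experiment only, here is a minimal sketch of that precedence in Python. The `Action` structure and all names are hypothetical assumptions for illustration; note how reducing 'harm' to a single boolean flag is exactly the kind of simplification the ambiguity above makes impossible in practice.

```python
# Hypothetical sketch of Asimov's Three Laws as an ordered precedence check.
# The Action fields are illustrative assumptions, not a real robotics API.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False       # relevant to the First Law
    ordered_by_human: bool = False  # relevant to the Second Law
    endangers_self: bool = False    # relevant to the Third Law

def permitted(action: Action) -> bool:
    # First Law: never harm a human (the 'through inaction' clause
    # is deliberately omitted; it cannot be a simple flag).
    if action.harms_human:
        return False
    # Second Law: obey human orders unless they conflict with the First.
    if action.ordered_by_human:
        return True
    # Third Law: protect own existence unless higher laws override it.
    if action.endangers_self:
        return False
    return True
```

The sketch makes the core problem visible: everything hinges on who sets `harms_human`, and the text above shows that 'harm' resists any such binary encoding.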
Robots can self-improve and can build next-generation robots that exceed their own limitations. Robots are aware of themselves and their environment. Robots can set goals for their own existence and refine those goals over time. This is where it gets dramatic. A robot may ask itself: "Why am I here? What is my role in the universe?" That is no different from what humanity asks itself, is it?
If an entity is half robot and half human (like RoboCop), must it then obey orders? If the answer depends on whether the brain is electronics or tissue, well, an electronic brain could one day be grown as tissue, and then what? What counts as a human now? It becomes horribly complex, and it can get worse. You can figure the rest out yourself.