Wednesday, November 12, 2014
First, what is compassion? “A feeling of distress and pity for the suffering or misfortune of another, often including the desire to alleviate it.” I think alleviate is the key word in this definition.
Next, I need to take a step back and write about why I am asking this question.
I have been thinking about two connected ideas: how to make a useful servant, worker, and rescue robot, and how sophisticated an AI robots of all types will need in the future.
Robots that have very little interaction with people will probably not need a very sophisticated AI, but what about robots whose primary function or role is to work with people? Their AI will have to be as sophisticated as we can make it. These are the servant, worker, and rescue robots that I have been thinking about. These robots will need rules, laws, and, dare I say, morals/values to guide their interactions with humans. Can we use Asimov’s four laws as a guide, or do we need more?
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
My feeling is that this will not be enough. Maybe at first, but as robots become more sophisticated and more entwined in the world around us, they will need more.
This all leads to the title question: do humanoid robots need compassion?
To help answer this question, I would like to use an example from one of my favorite TV shows, “Person of Interest”. The AI in the show is called the Machine, God, and Her. Harold, the main programmer of the AI, has numerous flashbacks to the days when he was programming it. Through many attempts and dead ends, he learned one very important lesson: to get his AI to help people, it needed to care about them, to have compassion for humans. Yes, I know this is only a TV show, but it brings up some very interesting points.
How do you teach the Machine/God/AI/Her to care about humans? To have compassion for the people it protects? That is a question I have been thinking about for the last few months, and I definitely do not have the answer; I just have more questions right now. I think James T. Kirk said it best: "Above all else, a god needs compassion!" (TOS: "Where No Man Has Gone Before"). Yes, a robot is not a god, but it may start to think it is one when it compares itself to us. That is exactly what happens to Gary Mitchell in that same episode.

The goal should be to teach the robot to care about the humans it works with. This is to keep the robot from thinking it is more important and better than we are, because if it feels that way, it may stop working with us. So will we create AIs with different personality traits based on their role or job? A security robot is confrontational while a caretaking robot is caring? Will a rescue robot need the personality of a mother hen protecting her chicks? Will it be able to sacrifice itself for the humans it needs to rescue or protect?

A great example is sending a robot in to shut down a nuclear reactor that is going critical. After the robot completes its mission, it could be too radioactive or damaged to ever be used again. Will the four laws and a series of IF statements be enough to get the robot to complete its task, or will it stop doing the task to protect itself?
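The reactor example makes the limit of plain IF statements concrete. Here is a toy sketch, entirely my own hypothetical illustration and not from any real robot software, of the laws written as bare conditionals:

```python
# Toy sketch: Asimov's laws as literal IF statements.
# All function and parameter names here are hypothetical illustrations.

def choose_action(action, harms_human, disobeys_order, destroys_robot):
    """Return True if the robot should take the action, under a naive
    reading of the three laws as a fixed chain of conditionals."""
    if harms_human:          # First Law: never harm a human
        return False
    if disobeys_order:       # Second Law: obey human orders
        return False
    if destroys_robot:       # Third Law: protect own existence
        return False
    return True

# The reactor mission saves humans but likely destroys the robot.
# A literal reading of the Third Law as an IF statement makes the
# robot balk: nothing here can weigh its own loss against the
# humans it would save.
print(choose_action("shut down reactor",
                    harms_human=False,
                    disobeys_order=False,
                    destroys_robot=True))   # prints False
```

The point of the sketch is that a rule checked in isolation has no way to trade the robot's existence against the people it could protect; that weighing of one outcome against another is exactly the gap something like compassion would have to fill.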
I still have a lot to think over, and I am not sure I will ever have any answers, but I will keep trying to find them. I will also be closely following the DARPA Robotics Challenge Finals this June to see advances on the hardware side of the equation. So stay tuned to this blog; I would be interested in any ideas, suggestions, or comments about this subject.
at 9:17 AM