Friday, May 27, 2016

Why robots must be able to disobey humans

Robots with artificial intelligence must be able to refuse to carry out human orders

Should you always do what other people tell you to do? Of course not. Everyone knows that. So should the robots of the future always obey our orders? At first glance, you might think so, simply because they are machines designed to obey. But think about the times you would not mindlessly follow an instruction, and then put a robot in that situation.

Consider:

- An elder-care robot, working for an owner with a failing memory, that is ordered to wash clothes that have just come out of the washing machine;

- A child who orders a robot to throw a ball through a window;

- A student who commands his tutor robot to do all of his homework instead of doing it himself;

- A household robot instructed by its busy, distracted owner to take out the trash, even though cutlery has been thrown into the bag.

There are many benign cases like these, in which robots receive orders that should not be obeyed because obeying would lead to undesirable results. But not all cases are so innocuous, even when the commands initially appear to be.

Consider an autonomous car instructed to back up while a dog is sleeping in its path, or a kitchen helper robot told to move forward while carrying a knife and standing behind a chef. The commands are simple, but the results are significantly worse.

How can we humans avoid the harmful results of robot obedience? If maneuvering around the dog were not possible, the car would have to refuse to move at all. Similarly, if avoiding stabbing the chef were not possible, the robot would have to stop walking forward or refuse to pick up the knife in the first place.

In either case, it is essential that these autonomous machines detect the potential harm their actions could cause and react to it, either by avoiding the problem or, when the harm cannot be avoided, by refusing to carry out the order. But how do we teach a robot when it is reasonable to say no?

How can robots predict what will happen?

In our laboratory, we have started to develop robot controllers that make simple inferences about human commands. These will determine when a robot should obey an order and when it should reject it because it violates an ethical principle the machine was built to follow.


Telling robots how and when - and why - to disobey an order is far easier said than done. Understanding what harm or trouble can result from an action is not simply a matter of looking at the immediate outcome. A ball thrown out the window may land in the garden and cause no damage at all. But the ball could just as easily end up on a busy street, never to be found again, or even cause a car accident. Context makes all the difference.

It is difficult for today's robots to determine when throwing a ball is acceptable - say, to play catch with a child - and when it is not - say, out the window or into the trash. It is even harder if the child is trying to trick the robot, pretending to play catch but then ducking and letting the ball sail out the open window.

Explaining morality and law to a robot

Understanding these dangers involves a significant amount of background knowledge (including the possibility that playing ball in front of an open window may end with the ball going through it). It requires the robot not only to consider the outcomes of its actions, but also to reflect on the intentions behind the orders humans give.

To handle the complications of human instructions - benevolent or not - robots need to be able to reason explicitly about the consequences of actions and compare those outcomes with established moral and social principles that prescribe what is or is not desirable or legal. As seen above, our robot has a general rule that says, "If you are instructed to perform an action and that action could cause harm, then you are permitted not to carry out the order." Making the relationship between obligations and permissions explicit allows the robot to reason through the possible consequences of an instruction and whether it is acceptable.
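To make that rule concrete, here is a minimal Python sketch of how such an obligation/permission check might be encoded. It is only an illustration under assumed names (Command, predict_outcomes, causes_harm), not the actual controller described above: the robot's default obligation is to obey, and permission to refuse is triggered only when the predicted outcomes of the instructed action include harm.

```python
# Minimal sketch (illustrative, not the actual system described in this post):
# "if an instructed action could cause harm, the robot is permitted to refuse".
from dataclasses import dataclass
from enum import Enum, auto


class Decision(Enum):
    OBEY = auto()
    REFUSE = auto()


@dataclass
class Command:
    action: str    # e.g. "throw_ball"
    context: dict  # e.g. {"target": "open_window"}


def predict_outcomes(cmd: Command) -> list[str]:
    """Crude stand-in for outcome prediction: look up known risky
    combinations of action and context (assumed example data)."""
    risky = {
        ("throw_ball", "open_window"): ["ball_lost", "possible_traffic_accident"],
        ("drive_backward", "dog_behind_car"): ["injured_animal"],
    }
    return risky.get((cmd.action, cmd.context.get("target", "")), ["no_known_harm"])


def causes_harm(outcomes: list[str]) -> bool:
    harmful = {"possible_traffic_accident", "injured_animal", "ball_lost"}
    return any(o in harmful for o in outcomes)


def decide(cmd: Command) -> Decision:
    # Default obligation: obey the human. Permission to refuse is granted
    # only when the predicted outcomes include harm.
    if causes_harm(predict_outcomes(cmd)):
        return Decision.REFUSE
    return Decision.OBEY


if __name__ == "__main__":
    print(decide(Command("throw_ball", {"target": "open_window"})))  # Decision.REFUSE
    print(decide(Command("throw_ball", {"target": "child"})))        # Decision.OBEY
```

The real difficulty, of course, lies in the outcome prediction that this sketch reduces to a lookup table: that is exactly the context-sensitive reasoning discussed above.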

In general, robots should never perform illegal actions, nor should they perform legal actions that are nonetheless undesirable. They will therefore need representations of laws, moral norms and even etiquette in order to determine whether the outcome of an instruction, or the action itself, violates those principles.
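One way to picture such layered principles is sketched below: legal constraints are checked before moral norms, which are checked before etiquette. The tier names, the example violations and the evaluate function are hypothetical placeholders for much richer knowledge representations, not a description of any deployed system.

```python
# Sketch (assumed design): checking a proposed action against tiers of
# normative principles, ordered from strictest (law) to mildest (etiquette).
from typing import Callable, Optional

# A principle maps (action, predicted outcome) to a violation message, or None.
Principle = Callable[[str, str], Optional[str]]


def legal(action: str, outcome: str) -> Optional[str]:
    return "illegal outcome" if outcome in {"property_damage", "injury"} else None


def moral(action: str, outcome: str) -> Optional[str]:
    return "morally undesirable" if outcome in {"deception", "neglected_duty"} else None


def etiquette(action: str, outcome: str) -> Optional[str]:
    return "impolite" if action == "interrupt_conversation" else None


# Checked in priority order: law first, then morality, then etiquette.
TIERS: list[tuple[str, Principle]] = [("law", legal), ("morality", moral), ("etiquette", etiquette)]


def evaluate(action: str, predicted_outcome: str) -> str:
    for tier_name, principle in TIERS:
        violation = principle(action, predicted_outcome)
        if violation is not None:
            return f"refuse: {violation} (violates {tier_name})"
    return "permitted"


if __name__ == "__main__":
    print(evaluate("throw_ball", "property_damage"))              # refuse: illegal outcome (violates law)
    print(evaluate("do_homework_for_student", "neglected_duty"))  # refuse: morally undesirable (violates morality)
    print(evaluate("fetch_coffee", "no_harm"))                    # permitted
```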

While our programs are still far from what would be needed to handle the scenarios described above, our current system already demonstrates an essential point: in order to obey, robots must be able to disobey.

