Tuesday, October 24, 2017

Who will be responsible for robots' autonomous actions?

A robot

The ability of robots and other forms of artificial intelligence to make decisions autonomously raises a number of questions about the moral implications of their actions and the responsibility for them.

An autonomous vehicle with a passenger on board is driving along the edge of a cliff. Ahead, a conventional car carrying a family of five is approaching. The smart car is able to analyze the situation and decide what to do in the face of the imminent risk of collision. If the two collide, the chances of six people dying are high.

If it swerves toward the cliff, the lives of the entire family will be saved, but the person inside the smart car will die.

What decision will the robotic car make? With the advance of artificial intelligence into everyday life, situations like this are closer than we think.
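Purely as an illustration, and not as anything the article or Zabala proposes, the sketch below shows how a strictly utilitarian version of that choice would look if someone had to write it down as code; the option names and casualty estimates are hypothetical.

```python
# Toy sketch of a purely utilitarian collision policy (hypothetical).
# Real autonomous vehicles do not expose their decisions in this form.

def choose_maneuver(options):
    """Pick the option with the lowest expected number of deaths."""
    return min(options, key=lambda o: o["expected_deaths"])

options = [
    {"name": "stay on course", "expected_deaths": 6},          # head-on collision
    {"name": "swerve toward the cliff", "expected_deaths": 1},  # the passenger dies
]

print(choose_maneuver(options)["name"])  # -> "swerve toward the cliff"
```

Writing the rule down makes the dilemma concrete: someone has to decide in advance whose lives the calculation counts, and that is exactly where the question of responsibility arises.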


"How will machines normally act to make decisions when they can have this computing power?" "Who will be responsible for the mistakes in the decisions made by a machine?" "It's an area that's developing, and even groups of lawyers are analyzing these new horizons, Argentine robotics expert Gonzalo Zabala told the newsroom.

In law, the debates around this issue have no settled answer. Today, responsibility lies with the companies, but as artificial intelligence gains the ability to make decisions on its own, the legal vacuum becomes more and more apparent.

Zabala cited the Moral Machine website of the Massachusetts Institute of Technology (MIT), which presents users with conflicting scenarios that illustrate how difficult it is to judge the decisions of smart vehicles morally.

According to the project's website, the goal is not only to understand how humans make such decisions, but also to gain a clearer picture of how people perceive artificial intelligence making them, a question that is becoming ever more important.

Customer support services are being replaced by chatbots, programs capable of holding a conversation. Translation applications that recognize natural language, or satellite navigation systems that detect traffic jams in a city from the data of their users, are now realities that people carry in their pockets.

Based on algorithms and on data available on the internet, automatic learning, or "machine learning," has shifted the focus of artificial intelligence: the information machines acquire gives them the ability to make decisions based on their own experience.
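As a minimal sketch of that idea, assuming nothing beyond standard Python and invented data, the snippet below "learns" from a few labelled examples and then decides on a new case by comparing it with that experience (a simple nearest-neighbour rule, echoing the traffic example above):

```python
# Minimal machine-learning sketch: decide on a new case by comparing it
# with previously seen examples (1-nearest-neighbour). Data are invented.

def distance(a, b):
    """Euclidean distance between two feature tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(examples, new_point):
    """Return the label of the closest known example."""
    closest = min(examples, key=lambda ex: distance(ex[0], new_point))
    return closest[1]

# Hypothetical "experience": (features, label) pairs,
# e.g. (average speed in km/h, vehicle density) -> traffic label.
experience = [
    ((90.0, 0.1), "free-flowing"),
    ((25.0, 0.8), "traffic jam"),
    ((60.0, 0.4), "slow"),
]

print(predict(experience, (30.0, 0.7)))  # -> "traffic jam"
```

The decision is not written by hand for every case; it emerges from the examples the system has accumulated, which is what the paragraph above means by deciding "based on experience."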

According to the specialist, finding these solutions requires a reflection that takes the complexity of human intelligence into account: the different types of human intelligence and the other mechanisms of our mind are important in this process.

"When we look at systems, or how to make an intelligent machine, we reflect on our own intelligence and the mechanisms we have, such as learning," Zabala explained.

According to the expert, artificial intelligence "is not dangerous in itself"; its assessment depends on the uses it is put to, which "will determine whether or not it is dangerous".

He cited the example of nuclear power. On the one hand, the bombings of Hiroshima and Nagasaki in 1945 caused an enormous number of deaths. On the other hand, power generation and medical applications are beneficial uses of the same nuclear technology.

Another problem linked to artificial intelligence is the loss of jobs.

"One of humanity's plans in which it will have a terrible impact is the subject of work, not just artificial intelligence, but all the technologies that are impacting on it," Zabala said.

By 2050, factories are estimated to be robotized at a level above 90%. However, the expert stressed that the world has already faced the problem of new technologies displacing jobs: while many jobs disappeared with the arrival of new technology, new, "higher-level" jobs emerged.

"We have already received the benefits of technology and not only in the upper social classes or the countries of the first world: people do not work in conditions equal to 50 or 100 years ago nor work the same hours," said the expert.

For Zabala, robots are not the main threat to humanity. On the contrary, it is people themselves, with their "delusional" levels of consumption and production, who pose a concrete threat in the short term.
