traductor

Friday, June 10, 2016

Killer robots won’t doom humanity–but our fears of AI might

Written by Amitai Etzioni and Oren Etzioni

Just how worried should we be about killer robots? To go by the opinions of a highly regarded group of scholars, including Stephen Hawking, Max Tegmark, Frank Wilczek, and Stuart Russell, we should be wary of the prospect of artificial intelligence rebelling against its makers.

“One can imagine (AI) outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand,” Hawking wrote in a 2014 article for The Independent. “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

The fear that our irresponsible creations might bring about the end of humanity is a common one. In Goethe’s poem “The Sorcerer’s Apprentice,” an apprentice enchants a broom to do his work but then cannot control it. Jewish narratives tell the story of the golem, a powerful creature made of clay that was forged to serve the community but wound up threatening it.

These dystopian visions of the future cause little harm if they are merely fodder for cocktail parties and speculative essays. However, ethicists such as Wendell Wallach have suggested that the AI community needs to curb the pace of research and development until the risks are properly assessed. Others, including Tesla founder Elon Musk, have recommended that public-policy makers exert control over AI projects. But slowing down innovation in AI poses far more of a threat to us than killer robots do. In fact, if we really care about the future of the human race, we need more AI projects, not fewer.

Coping with doomsday predictions

The problem with end-of-the-world predictions is that they are very difficult to disprove. Even if history offers no basis for such fears, there is always the chance that things will turn out differently next time.

But AI research and development is already carried out by so many different actors, both in academia and in the business sector, and in many countries around the world, that putting a lid on it seems highly impractical. Moreover, one must take into account the great benefits of AI.

For instance, by introducing measures that alert drivers when they are getting too close to other cars, AI is already saving tens of thousands of lives, and it will soon save many more. AI is assisting doctors through robotic surgery, and it helps pilots on many thousands of flights every day reach their destinations.

Indeed, we should ask ourselves why certain AI programs were not available when we badly needed them. When the reactors in Fukushima, Japan began to melt down in the aftermath of the March 2011 earthquake and tsunami, the staff had to leave before they could shut down the reactors. Had an AI robot been in place at the time, it could have taken over and prevented the calamity that followed.

Let AI supervise itself

If we really want to keep AI from straying into nefarious territory, we need more of it to supervise the technology we already have. After all, AI may be autonomous, but it has no intentions or motivations of its own unless humans program those intentions in. So long as we ensure that programming for smart machines is subject to accountability and oversight, there is no reason to fear they will choose evil goals on their own.

We are now calling upon the AI community to develop a whole new slew of AI oversight programs that can hold AI operational systems accountable. This effort is known as AI Guardians.

AI operational systems need a great deal of latitude: they must be able to act on what they learn from additional data mining and experience, and to render at least semi-autonomous decisions. However, all operational systems need some boundaries, both to stay within the law and to heed ethical guidelines. Oversight here can be relatively flat and flexible, but it cannot be avoided.
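As a rough illustration of the idea, and nothing more, an AI Guardian might look something like the Python sketch below. The class names, the speed values, and the interface are all invented for this example; they do not describe any existing system.

from dataclasses import dataclass

SPEED_LIMIT_KPH = 100.0        # legal boundary (illustrative value only)
SCHOOL_ZONE_LIMIT_KPH = 30.0   # ethical guideline (illustrative value only)

@dataclass
class Context:
    near_school: bool

class OperationalSystem:
    """Stands in for a learned driving policy that proposes a target speed."""
    def propose_speed(self, context: Context) -> float:
        # A real system would derive this from sensors and learned behavior;
        # here it simply proposes a speed above the legal limit.
        return 120.0

class Guardian:
    """Flat, flexible oversight: clamp proposals to legal and ethical bounds."""
    def review(self, proposed_kph: float, context: Context) -> float:
        bound = SPEED_LIMIT_KPH
        if context.near_school:
            bound = min(bound, SCHOOL_ZONE_LIMIT_KPH)
        return min(proposed_kph, bound)

if __name__ == "__main__":
    ctx = Context(near_school=True)
    proposal = OperationalSystem().propose_speed(ctx)
    approved = Guardian().review(proposal, ctx)
    print(f"proposed {proposal} km/h, guardian allowed {approved} km/h")

The point is not the particular rule but the division of labor: the operational system decides, and a separate program checks the decision against boundaries set by humans.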

This oversight system can help determine who or what was at fault when AI is involved in a situation that causes harm to humans, say, when a driverless car crashes into another. Was the crash attributable to the programmer’s mistakes or ill intent, or to decisions made by the car’s autonomous AI operational system?
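Answering that question presupposes that decisions were recorded as they were made. The sketch below is again purely illustrative, with invented field names: a decision log that tags each action with its provenance, so an investigator can later distinguish a programmed rule from a choice made by the learned policy.

import json
import time

class DecisionLog:
    """Append-only record of what the system decided, when, and why."""
    def __init__(self) -> None:
        self.entries: list[dict] = []

    def record(self, action: str, provenance: str, inputs: dict) -> None:
        # provenance distinguishes "programmed_rule" (the programmer's explicit
        # instruction) from "learned_policy" (the system's own decision).
        self.entries.append({
            "timestamp": time.time(),
            "action": action,
            "provenance": provenance,
            "inputs": inputs,
        })

    def dump(self) -> str:
        return json.dumps(self.entries, indent=2)

if __name__ == "__main__":
    log = DecisionLog()
    log.record("brake", "programmed_rule", {"obstacle_distance_m": 4.2})
    log.record("change_lane", "learned_policy", {"traffic_density": "high"})
    print(log.dump())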

AI enforcement mechanisms are also needed to ensure that AI operational systems adhere to legal and ethical guidelines, for example by avoiding discrimination against minorities in how search engines display jobs, credit, and housing information.
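One simple form such an enforcement check could take, sketched here with made-up numbers and an arbitrary threshold, is a comparison of how often a job listing is shown to different groups of eligible searchers.

def exposure_rate(shown: int, eligible: int) -> float:
    """Fraction of eligible searchers who were actually shown the listing."""
    return shown / eligible if eligible else 0.0

def parity_gap(group_a: tuple[int, int], group_b: tuple[int, int]) -> float:
    """Absolute difference in exposure rates between two groups."""
    return abs(exposure_rate(*group_a) - exposure_rate(*group_b))

if __name__ == "__main__":
    # (times the listing was shown, eligible searchers) per group; values invented
    group_a = (450, 1000)
    group_b = (280, 1000)
    THRESHOLD = 0.10   # illustrative tolerance before the enforcer raises a flag
    gap = parity_gap(group_a, group_b)
    print(f"exposure gap = {gap:.2f}:", "flag for review" if gap > THRESHOLD else "ok")

A real enforcement mechanism would need a far more careful notion of fairness than this; the point is only that the check is mechanical enough for one program to run against another.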

One solution is ethics bots: programs that inform operational AI systems of the values their owners and operators want them to honor. These bots can instruct cars whether they should drive at whatever speed the law allows or in ways that conserve fuel, or whether they should stay in the slower lanes when children are in the car. They can also signal when it’s time to alert humans to a problem, such as waking a sleeping passenger if the car passes a traffic accident.
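An ethics bot could be as simple as a declarative profile of the owner’s preferences that the operational system consults before acting. The field names and rules below are invented for illustration; no particular interface is being proposed.

from dataclasses import dataclass

@dataclass
class OwnerPreferences:
    prioritize_fuel_economy: bool = True     # drive economically rather than at the legal limit
    slow_lane_with_children: bool = True     # keep to slower lanes when children are aboard
    wake_passenger_near_accident: bool = True

@dataclass
class TripState:
    children_aboard: bool
    passenger_asleep: bool
    accident_nearby: bool

def ethics_bot(prefs: OwnerPreferences, state: TripState) -> list[str]:
    """Translate the owner's values into instructions for the operational system."""
    instructions = ["drive_economically" if prefs.prioritize_fuel_economy
                    else "drive_at_legal_limit"]
    if prefs.slow_lane_with_children and state.children_aboard:
        instructions.append("keep_to_slower_lanes")
    if prefs.wake_passenger_near_accident and state.passenger_asleep and state.accident_nearby:
        instructions.append("alert_passenger")
    return instructions

if __name__ == "__main__":
    prefs = OwnerPreferences()
    state = TripState(children_aboard=True, passenger_asleep=True, accident_nearby=True)
    print(ethics_bot(prefs, state))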

In short, there is no reason to impose heavy-handed, restrictive oversight on the AI world. However, there is plenty of room for guidance, and the time has come for the industry to receive it: guidance that will ensure AI operational systems adhere to our legal and moral values, and that robots don’t come after us while we sleep.

We welcome your comments at ideas@qz.com.
