Who should a driverless car try to protect if a crash is unavoidable?

Among issues to be worked through before driverless cars appear on our roads: Who does the vehicle try to protect in a crash? The picture shows a Google prototype. Photo: REUTERS

Driverless (or autonomous) cars may be picking up an unstoppable head of steam, but among the many challenges to overcome is deciding who should be put at risk if a crash is unavoidable.

A series of six online surveys in the US found that people generally approved of autonomous vehicles (AVs) programmed to sacrifice their own passengers to save others.

But people weren't actually interested in having those cars themselves. Instead they wanted cars that sought to protect them and their passengers, particularly if family members were involved.

A report on the study said AVs had the potential to eliminate up to 90 per cent of crashes, but not all would be avoided, and some scenarios required AVs to make difficult ethical decisions.

An author of the study, Jean-Francois Bonnefon from the University of Toulouse, said regulation may be necessary to resolve the dilemma. At the same time, there was a risk such a move could be counterproductive, given the surveys indicated regulation could substantially delay AV adoption.

Professor Toby Walsh, from Australian data innovation group Data61, said the study showed that aligning moral artificial intelligence driving algorithms with human values was a major challenge.

Some people had suggested that given the number of fatal crashes involving human error, it could be considered unethical to introduce self-driving technology too slowly, Walsh said.

"The biggest ethical question then becomes: How quickly should we move towards full automation given that we have a technology that potentially could save a lot of people, but is going to be imperfect and is going to kill a few?"

Professor Hossein Sarrafzadeh, from Unitec, said the study highlighted issues about how much control should be given to machines. That question became more central as machines were used more extensively.

 - Stuff
