This Google self-driving car was involved in an accident after another vehicle ran a red light. (Photo: Ron Van Suylen)
It may be the trickiest debate in the self-driving vehicle space. In a situation where an accident is unavoidable, what should the vehicle do? Should it sacrifice its occupant to potentially save a school bus full of children?
Researchers at the Institute of Cognitive Science at the University of Osnabrück in Germany have been using virtual reality to study whether such algorithms can be developed based on human behavior.
The findings, recently published in Frontiers in Behavioral Neuroscience, make the case that “moral decisions in the scope of unavoidable traffic collisions can be explained well, and modeled, by a single value-of-life for every human, animal, or inanimate object.”
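To make that idea concrete, here is a minimal sketch of a one-value-per-category model of the kind the paper describes. The category names and scores below are hypothetical illustrations, not figures from the study:

```python
# Illustrative sketch only: the paper reports that choices fit a
# single value-of-life-per-category model, but these categories
# and scores are hypothetical, not taken from the study.

VALUE_OF_LIFE = {
    "adult": 1.0,
    "child": 1.2,
    "dog": 0.4,
    "trash_can": 0.01,
}

def choose_obstacle(option_a: str, option_b: str) -> str:
    """Return the obstacle to hit: the one with the lower
    value-of-life score, mirroring a model in which every human,
    animal, or object carries a single value."""
    if VALUE_OF_LIFE[option_a] <= VALUE_OF_LIFE[option_b]:
        return option_a
    return option_b

# Example: faced with a dog or an adult, the model sacrifices the dog.
print(choose_obstacle("dog", "adult"))  # -> "dog"
```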
The study does not propose specific formulas, but it suggests that further research and debate can help determine how a driverless vehicle should react in various situations, based on how humans tend to act.
The research comes after the German transport ministry's ethics commission presented 20 guidelines for self-driving cars. These include a requirement that damage to property be accepted before injury to people, and a rule that, in the event of unavoidable accidents, any classification of people based on their personal characteristics is prohibited.
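As a rough illustration, those two guidelines could be encoded as hard constraints on a vehicle's decision logic. The data structure and field names in this sketch are assumptions for illustration, not anything published by the commission:

```python
# Hedged sketch: one possible way to encode two of the commission's
# guidelines as hard constraints. Field names are assumptions.

from dataclasses import dataclass

@dataclass
class Option:
    label: str
    injures_person: bool    # would this maneuver injure a person?
    property_damage: float  # estimated property damage (arbitrary units)

def choose(options: list[Option]) -> Option:
    # Guideline: damage to property is preferred over injury to people,
    # so discard person-injuring options whenever an alternative exists.
    safe = [o for o in options if not o.injures_person] or options
    # Guideline: no classification by personal characteristics, so the
    # decision deliberately uses no attributes of the people involved.
    return min(safe, key=lambda o: o.property_damage)

print(choose([
    Option("swerve into barrier", injures_person=False, property_damage=5.0),
    Option("stay in lane", injures_person=True, property_damage=0.0),
]).label)  # -> "swerve into barrier"
```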
The issue is a critical one beyond Germany. It came up in my recent interview with April Sanborn of the Nevada Department of Motor Vehicles. Even as she expressed optimism that autonomous trucks would be on U.S. roadways in limited form within a few years, she called this moral dilemma “always an uncomfortable topic.”
She stressed that it is not an area DMVs will ultimately weigh in on, but it is one that often comes up in her work with vehicle and software makers.
“We’ve yet to have one provide an answer to that question,” she noted.
Decisions on this topic will be needed before vehicles are allowed to travel long stretches without a driver in the seat, as in last year's Otto beer delivery in Colorado. (Photo: Otto/Uber)
As for the study itself, it included 105 participants who controlled a virtual car and had to choose which of two obstacles they would sacrifice. Scenarios were presented under two levels of time pressure, a slow mode and a fast mode.
Researchers cited “social desirability” as the reason why adult males were sacrificed 80 percent of the time in slow mode. In fast mode, however, that tendency disappeared, which the researchers said calls “for more investigation of the effect of time pressure on moral decision-making.”
In comparison, algorithms can estimate the potential outcomes of the available options within milliseconds and make a decision that factors in pre-programmed research and regulations. These algorithms can also factor in the probabilities of injuries, helping to make reasonable decisions in situations where those probabilities differ greatly.
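A minimal sketch of that kind of probability-weighted decision might look like the following; the maneuvers, probabilities, and severity weights are all hypothetical:

```python
# Sketch of a probability-weighted decision of the kind described
# above. Injury probabilities and severity weights are hypothetical.

def expected_harm(option: dict) -> float:
    """Expected harm = sum over outcomes of P(outcome) * severity."""
    return sum(p * severity for p, severity in option["outcomes"])

def pick_option(options: list[dict]) -> dict:
    """Choose the maneuver with the lowest expected harm
    across its possible outcomes."""
    return min(options, key=expected_harm)

brake_hard = {"name": "brake hard",
              # (probability, severity) pairs, hypothetical values
              "outcomes": [(0.7, 0.0), (0.3, 2.0)]}   # expected 0.6
swerve = {"name": "swerve",
          "outcomes": [(0.9, 0.0), (0.1, 8.0)]}       # expected 0.8

print(pick_option([brake_hard, swerve])["name"])  # -> "brake hard"
```

In a framing like this, the moral debate collapses into the choice of severity weights, which is precisely the question of who sets the values.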
One of the questions arising from these studies is who determines how the algorithms are created.
Gordon Pipa, a senior author of the study, stressed that additional research is needed.
“We need to ask whether autonomous systems should adopt moral judgments, [and] if yes, should they imitate moral behavior by imitating human decisions, should they behave along ethical theories and if so, which ones and critically, if things go wrong who or what is at fault?”