Would you buy a car that might decide to kill you?
It is a question social-science researchers are exploring amid the development of driverless cars. While commercial applications may be years away, any fully autonomous vehicle that eventually takes to the road will need to make decisions—like whether to swerve to miss one pedestrian at the risk of hitting another.
Many ethicists argue that a public conversation should be part of the development process.
In a study published in Science, researchers found people want the cars to be programmed to minimize casualties while on the road. But when asked what kind of vehicle they might actually purchase, they chose a car that would protect its own occupants first.
The paper describes a series of online surveys that posited various scenarios.
In one, participants were asked to imagine that they are in a self-driving vehicle traveling at the speed limit. Out of nowhere, 10 pedestrians appear in the direct path of the car. Should engineers program the car to swerve off the road in such instances, killing the car occupant but leaving the 10 pedestrians unharmed, or keep going, killing the 10 people?
In that scenario, 76% of the 182 participants said the moral thing for the car to do was sacrifice the car occupant rather than kill the 10.
Most people, researchers say, intuitively understand that, when viewed through the lens of the greatest good, sacrificing one to save 10 makes sense. But as researchers in the study continued with their surveys, eventually involving over 1,900 people in total, they identified what they call a “social dilemma.”
Researchers asked participants which car they would prefer to actually purchase: one programmed to put a heavier premium on saving more lives, even if that meant sacrificing them or their family members in the name of the greater good, or one that would protect the occupants first.
In that case, participants “preferred the self-protective model for themselves,” the researchers wrote.
“Just as we work through the technical challenges, we need to work through the psychological barriers,” said Iyad Rahwan, associate professor of media arts and sciences at the Massachusetts Institute of Technology’s Media Lab and one of the authors of the paper.
The new research offered variations of what is known as the “trolley problem,” a cornerstone of modern ethical inquiry that social scientists use to illuminate potential moral conflicts. In a classic version of the trolley problem, researchers ask a person to imagine a trolley racing toward a group of workers. The person has the option of pulling a lever to divert the trolley onto another track, where it would hit only one worker.
The essential difference, some ethicists argue, lies between taking an action that results in a death without intending it and actively causing the death of one person. Variations of these thought experiments test how people might make different choices—say, if the potential casualties are children, the elderly or a pregnant woman.
The paper’s authors acknowledge that one limit of their study is that participants were recruited from the Amazon Mechanical Turk platform, which tends to attract people who are more comfortable with technology and may not represent the U.S. population. They also noted that younger, male participants were far more enthusiastic about autonomous vehicles than women or older participants.
Developers of driverless cars generally say that, for now, these remain academic questions.
Karl Iagnemma, CEO and co-founder of nuTonomy, a Cambridge, Mass.-based company developing software for fully autonomous cars, says ethical questions like the ones raised in the Science paper are important to consider. But he said cars don’t yet have the technology to distinguish “a baby stroller from a grandmother from a healthy 21-year-old.”
The industry is “still trying to get the software to work in a safe and reliable way,” Dr. Iagnemma said, “let alone worrying about reasoning about complex ethical decisions.”
Write to Amy Dockser Marcus at [email protected]