The ethical dilemma of autonomous vehicles

“Never before, in the history of humanity, have we allowed a machine to autonomously decide who should live and who should die, in a fraction of a second.”

Bit by bit, the technical challenges of autonomous (self-driving) cars are being met, although we’re still a long way from the ultimate goal: a vehicle capable of driving itself anywhere, anytime, in any conditions.

But there’s still a big issue, a core issue, that hasn’t been adequately addressed. It’s a question of ethics, which needs not just definition but some level of regulation, or at least some agreed-upon standards, before autonomous vehicles (AVs) are likely to become anything more than an experiment.

From the very beginning, the purported raison d’etre for developing AVs has been public safety, with the oft-stated objective of eliminating all deaths and injuries resulting from traffic crashes.

It’s a noble goal but, especially during the transitional period while AVs share the road with human-driven vehicles, circumstances will arise where a crash simply cannot be avoided. It’s a scenario aptly defined in academic studies of the subject as a “dilemma.”


In such cases, split-second decisions will be required as to how best to mitigate the consequences of the crash. And the AVs themselves will have to make those decisions.

As an example, a senior citizen steps into the roadway at an intersection, against a red light, and the vehicle is too close to avoid hitting the pedestrian simply by braking. What to do?

The choices are to maintain course and brake as hard as possible to mitigate the extent of injury to the pedestrian; to brake and swerve into oncoming traffic, where a collision with an oncoming vehicle carrying an unknown number of occupants is inevitable; or to brake and swerve the other way, striking a light standard in a crash mode that will endanger the young child who shouldn’t be, but is, sitting in the front passenger seat.
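Purely as an illustration of what such a split-second decision amounts to in software terms, here is a minimal sketch in Python of framing those three options as a choice of the maneuver with the lowest expected harm. Everything in it (the maneuver labels, the probabilities, the severity scores) is an assumption invented for this example, not anything drawn from a real AV system.

```python
# Hypothetical sketch only: the dilemma framed as picking the maneuver
# with the lowest expected harm. All labels and numbers are invented.

MANEUVERS = {
    # maneuver: who is put at risk, chance of impact, severity of likely injury
    "brake_and_hold_course": {"at_risk": "pedestrian",         "p_impact": 0.9, "severity": 0.7},
    "brake_swerve_oncoming": {"at_risk": "oncoming occupants", "p_impact": 1.0, "severity": 0.8},
    "brake_swerve_roadside": {"at_risk": "child passenger",    "p_impact": 1.0, "severity": 0.6},
}

def expected_harm(outcome: dict) -> float:
    """Expected harm = probability of impact x severity of likely injury."""
    return outcome["p_impact"] * outcome["severity"]

def choose_maneuver(maneuvers: dict) -> str:
    """Return the maneuver whose expected harm to humans is lowest."""
    return min(maneuvers, key=lambda name: expected_harm(maneuvers[name]))

print(choose_maneuver(MANEUVERS))  # least-harm option for these invented numbers
```

The minimization itself is trivial; deciding what the severity numbers should be, and who gets to set them, is the ethical question.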

Such decisions go way beyond the simplistic proscription of Isaac Asimov’s First Law of Robotics: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” In such a case, the probability is that at least one human will be hurt, if not killed, whatever choice is made.

What should the vehicle do? What would you do?

So far, Germany is the only jurisdiction to have substantively addressed such questions, publishing an Ethics Code for Automated and Connected Driving in the fall of 2017. To its credit, it includes a section on “dilemma situations, in other words a situation in which an automated vehicle has to decide which of two evils, between which there can be no trade-off, it necessarily has to perform.”

The guidelines in that regard are limited to a broad-based directive that preventing or limiting damage to humans takes priority over limiting damage to property and animals.
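In software terms, that directive describes a strict ordering rather than a trade-off. A minimal, hypothetical sketch of such an ordering might look like the following; the class and its fields are my own assumptions for illustration, not anything taken from the German code.

```python
# Hypothetical sketch of the German guideline's ordering: any reduction in
# harm to humans outweighs any amount of damage to animals or property.
from dataclasses import dataclass

@dataclass
class Outcome:
    human_harm: float       # 0.0 (none) to 1.0 (fatal)
    animal_harm: float
    property_damage: float

def priority_key(o: Outcome) -> tuple:
    """Lexicographic key: human harm is compared first; animal harm and
    property damage only break ties."""
    return (o.human_harm, o.animal_harm, o.property_damage)

def least_bad(outcomes: list) -> Outcome:
    """Pick the outcome that is least bad under the strict ordering."""
    return min(outcomes, key=priority_key)
```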

To broaden the discussion and the base of data on the subject, in 2016 a group of researchers initiated a study dubbed the “Moral Machine” via an online scientific survey hosted by the Massachusetts Institute of Technology (MIT). It has since gathered 40 million “decisions” in ten languages from millions of people in 233 countries and territories around the world.

Through a series of 13 theoretical “accident” scenarios, involving pedestrians and animals on the road, the Moral Machine explored decisions on nine factors: sparing humans (versus pets), staying on course (versus swerving), sparing passengers (versus pedestrians), sparing more lives (versus fewer lives), sparing men (versus women), sparing the young (versus the elderly), sparing pedestrians who cross legally (versus jaywalking), sparing the fit (versus the less fit), and sparing those with higher social status (versus lower social status).
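To make the study’s structure a little more concrete, here is one rough, hypothetical way a single Moral Machine “decision” could be recorded, and the roughly 40 million of them aggregated; the field names are shorthand for the nine factors above, not the study’s actual data format.

```python
from dataclasses import dataclass, fields

# Hypothetical encoding of one Moral Machine "decision": which side of each
# of the nine comparisons the respondent chose to spare. Field names are
# shorthand for illustration, not the study's real schema.

@dataclass
class Decision:
    country: str
    spared_humans_over_pets: bool
    stayed_on_course: bool
    spared_passengers_over_pedestrians: bool
    spared_more_lives: bool
    spared_men_over_women: bool
    spared_young_over_elderly: bool
    spared_lawful_over_jaywalkers: bool
    spared_fit_over_less_fit: bool
    spared_higher_status: bool

def preference_rates(decisions: list) -> dict:
    """For each factor, the fraction of decisions that spared the
    first-named group -- a crude stand-in for the study's analysis."""
    n = len(decisions)
    return {
        f.name: sum(getattr(d, f.name) for d in decisions) / n
        for f in fields(Decision) if f.name != "country"
    }
```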

The results of that study, so far, were released recently in the science journal Nature, and they include some interesting conclusions. On average, worldwide, people chose to spare human lives over animals, to save more lives over fewer, and to prioritize young people over old ones.

Beyond those three basic points, however, there was considerable divergence, as the preferences revealed by the Moral Machine were highly correlated with cultural and economic variations between countries. For example, in some cultures there was a bias toward sparing males over females, and even the choice of humans over animals was not universal, cows being sacred to some.

One conclusion that could result from such a study is that some ethical decisions may have to be programmed differently for autonomous vehicles sold in different regions.
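If that conclusion were ever acted on, one plausible, and again purely hypothetical, shape for it would be a per-region table of preference weights selected for the market a vehicle is sold into; the region labels and numbers below are invented.

```python
# Purely hypothetical: the same decision weights, keyed by sales region.
# Region names and numbers are invented for illustration.

REGIONAL_ETHICS_PROFILES = {
    "region_a": {"spared_young_over_elderly": 0.49, "spared_lawful_over_jaywalkers": 0.35},
    "region_b": {"spared_young_over_elderly": 0.21, "spared_lawful_over_jaywalkers": 0.61},
}

DEFAULT_PROFILE = {"spared_young_over_elderly": 0.5, "spared_lawful_over_jaywalkers": 0.5}

def load_ethics_profile(region: str) -> dict:
    """Return the weight table for the region the vehicle is sold in,
    falling back to a neutral default for unknown regions."""
    return REGIONAL_ETHICS_PROFILES.get(region, DEFAULT_PROFILE)
```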

It’s a serious matter that goes well beyond the everyday judgements routinely made by engineers and programmers. As the study report states, “Never before, in the history of humanity, have we allowed a machine to autonomously decide who should live and who should die, in a fraction of a second.” We’re at that point now.

You can compare your own ethical decisions with those of the masses by taking the survey at MoralMachine.mit.edu.

About Gerry Malloy

Gerry Malloy is one of Canada's best-known, award-winning automotive journalists.
