Trolley problems: AI hype has roared ahead of self-driving cars

Hands off the wheel

It’s not unreasonable to have expected major breakthroughs in self-driving cars (autonomous vehicles, or AVs) in the same year that AI “went mainstream”; they are, of course, deeply intertwined technologies. Indeed, the only way we’re going to get the ability to sit back while our car drives us safely to our destination is through the development of AI, along with an array of sensors, cameras, and radar systems.

But it may feel like progress has stalled in recent years. Indeed, just this week, AV giant Cruise began laying off the contingent workers who supported its driverless fleet. The layoffs follow a recall of 950 of its self-driving vehicles in response to an October incident in which a pedestrian was struck by a human-driven car, thrown into the path of a Cruise AV, and then dragged some 20 feet as the self-driving vehicle attempted to pull over.

The word “recall” in this context actually refers to a software update, which the company says would have kept the Cruise AV stationary after the impact had it been in effect. Even so, the California DMV had already suspended Cruise’s driverless permits in light of the October crash.

The logic of the Cruise vehicle was to get out of the way of oncoming traffic, presumably to avoid another potential accident. With a pedestrian entangled with the vehicle, however, that is clearly not the “desired post-collision response”.
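For a concrete sense of what a “desired post-collision response” might mean in software, here is a minimal sketch in Python. Every name, state, and behavior below is an invented assumption for illustration; Cruise’s actual code is not public.

```python
# Hypothetical sketch of post-collision response logic. All names and
# states are invented for illustration; this is not Cruise's software.

from dataclasses import dataclass


@dataclass
class CollisionState:
    pedestrian_may_be_entangled: bool  # person possibly under or against the car
    blocking_live_lane: bool           # vehicle stopped in an active traffic lane


def post_collision_response(state: CollisionState) -> str:
    """Choose what the vehicle should do immediately after an impact."""
    # The behavior the recall's software update is described as adding:
    # if a person could be entangled with the vehicle, any movement
    # risks dragging them, so the car should stay put and call for help.
    if state.pedestrian_may_be_entangled:
        return "remain stationary, activate hazards, alert remote support"
    # Otherwise, clearing a live lane may reduce the risk of a
    # secondary collision with oncoming traffic.
    if state.blocking_live_lane:
        return "pull over slowly to the nearest safe stopping point"
    return "remain stationary"
```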

Trolley problems

Although in this case there seems to be room for improved software, the incident raises the classic ethical dilemma for AV scientists and researchers: in the event of an unavoidable accident, how should software “prioritize” the safety of humans? Should it first check whether anyone who looks like a child, or a vulnerable individual, is at risk, and prioritize their safety? Should it prioritize the passengers in the car, or people outside the vehicle? How should it “think” about animals: is it okay to swerve to avoid a cat running across a road if doing so creates a 1% chance of causing an accident? What about a 5% chance? And how long should the software take to work all of this out when it needs to act quickly?
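To make the animal tradeoff concrete, here is a small expected-harm comparison with invented numbers. Real AV systems do not publish (and may not even use) explicit harm weights like these; the point is only that some weighting is implicit in any choice the software makes.

```python
# Hypothetical expected-harm comparison for the "swerve for a cat"
# question. Harm weights and probabilities are invented for illustration.

HARM_CAT_STRUCK = 1.0         # arbitrary unit of harm
HARM_VEHICLE_ACCIDENT = 50.0  # an accident weighted 50x a struck cat


def should_swerve(p_accident_if_swerve: float) -> bool:
    """Swerve only if the expected harm of swerving is lower than staying."""
    harm_if_stay = HARM_CAT_STRUCK                        # the cat is hit
    harm_if_swerve = p_accident_if_swerve * HARM_VEHICLE_ACCIDENT
    return harm_if_swerve < harm_if_stay


for p in (0.01, 0.05):
    print(f"accident risk {p:.0%}: swerve = {should_swerve(p)}")
# With these arbitrary weights, a 1% risk says swerve (0.5 < 1.0) while
# a 5% risk says stay (2.5 > 1.0). The hard part is that there is no
# principled, agreed-upon source for the weights themselves.
```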

These questions, part of a wider set of thought experiments often referred to as “trolley problems”, clearly don’t have perfect answers, and they remain a sticking point for engineers, designers... and the wider public. Indeed, a survey from Pew Research Center found that, when asked whether self-driving cars should prioritize those in or out of the vehicle, a whopping 41% said they weren’t sure, 40% said the passengers of the vehicle itself, and 18% said those outside of the car.
