The Google/Volvo Maru

This morning, a friend of mine posted a fun little item to a Facebook group that I manage for the friends and alumni of my soon-to-be-former academic department. It’s a nice little story from the University of Alabama at Birmingham’s site, promoting their National Championship-winning Bioethics Bowl team by talking about a recent Trolley Problem variant: the Smart Car or Self-Driving Car Problem. This is, in its way, a version of trolley-related problem solving with much more obvious practical import: while it is entirely unlikely that most people will ever engage in the signature form of philosophical violence (pushing a fat man to his death to stop a runaway trolley), Google’s self-driving cars are already on the streets (in small numbers), and Volvo’s also gotten in on the driverless car action.

Obviously, folks have a lot to say already about smart car safety and ethical concerns (see, for example, this nice piece from The Atlantic from a while back). The ethicists are concerned with how the smart car version of the problem affects theoretical commitments. Other philosophers are interested in how developments in this technology bear on arguments about artificial intelligence.* Engineering students get to worry about risk management for smart car design, and programmers get to worry about the same thing, both mostly working out their solutions in a legal and practical rather than strictly philosophical context. The full spectrum of problems (legal, ethical, practical) is fairly easy to see.

What interests me about the Smart Car Problem (let’s just call it that for now) is the location of the line it forces us to draw between theoretical arguments and practical ones relative to actual implementation issues with technology. Where the Trolley Problem is mostly a test for working out the nature of our ethical commitments, the Smart Car Problem already operates in a world that assumes (for legal and engineering purposes) a consequentialist approach to ethical reasoning. What I mean is this: for the engineer or programmer, a given solution to the Smart Car Problem does not lead to an analysis of ethical reasoning; it leads to potential lawsuits and/or jail time in a world in which harm-minimization is already the rule in place. For practical purposes, the Smart Car Problem isn’t about how we arrive at or test our judgments; it’s about how we generate a system that survives scrutiny relative to a value system that is already well-established and enshrined in law, and a set of conditions that have to be dealt with regardless of that value system (i.e., physics).

Un-winnable scenarios pose a special engineering and programming problem in that context (as the Atlantic piece nicely points out). Planning for each and every scenario is quite impossible for beings who don’t happen to be omniscient, which means that whatever solution the creators of such vehicles arrive at will probably involve a set of decision guidelines with valuations for weighting factors against each other (which poses its own significant programming risks). Given any valuation scheme of that kind, a scenario may be ethically un-winnable while being physically successful (or vice versa), depending on where ethical and engineering priorities overlap, where they diverge, and which is given precedence in cases of divergence. It is not impossible to imagine, for example, that lawmakers might require smart car manufacturers to privilege public safety over individual safety in such a way that harm to individuals is entirely foreseeable as part of minimizing broader harm on crowded highways.
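To make the worry a bit more concrete, here is a minimal, purely hypothetical sketch (in Python) of what such a valuation scheme might look like: each candidate maneuver gets a weighted harm score, and the system simply picks the lowest-cost option, even when every option is bad. The factor names, weights, and numbers are all invented for illustration; nothing here reflects any actual manufacturer’s design.

```python
# A purely hypothetical sketch of a weighted "valuation scheme" for an
# automated vehicle choosing among candidate maneuvers. All factor names,
# weights, and numbers are invented for illustration only.

from dataclasses import dataclass


@dataclass
class Maneuver:
    name: str
    occupant_harm: float   # expected harm to the car's own occupants (0-1)
    public_harm: float     # expected harm to others on the road (0-1)
    crash_energy: float    # purely physical cost, e.g. severity of impact (0-1)


# Hypothetical weights. A regulator privileging public safety over individual
# safety would push PUBLIC_WEIGHT well above OCCUPANT_WEIGHT.
OCCUPANT_WEIGHT = 1.0
PUBLIC_WEIGHT = 2.0
PHYSICS_WEIGHT = 0.5


def total_cost(m: Maneuver) -> float:
    """Score a maneuver; lower is 'better' under this (contestable) scheme."""
    return (OCCUPANT_WEIGHT * m.occupant_harm
            + PUBLIC_WEIGHT * m.public_harm
            + PHYSICS_WEIGHT * m.crash_energy)


def choose(maneuvers: list[Maneuver]) -> Maneuver:
    # Harm-minimization as the rule in place: pick the lowest weighted cost,
    # even in an "un-winnable" scenario where every cost is high.
    return min(maneuvers, key=total_cost)


if __name__ == "__main__":
    options = [
        Maneuver("brake straight", occupant_harm=0.2, public_harm=0.6, crash_energy=0.7),
        Maneuver("swerve into barrier", occupant_harm=0.8, public_harm=0.1, crash_energy=0.9),
    ]
    best = choose(options)
    print(f"Selected: {best.name} (cost {total_cost(best):.2f})")
```

The philosophically (and legally) loaded part, of course, isn’t the `min()` call; it’s who gets to set the weights, and on what authority.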

If I were a better writer of fiction, I’d want to come up with a story about a smart car programmer whose job it is to use insurance adjusters’ tables to work out the decision design for an AI that becomes sentient and argues about the valuation scheme.


* I’ll leave aside for now the actually interesting question of whether it’s ethically acceptable to expect an AI not to prioritize its own survival. The Knight Rider writers/creators were wise to put their fictional AI in a very nearly indestructible and impossibly well-controlled vehicle, filmed on some of the emptiest streets and back roads in California.

UPDATE: For those who might find this both on-point and a bit amusing: “Humans Can’t Stop Crashing Into Google’s Driverless Cars.”

