Driverless cars will not be programmed to kill

Friday, November 4, 2016
Technology
Michael Haines
VANZI

There is a lot of anxiety being stirred up by claims that driverless cars will be programmed to ‘kill you’. The rationale is that ‘they may have to make a choice between killing you or a group of kids’.

The fact is, no car will be programmed to kill anyone.

Around 90% of accidents are caused by human error: breaking the law, driving while incapacitated, driving carelessly or aggressively, or becoming distracted; things a driverless car will never be programmed to do.

The hard part is dealing with situations where others are at fault.  In this case, if it is not possible to save everyone, who should programmers be required to protect?

Say a person steps out in front of you.  No one expects you to save them by killing yourself.

What if you have two kids in the car with you? Do you kill them to save a group of three kids on the side of the road?

This article argues that a car in autonomous mode must do at least as well as a human in responding to an accident, without making moral judgements about the relative merits of killing different sets of people, and that the guidelines for programmers (and juries) must be very clear.

The following is a first attempt.

The Guide

While a car is fully in control of steering, accelerating and braking, in the event another party fails to obey the law (eg by moving into the path of the car), the car must be programmed to:

  1. Avoid third-party property damage and harm to animals.
    If that is unlikely, take a path that is likely to
  2. Avoid any injury, at the expense of damaging property and harming animals.
    If that is unlikely, take a path that is likely to
  3. Avoid major injury to others, even at the risk of minor injury to the occupants.
    If that is unlikely, take a path that is likely to
  4. Avoid major injury to occupants.

With these guidelines in place, we should expect the car to be able to make an ‘instantaneous’ assessment of likely damage and injury based on the balance of probabilities, such that the outcome would be no worse than a human driver could achieve in the same circumstances.
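
To make the cascade concrete, here is a minimal sketch in Python of how a planner might choose among candidate paths using these tiers. The names, the toy probability model and the threshold for ‘likely’ are illustrative assumptions, not a description of any production system.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """Predicted consequences of following one candidate path (probabilities 0..1)."""
    property_damage: float         # third-party property damage or harm to animals
    any_injury: float              # any injury to any person
    major_injury_others: float     # major injury to third parties
    major_injury_occupants: float  # major injury to the occupants

LIKELY = 0.5  # assumed threshold for "likely to succeed"

def chance_of_meeting(outcome, tier):
    """Probability that a candidate path meets the requirement of a given tier."""
    if tier == 1:  # avoid property damage and harm to animals (and all injury)
        return (1 - outcome.property_damage) * (1 - outcome.any_injury)
    if tier == 2:  # avoid any injury, at the expense of property and animals
        return 1 - outcome.any_injury
    if tier == 3:  # avoid major injury to others, accepting minor injury to occupants
        return 1 - outcome.major_injury_others
    return 1 - outcome.major_injury_occupants  # tier 4: avoid major injury to occupants

def choose_path(candidates):
    """Work down the tiers; at the first tier some path is likely to meet,
    take the path most likely to meet it."""
    for tier in (1, 2, 3, 4):
        feasible = [(p, o) for p, o in candidates
                    if chance_of_meeting(o, tier) >= LIKELY]
        if feasible:
            return max(feasible, key=lambda po: chance_of_meeting(po[1], tier))[0]
    # No tier is likely to be met: fall back to the least-bad final tier.
    return max(candidates, key=lambda po: chance_of_meeting(po[1], 4))[0]

# Toy example: braking in a straight line probably injures a pedestrian who
# stepped out; swerving probably only damages a parked car.
paths = [
    ("brake_straight", Outcome(0.0, 0.9, 0.6, 0.1)),
    ("swerve_left",    Outcome(0.8, 0.2, 0.1, 0.1)),
]
print(choose_path(paths))  # -> swerve_left (tier 2: injury avoided at the cost of property)
```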

Calculating the Safest Route

Generally, it will require:

  1. The car to understand its own performance, given its velocity, the direction of its wheels and road conditions (eg how sharply it can turn and still retain traction, or brake without skidding, etc), so as to calculate its ‘arc of response’.
  2. Awareness of road conditions, eg pavement or gravel.
  3. Awareness (from comprehensive 3D models) of the topography and built environment.
  4. The ability to identify objects, eg cars, trucks, a person (on foot, on a bike, or on a motorbike), an animal, etc.
  5. Assessment of distances and velocities of objects that are likely to intersect its ‘arc of response’.
  6. Assessment of others’ ability to take evasive action (very uncertain).
  7. Assessment of the likely harm to occupants given on-board safety equipment and the different impact scenarios (hitting different objects).
  8. The ability to assess the likely harm to third parties given the different impact scenarios.

The assessments of ‘harm’ will need to be general, based on published guides to the evidence of harm in motor vehicle accidents, given the momentum of the car.

The ‘arc of response’ is the span within which the car can steer a path (that may or may not be a straight line), in an attempt to minimise harm in accord with the guidelines.
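
As a rough illustration of item 1, the bounds involved are standard kinematics: given its speed and an assumed tyre-road friction coefficient, the car can estimate how quickly it can stop and how tightly it can turn without losing traction. A minimal sketch, with the friction values assumed purely for illustration:

```python
G = 9.81  # gravitational acceleration, m/s^2

def braking_distance_m(speed_ms, friction):
    """Shortest straight-line stop without skidding: v^2 / (2 * mu * g)."""
    return speed_ms ** 2 / (2 * friction * G)

def min_turn_radius_m(speed_ms, friction):
    """Tightest curve that keeps lateral acceleration within the available grip:
    v^2 / (mu * g)."""
    return speed_ms ** 2 / (friction * G)

# Illustrative figures only: assumed friction of ~0.8 on dry pavement, ~0.4 on gravel.
speed = 50 / 3.6  # 50 km/h expressed in m/s
for surface, mu in (("dry pavement", 0.8), ("gravel", 0.4)):
    print(f"{surface}: stop in ~{braking_distance_m(speed, mu):.0f} m, "
          f"minimum turn radius ~{min_turn_radius_m(speed, mu):.0f} m")
```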

We also have to recognise that when someone else causes an accident, the other people involved struggle to minimise harm to themselves and others. It will be the same with driverless cars, though they should do better than people, simply because they will be driving more defensively to start with, giving them more time to react. They should also have much better awareness of their own capabilities, of objects in their arc of response, and of how they are moving in 3D space.

All we are looking for is a ‘reasonable’ response that is appropriate to the threat. For example, based on the guidelines, the car should not swerve into oncoming traffic to avoid a dog, which is what some people have done, killing themselves and/or others in the process.

Apportioning Blame using Simulation

A jury would have to consider what is ‘reasonable’: how well the car met the guidelines in a specific set of circumstances, given the current state of the technology.

This could easily be tested by having a large group of skilled drivers operate a simulation of the accident (based on performance data and video from the cars involved). If, say, 80% of drivers succeed in doing better, there would be prima facie evidence that the car was not up to the task.
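
A minimal sketch of how that test might be scored, with the harm scores invented purely for illustration and the 80% threshold taken from the example above:

```python
def prima_facie_failure(car_harm, human_harms, threshold=0.8):
    """True if at least `threshold` of the simulated human drivers achieved a
    better (lower) harm score than the car did in the same circumstances."""
    better = sum(1 for h in human_harms if h < car_harm)
    return better / len(human_harms) >= threshold

# Toy harm scores (lower is better) for the car and ten simulated skilled drivers.
car_score = 7.0
human_scores = [5.1, 6.0, 4.8, 6.5, 7.2, 5.5, 6.8, 5.0, 6.1, 7.5]
print(prima_facie_failure(car_score, human_scores))  # 8 of 10 did better -> True
```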

Then it would be necessary to determine if the failure to meet these criteria was due to negligence (eg poor programming, or faulty sensors), in which case the manufacturer would be liable.

Manufacturers can guard against poor maintenance by having the car refuse to activate autonomous mode if it has not been properly maintained and/or automatic tests show any equipment is faulty.
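
A minimal sketch of that gating logic, with the self-test names and results invented for illustration:

```python
def autonomous_mode_permitted(service_up_to_date, self_test_results):
    """Allow autonomous mode only if the car has been properly maintained and
    every automatic self-test has passed."""
    return service_up_to_date and all(self_test_results.values())

# Hypothetical self-test results: one faulty item is enough to refuse the mode.
print(autonomous_mode_permitted(True, {"lidar": True, "cameras": True, "brakes": False}))  # False
```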

Spoofing and hacking will also be issues. Here the manufacturer will have to show that they are using appropriate protections and that, overall, the car is still much safer, even allowing for the specific ‘hack’… though this will be easier said than done!

Where the fault is due to factors that could not be reasonably foreseen, or could not be reasonably managed given the state of technology, we have to accept that life is a risky business and sometimes accidents happen that cannot be avoided.

Continuous Improvement

Even now, we accept that cars cause accidents due to poor design, manufacture or maintenance.  We don't stop using them.  We just work to make them better.  The same philosophy should apply to driverless cars, knowing that the technology should be an order of magnitude better (in terms of basic safety, and in monitoring maintenance and its own operation).

If there are improvements identified following an accident, then the software of all similar driverless cars can be updated to perform better in similar circumstances. Another win for driverless cars, since, as humans, we only learn from our own mistakes (if we learn at all)!

A Kill Switch in Each Car

For those still worried about the ethics, such a switch could make the car prioritise the lives of third parties over the occupants. My guess is that few would push it, which suggests the guide above is appropriate.

Mobility as a Service

Ultimately, ‘door to door’ transport will no longer be restricted to those with a licence (or who can beg a lift). It will transform how cities work, virtually eliminating accidents and congestion and reducing pollution in the process, as well as encouraging walking and freeing up space now used to park cars. And, of course, giving us back all the time now spent hugging the wheel.

Road by road, city by city, from fair weather to foul, we will expand the benefit of driverless cars - if we don't make the mistake of demonising them as killing machines.
