An interesting dilemma has arisen in the development of connected and autonomous vehicles (“CAV”). In its report, “Connected and Autonomous Vehicles: The future?”, the House of Lords Science and Technology Select Committee considered the moral dilemma that arises when a fatal collision is imminent and the CAV must, based on its prior programming, choose whom to save.

This scenario can be likened to the “trolley problem”, developed by the British philosopher Philippa Foot in 1967. In this scenario, a train is hurtling down a track towards five people; you can pull a lever to divert it onto another track on which one person stands. Do you intervene, diverting the train to kill the one person and save the five? Or do you leave the lever alone, sparing the one person while the five are killed?

In the context of CAV, the question becomes whether an autonomous vehicle should act to protect its passengers or an external party, such as a pedestrian.

One route to overcoming this issue could be a code of practice or a set of internationally recognised standards and norms. One witness, Professor David Lane, told the Committee that:

“Engineers alone should not be left to programme behaviours into robots that cross ethical boundaries… A culture of ethical concern should be encouraged across the international research and development community. This requires an international effort and the evolution of ethical councils to provide the reference guidelines and standards”.

Others have disagreed with this approach, asking how the “lesser” of two evils can be determined in the first place. Further written evidence received by the Committee argued that, before settling on a philosophical definition of human judgment and measuring algorithms against it, research is needed into what human drivers actually do in an emergency.

A further option is “machine learning”, a form of artificial intelligence which allows computers to learn from examples rather than follow step-by-step instructions. This would follow on naturally from the evidence received by the Committee that research is needed into what humans actually do in an emergency: the vehicle could operate on algorithms trained on those human results.
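By way of illustration only, the sketch below (in Python, using the scikit-learn library) shows how a simple classifier can learn a response from recorded examples instead of hand-written rules. Every feature, data point and action label here is hypothetical, invented for this example; it is a toy demonstration of the technique, not a proposal for how a CAV should actually be programmed.

```python
# A minimal sketch of learning driving responses from examples rather than
# explicit rules. All features, data and action labels are hypothetical,
# invented purely for illustration; a real system would be trained on large
# volumes of recorded human driving behaviour.
from sklearn.tree import DecisionTreeClassifier

# Each example: [obstacle_distance_m, speed_mph, pedestrian_present]
# The label is the action a human driver took in that recorded emergency.
examples = [
    [40.0, 30, 0],  # distant obstacle, no pedestrian
    [10.0, 30, 1],  # pedestrian close ahead
    [15.0, 60, 0],  # fast approach, clear verge
    [5.0, 20, 1],   # pedestrian very close, low speed
]
actions = ["brake", "swerve", "swerve", "brake"]

model = DecisionTreeClassifier(random_state=0).fit(examples, actions)

# The model generalises from the recorded examples to a new situation for
# which no engineer has written an explicit rule.
print(model.predict([[12.0, 40, 1]]))
```

In practice, the “examples” would come from exactly the kind of research into real driver behaviour that the Committee’s written evidence calls for.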

Clearly these ethical implications need to be considered further, and a regulatory framework developed to deal with these difficult scenarios. We must remember, however, that risk on the road is highly unlikely ever to be eradicated. With 25,160 people killed or seriously injured on the road in the year ending September 2016, automated vehicles could still prove to be a safer way to travel in the future.

Kate is a solicitor at Stephens Scown who champions electric vehicles and alternative fuels. To contact Kate or the energy team, please call 01392 210700 or email energy@stephens-scown.co.uk.