
The compromise of the machines

To make artificial intelligence work, we’ll need to tweak our model of democracy, says Harry Farmer.


If climate change is to be the defining challenge of the coming decades, then artificial intelligence, so runs the cliché, is the defining opportunity.

While the debates around the social and economic consequences of an automated world are fascinating and deserve our attention, they have a tendency to obscure some of the more immediate questions standing in the way of the wide-scale adoption of AI.

One of the thorniest of these concerns the nuances of how we want AI to behave: if we are soon to live in smart cities, amongst driverless cars, drones and automated assistants, we urgently need to decide how we want the underlying AI to respond to morally challenging situations. This is the problem of machine ethics.

Leaving aside the intrinsic difficulty of this question, a big challenge here is working out how morally pluralistic societies like ours can cope with the need to develop and tolerate a single answer to it. This is a deceptively important obstacle: if society can’t find a way to agree on machine ethics, then we’ll either have to give up on AI or, far more likely, resign ourselves to a world in which the ethics of our automated surroundings is set by individuals and, probably more commonly, corporations.

Driverless cars and trolley problems

Perhaps one of the best illustrations of the machine ethics problem and how it might manifest itself comes from driverless cars’ need to confront ‘trolley problems’ – a class of thought experiments designed to tease out people’s feelings about ethical dilemmas.

In the original trolley problem, an out-of-control railway carriage is hurtling down a track towards five trapped people. You are standing at the points and must choose whether to divert the carriage, sending it to crush a person on the other fork of the track, or to let it run its original course and flatten the five. The two options correspond roughly to two approaches to moral reasoning: consequentialism, which would have you divert the tracks for the greater good, and deontology, which would have you remain idle, on the grounds that doing harm is worse than allowing it.

Think for any amount of time about how to programme a driverless car and you’ll run into trolley problems. Take a scenario in which a child runs out in front of a vehicle, which is going too quickly to stop. Should the car be programmed to swerve to avoid the child, thereby killing its occupant, or to apply the brakes, hitting the child but saving the passenger? As there are almost endless variations of these kinds of scenarios, and no limit to the number of factors that could potentially be taken into account, the car’s AI needs to have a general set of moral principles to apply to novel scenarios.
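To make the distinction concrete, here is a deliberately simplified sketch in Python of how two rival moral principles might be hard-coded as decision rules for the swerve-or-brake dilemma above. Every name and number in it is hypothetical; it is not drawn from any real autonomous-driving system.

```python
# A hypothetical sketch, not any real autonomous-driving API: two rival moral
# principles encoded as decision rules over the same set of possible outcomes.

from dataclasses import dataclass

@dataclass
class Outcome:
    action: str                 # e.g. "swerve" or "brake"
    expected_casualties: float  # estimated harm if this action is taken
    harm_is_inflicted: bool     # True if the action actively causes the harm

def consequentialist_choice(outcomes: list[Outcome]) -> Outcome:
    # Consequentialism: minimise expected harm, regardless of whether the
    # harm is actively caused or merely allowed to happen.
    return min(outcomes, key=lambda o: o.expected_casualties)

def deontological_choice(outcomes: list[Outcome]) -> Outcome:
    # Deontology (crudely encoded): prefer actions that do not actively
    # inflict harm, even if more harm is allowed to happen as a result.
    permitted = [o for o in outcomes if not o.harm_is_inflicted]
    return min(permitted or outcomes, key=lambda o: o.expected_casualties)

scenario = [
    # Swerving actively endangers the occupant, but the estimated harm is lower.
    Outcome("swerve", expected_casualties=0.4, harm_is_inflicted=True),
    # Braking merely fails to prevent the collision, but the estimated harm is higher.
    Outcome("brake", expected_casualties=0.9, harm_is_inflicted=False),
]

print(consequentialist_choice(scenario).action)  # "swerve"
print(deontological_choice(scenario).action)     # "brake"
```

The point of the sketch is not the numbers but the structure: whichever rule is written into the car, somebody has to choose it in advance, and that choice then applies to every scenario the car will ever meet.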

Who should teach my car ethics?

As the adoption of driverless cars looms ever closer, the need to decide how AI should respond to these kinds of problems becomes ever more pressing.

It looks unlikely that humans can avoid explicitly teaching AI ethical principles. While machine learning enables computers to work out how to accomplish a specified task or to enact abstract principles, this is of no help to the machine ethics problem, where it is the abstract principles themselves that need to be decided upon.

A theoretically possible approach is to get AI to infer moral principles from humans’ actual moral judgements. The danger here is that, by learning from humans, AI develops moral principles that reflect our prejudices and inconsistencies just as much as our considered moral judgements. Given this, it is far safer to have AI do as we say rather than as we do.
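To illustrate the worry, here is a toy sketch, using scikit-learn with entirely invented features and labels, of what can go wrong when a model is fitted to human verdicts: if a morally irrelevant attribute happens to correlate with those verdicts, the model dutifully learns to use it.

```python
# A toy illustration (no real dataset or product) of learning moral judgements
# from human labels. All feature names and data are made up for the example.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [people_saved_by_swerving, people_harmed_by_swerving, pedestrian_looks_wealthy]
# The third column is morally irrelevant, but imagine it leaks into the human labels.
X = np.array([
    [3, 1, 1], [3, 1, 0], [2, 1, 1], [2, 1, 0],
    [1, 1, 1], [1, 1, 0], [1, 2, 1], [1, 2, 0],
])
# Human verdicts: 1 = "the car should swerve". Note the bias: with identical
# harm trade-offs, the verdicts flip depending on the irrelevant third feature.
y = np.array([1, 1, 1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)
print(dict(zip(["saved", "harmed", "looks_wealthy"], model.coef_[0].round(2))))
# The learned weights give the irrelevant feature real influence: the model has
# absorbed the prejudice in the labels, not a defensible moral principle.
```

The model is doing exactly what it was asked to do; the trouble is that the training signal encodes the prejudice just as faithfully as it encodes the principle.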

Slightly more promisingly, some ethicists have mooted a future in which individuals are free to choose the moral frameworks of the AI with which they interact. In such scenarios, a person might be able to select the ethical principles of the car they buy, or to set those of a taxi before each journey.

Ignoring the hugely complicated question of if and how to regulate the moral frameworks people could choose from, allowing people to pick the morals of their cars would expose them to an unworkable level of responsibility, given how little de facto understanding they would have of the outcomes.

Because people’s moral intuitions don’t correspond neatly to sets of abstract moral principles, it’s hard for someone to be confident that the morals they have chosen for their car will result in outcomes with which they always agree. You might think that you’re happy with saving the greatest number of people in all circumstances, but you could well be presented with an application of that principle with which you profoundly disagree.

In asking people to choose their cars’ morals, we would be asking them to commit to, and bear responsibility for, actions they can neither fully predict nor judge in advance.

If the moral frameworks of our cars need to be decided upon explicitly, it seems far better for the responsibility to be shouldered collectively, by having all cars operate according to the same set of moral principles.

Trolley problems derail democracy

While it may be the only way to protect individuals from unfair levels of moral responsibility, the need to decide on a single set of moral principles for AI puts liberal democracies in a very awkward position.

A key feature of liberal democracies is their reluctance to impose any set of fundamental moral values on their citizens. This neutrality is what allows groups with differing conceptions of right and wrong to all feel represented by the same laws and institutions.

This arrangement works because liberal democratic politics aspires to settle questions of policy, rather than values. Because people with fundamentally different moral worldviews are often able to agree on practical questions of policy – endorsing the same positions for different reasons – society is able to develop rules that everyone can get behind. In such cases, the agreed upon policies can’t be said to be motivated by any particular ideological position. As a result, there is rarely the feeling that the majority has imposed its moral worldview on everyone else.

Even in the case of particularly morally charged policy debates, liberal democracies can avoid taking sides. When the opposing moral outlooks on an issue are irreconcilable, policy can generally be developed that manages to please nobody – and thus avoids the charge of having favoured one side over the other. In the rare cases where a public policy does appear to be aligned with a particular ideological stance, governments are always keen to stress the pragmatic (universally acceptable) justifications for the position, rather than those based on particular moral outlooks.

The difficulty with the machine ethics problem is that there is no way to divorce the policy question from the moral question: To decide on the ethical framework to programme into AI is to decide what ethical principles are best. It’s hard to see how a liberal democracy could require a uniform approach to machine ethics without surrendering its neutrality on fundamental values.

The deliberative turn

In developing a response to the machine ethics problem, there is probably no easy way for liberal democracies to escape favouring one moral outlook over others. But if society can’t avoid imposing a single moral framework on a morally diverse population, it can at least work to build more consensus about which framework to pick.

With their emphasis on elections and voting, our current democratic structures don’t put much pressure on citizens to come to agreement on divisive questions. Opinions are formed largely in private and then registered in the secrecy of the voting booth.

One way around this problem might be to borrow techniques from an alternative model. In particular, deliberative democratic processes, in which citizens are brought together and given the time and information to develop a considered, collective judgement on a given question, seem far more conducive to building consensus.

When applied to the problem of machine ethics, a deliberative process like a citizens’ assembly could have two clear advantages:

Firstly, it could well make it easier for people to agree on which moral framework to choose. Indeed, there is evidence to suggest that at least some of people’s differences in moral outlook rest on differences in emotional responses to scenarios, and that these differences tend to matter less once they are brought into the open. When forced to discuss and agree upon a set of moral principles in a deliberative setting, people may become less sensitive to these factors, making it easier for them to find common ground.

Secondly, it would make the chosen moral framework feel less alien to those who wouldn’t initially have agreed with it. There is a far greater sense of ownership of a conclusion reached deliberatively than of one that is simply voted on. While the result of a ballot can often seem like nothing but the imposition of the will of the biggest single group on everyone else, the conclusion of a deliberative discussion is something for which everyone involved was responsible.

From both sides of the political spectrum, the machine ethics problem looks challenging. On the one hand, a laissez-faire approach condemns society to a free-for-all in which AI operates according to wildly varying moral standards, over which people have very little de facto control. On the other, an interventionist policy, where AI morality is standardised, looks like an affront to the moral pluralism of liberal democracies.

The key difference is that, if the left is willing to entertain more deliberative forms of democracy, it may be able to make the interventionist approach viable. If this seems like a fair trade-off, then we should start calling for the application of deliberative mechanisms to these questions now. By the time ubiquitous AI arrives, we may just have the democratic tools to decide what to do with it.

Harry Farmer

Harry Farmer is convener of the Fabian policy group ‘Fabian Futures’. If you would like to find out more about the group, please contact Harry Farmer at hgfarmer@mykolab.com.

