
Should we nationalise Google?

Labour should legislate to prevent inequality emerging in access to online privacy.

Opinion

Since 2015, when Tim Cook accused his Silicon Valley rivals of “gobbling up everything they can learn about you and trying to monetise it”, Apple has been eager to turn its privacy credentials into a competitive advantage.

As well as being clever marketing, this strategy is testament to an increasingly important difference between Apple’s business model and those of its younger competitors. Unlike Google and Facebook, which are essentially advertising and market research agencies masquerading as tech companies, Apple still makes its money by selling things – and therefore doesn’t need to monetise its users’ data.

With companies like Apple ever more vocal about this difference in approach, consumers are being presented with an increasingly explicit choice between paying for tech with money or with their personal information. As this trade-off becomes harder to ignore, so too does the need to get clear about the consequences of selling our personal data – and the question of whether the benefits of digital privacy outweigh the money to be saved by using cheaper, less secure platforms.

If we decide that privacy is something worth holding on to, we also need to consider the worrying implications of it becoming something for which you have to pay more. While the ability to retain a degree of control over our personal data by buying a nicer phone might sound comforting to those with the money, a two-tier privacy system would have disastrous consequences for equality and for democratic politics.

More practically, it’s worth asking whether our current regulatory regimes are capable of preventing such an outcome – or at least limiting the abuses it makes possible. If they’re not, we desperately need to figure out what needs to change – and whether our expectation that online services be free can healthily coexist with the free market model of the internet.

The value of data privacy

Perhaps because it seems so obvious to those who care about it, the value of privacy isn’t often defended by privacy advocates, who concentrate instead on highlighting specific privacy abuses. This is a shame, because as long as our arguments for privacy remain tacit, it’s easy to suspect that our worries about its erosion are cultural hang-ups, peculiar to people who grew up in individualistic western societies at a particular point in history. If that were the case, privacy defenders could rightly be accused of simply needing to get with the times.

But far from being a cultural idiosyncrasy, privacy is essential to numerous things most people care about deeply. At its most basic, people think and act differently when they are being watched, and when their thoughts and actions are recorded. As social creatures, we are hardwired to be sensitive to the demands, expectations and judgements of others, and to sanction those who fail to conform. Privacy is therefore a vital counterbalance to this coercive side of human nature, providing us with the mental and physical space required for moral autonomy – allowing us to decide for ourselves what is right and wrong and what we want from life, and to deviate from received thinking. Without privacy, society is destined to be anodyne, conformist and resistant to new ideas.

It takes some work to bring this general argument to bear on the question of modern threats to data privacy. While it’s fairly easy to see how your behaviour might be coloured or constrained by the constant scrutiny and judgement of your peers (that is, by total data transparency), it’s harder to see why modern government or corporate access to your data should have the same effect. The government, we assume, only cares if we are committing serious crimes or acts of terrorism, and corporations only care about how to market to us better. Because of this, this kind of surveillance needn’t deter us from doing embarrassing things, having unpopular opinions or cheating on our spouses. Neither the government nor corporations will do anything about it and neither will tell anyone else.

One problem with this line of thinking is that, while a government’s use of personal data may be restricted to catching terrorists (for example), the kinds of data they take into account can be very broad. Security services don’t simply look for smoking guns. Instead, they use data mining to identify people with profiles that make them statistically more likely to commit acts of terrorism. As falling into a high-risk category can have a material impact on your life, it’s easy to see how a young Muslim man, for instance, might feel the need to refrain from perfectly legitimate behaviour in order to make his data profile more innocuous. This chilling effect on some people’s behaviour is made worse by the fact that it’s impossible to know what makes people high risk. (If you don’t know the rules of the game, you have to be very conservative to ensure you are on the right side of them.)

A second problem is that governments and corporations might not always have such a narrow interest in people’s personal data, and might not always be so willing to keep this information to themselves. Digital surveillance gives governments the power to identify dissenters and to easily leak compromising information about political opponents. It bears repeating that, while many governments currently have no interest in using this power, there is no guarantee that they won’t in the future.

Corporations’ interest in our data comes across as more prosaic, but is fundamentally no less worrying. As data mining techniques become more advanced, relatively mundane information can be used to reveal increasingly intimate facts about people. With enough different data sources, reasonably accurate inferences can be made about people’s intelligence, employability, mental health, sexual orientation and whether or not they are pregnant. For obvious reasons, this kind of information will be of interest to employers, banks and insurers. For equally obvious reasons, those holding the data will be happy to sell it to them.

Just as with the data mining techniques that determine whether people are likely terrorists, companies’ use of this data stands to condemn people to Kafkaesque scenarios in which life-changing decisions are made about them for reasons which remain opaque, and about which they can do nothing. The sense of powerlessness would be profound. China’s proposed ‘social credit’ scheme, in which data will be used to rank citizens’ ‘trustworthiness’, provides a flavour of what might emerge organically (as the result of market pressures) in less authoritarian settings.

The dangers of privacy inequality

As we produce more and more data, privacy is all the more crucial in preventing it from being used against us – enabling us to live our lives free from fear of being singled out, unfairly treated or blackmailed on the basis of our data footprint. It is privacy’s value as a shield against these kinds of abuses that would make high levels of privacy inequality so dangerous.

The coexistence of money-based and data-based methods of payment is a recipe for privacy inequality. Because products and services that harvest and monetise your data will always be cheaper than their secure equivalents, poor people will inevitably enjoy worse data protection than the wealthy. Unlike the rich, who won’t balk at paying more for more secure products, few poor people will be able to justify the expense of privacy. Crucially, it won’t matter how uneasy poor people are about giving up their data, or how cheap secure products and services become. Poor people always struggle to justify not choosing the cheapest option, even if it is unpalatable and even if the alternatives are affordable in absolute terms.

There are a few interrelated reasons why privacy inequality stands to exacerbate material inequality. One is that politicians and activists from poor backgrounds will find themselves far more vulnerable to identification, monitoring and character assassination than their rich counterparts, putting them at a significant disadvantage. The political system (even more so than today) will become the preserve of the privileged, who will have had their data protected from an early age. As the political system is captured ever more completely by elites, politics will increasingly reflect their interests, with governments withdrawing further and further from the regulation of markets.

Another, more direct reason is that a lack of data privacy leaves you more vulnerable to economic discrimination. While those whose data is protected are in a position to present the best version of themselves to potential employers, creditors and associates, those who have sold their data are much less able to determine how they come across. Take two equally qualified job candidates, both of whom are prone to bouts of depression. The first candidate, whose cheap phone sells his sleep pattern data to third parties, can’t keep this fact a secret from his potential employers. The second candidate, whose nicer phone doesn’t out him to the company, will have a much better chance of landing the role.

Crucially, if individual poor people do decide to protect their privacy in such a world, they will arouse far more suspicion than their wealthier counterparts. As the cost to a rich person is minimal, his willingness to pay for privacy protection doesn’t imply that he might have anything in particular to hide. By contrast, people might well wonder about a poor person’s reasons for parting with such a large proportion of his wealth.

What can be done?

Perhaps the most obvious solution to the rise of privacy inequality is tighter regulation, both of the data that can be collected about individuals and of the things for which it can be used.

The EU’s General Data Protection Regulation (GDPR) won’t do the trick. Because the GDPR is aimed at preventing the collection and use of citizens’ data without their explicit consent, it doesn’t address the fact that poor people may agree to give up their data because the alternatives are more expensive. Moreover, once that data has been collected, the GDPR does too little to stop it being used to make automated, hard-to-contest decisions about people – allowing such data to be used where necessary for entering into a contract, and providing few clear rights to contest such automated decisions.

One worry about pushing for stronger regulation, whether to prevent the voluntary collection of personal data or to prevent its use in significant decisions about people, is the risk of destroying the economic model of the internet – based as it is on data-driven advertising. If we want to stop the internet vanishing behind paywalls, the argument runs, we need to be very careful about regulation that could harm revenue.

This worry is both overwrought and bound up in unhelpfully rigid thinking about how the internet needs to work.

For a start, a ban on personal data collection would only adversely affect companies like Google and Facebook, which rely on user data for their edge in the advertising market. Original content providers, by contrast, would have little to lose. As Klint Finley argues, the GDPR is likely to turn most publishers away from data-based advertising and towards more conventional forms anyway. Though these forms of advertising generate less revenue overall, fewer middlemen will mean that more of it goes to publishers themselves.

And while the loss of dozens of free Google services financed by data monetisation would be a shame, it’s not an unavoidable outcome of tighter regulation. From the outset, Silicon Valley has been hamstrung by its determination to reconcile the idea that online content and services should be free with its staunchly capitalist outlook. As it becomes clear that the only way to square these commitments is through a Faustian data pact, many are starting to question this orthodoxy, suggesting we give up on the commitment to free content and services. In some areas, this is probably not a bad idea. But in others, where free access has been transformative, we could just as easily abandon the commitment to a capitalist model, and think about funding the most democratising features of the internet through taxation.

In addition to a ban on data monetisation, the European left could push for an EU-funded service providing data-secure web tools, free for everyone in the world, regardless of citizenship. Though hugely ambitious, it would be a spectacular exercise of soft power – a BBC for the decentralised, digital age.

More immediately, though, left-wing politicians need to shout far louder about the problems posed by privacy inequality, and the fact that these problems don’t go away when companies can only harvest people’s data with their permission. This is fertile territory for socialists who, unlike their opponents, aren’t ideologically barred from posing alternatives to the internet’s dysfunctional economic model. But it is also urgent. The worse privacy inequality grows, the more entrenched the status quo will become.

Harry Farmer

Harry Farmer is convener of the Fabian policy group ‘Fabian Futures’. If you would like to find out more about the group, please contact Harry Farmer at hgfarmer@mykolab.com.

