There is a heated debate going on about Facebook and privacy since the revelations about Cambridge Analytica surfaced. The reaction is a cry for more privacy regulation. The European approach of the General Data Protection Regulation (GDPR), which comes into effect in late May this year, is seen by many as a role model for much-needed privacy regulation in the US.
But they are wrong. I feel that there are a lot of misconceptions about the effectiveness of data protection in general. This is not surprising, since there are few comparable rules in the US, and so the debate is based more on projections than on actual experience.
I want to add the perspective of someone who has lived long enough within a strict privacy regime in Germany to know the pitfalls of this approach. From this angle I want to examine the Cambridge Analytica case and ask how effective EU-style privacy regulation would have been at preventing it. Jürgen Geuter has already published a detailed critique of the GDPR that is well worth reading, but my angle will be more conceptual and theory-driven.
I will apply the theory of ‘Kontrollverlust’ to this case to come to a deeper understanding of the underlying problems of data control. You can read a much more detailed examination of the theory in my book ‘Digital Tailspin – Ten Rules for the Internet after Snowden’ from 2014.
In short: the notion of Kontrollverlust is basically the idea that we have already lost control over our data, and every strategy should acknowledge that from the outset. There are three distinct drivers that fuel this loss of control, and they are all closely entangled with the advancement of digital technology.
The first driver of Kontrollverlust reads:
„Every last corner of the world is being equipped with sensors. Surveillance cameras, mobile phones, sensors in vehicles, smart meters, and the upcoming ‘Internet of Things’ – tiny computers are sitting in all these objects, datafying the world around us. We can no longer control which information is recorded about us and where.“
This certainly holds true, and you can watch an instance of this ever-unraveling enlightenment in the outrage about the related issue of how the Facebook Android app has been gathering your cellphone data. But it is the remaining two drivers of Kontrollverlust that are at the heart of the Facebook scandal.
1. The „Data Breach“
The second driver of Kontrollverlust is:
„A computer will make copies of all the data it operates with, and so the internet is basically a huge assemblage of copying machines. In the digital world, practically everything we come into contact with is a copy. This huge copying apparatus is growing more powerful every year, and will keep on replicating more and more data everywhere. We can no longer control where our data travels.“
Regardless of whether you like to call the events around Cambridge Analytica a „data breach“ or not, we can agree that data has fallen into the wrong hands. Dr Alexandr Kogan, the scientist who first gathered the data with his Facebook app, sold it to Cambridge Analytica. While this was certainly a breach of his agreement with Facebook, I’m not entirely sure whether it was also a breach of the law at that time. I’ve come to understand that the British data protection authority is already investigating the case, so I guess we will find out at some point.
However, what becomes obvious is that regardless of which kind of privacy regulation had been in effect, it wouldn’t have prevented this from happening. The criminal intent with which all parties were acting suggests that they would have done it one way or another.
Furthermore, Christopher Wylie – the main whistleblower in this case – revealed that an ever-growing circle of people also got their hands on this data, including himself and even black-market sites on the internet.
The second driver of Kontrollverlust suggests that we already live in a world where copying even huge amounts of data has become so convenient and easy that it is almost impossible to control the flow of information. Regardless of the privacy regulation in place, we should consider our data to be out there already, in the hands of anybody with an interest in it.
Sure, you may trust big corporations to try to prevent this from happening, since their reputation is on the line and, with the GDPR, there may also be huge fines to pay. But even if they try very hard, there will always be a hack, a leak, or simply the need for third parties to access the data, and thus the necessity to trust them as well. ‘Breach’ won’t be an event anymore but the default setting of the internet.
This certainly doesn’t mean that corporations should stop trying to protect your data because it’s hopeless anyway, and it is also not an argument against holding these companies accountable by means of the GDPR – please do! Let’s prevent every data breach we can. But you still shouldn’t consider your data safe, regardless of what the law or the corporations tell you.
2. The Profiling
But much more essential in this case is what I call the third driver of Kontrollverlust:
„Some say that these huge quantities of data spinning around have become far too vast for anyone to evaluate any more. That is not true. Thanks to Big Data and machine learning algorithms, even the most mundane data can be turned into useful information. In this way, conclusions can be drawn from data that we never would have guessed it contained. We can no longer anticipate how our data is interpreted.“
There is also a debate about how realistic the allegations concerning Cambridge Analytica’s methods are and how effective this kind of approach would be (I consider myself on the rather sceptical side of this debate). But for this article, and for the sake of argument, let’s assume that CA has indeed been able to deliver its magical big-data psycho-weapon and that it was pivotal in both the Brexit referendum and the Trump election.
Summing up, the method works as follows: by having people take psychological tests via Mechanical Turk and also gaining access to their Facebook profiles, researchers can correlate Facebook likes with the psychological traits measured by the test. CA was allegedly using the OCEAN model (the Big Five personality traits). The results would presumably read something like this: if you like x, y and z, you are 75% likely to be open to new experiences and 67% likely to be agreeable.
In the next step, you produce advertising content that is psychologically optimized for some or all of the different traits in the model. For instance, they could have created one ad for people who are open but not neurotic, another for people who also score high on the extraversion scale, and so on.
In the last step, you isolate the likes that correlate with the psychological traits and use them to steer your ad campaign. Facebook lets you target people by their likes, so you can use its infrastructure to match each psychologically optimized ad to the people most likely to be susceptible to it.
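As a rough illustration, the three steps could be sketched like this. Everything here is invented: the likes, the scores and the 0.7 threshold are not from the actual case, and the real method would use far larger samples and proper statistical models rather than simple averages.

```python
# Toy sketch of the alleged three-step method. All likes, scores and the
# 0.7 threshold are invented for illustration.
from collections import defaultdict

# Step 1: survey respondents, each with their Facebook likes and an
# 'openness' score (0.0 - 1.0) from a psychological test.
respondents = [
    {"likes": {"indie_films", "travel_blog"}, "openness": 0.9},
    {"likes": {"indie_films", "cooking"},     "openness": 0.8},
    {"likes": {"nascar", "cooking"},          "openness": 0.3},
    {"likes": {"nascar", "travel_blog"},      "openness": 0.4},
]

# Correlate: for each like, average the trait score of the people sharing it.
totals = defaultdict(lambda: [0.0, 0])
for r in respondents:
    for like in r["likes"]:
        totals[like][0] += r["openness"]
        totals[like][1] += 1
avg_openness = {like: s / n for like, (s, n) in totals.items()}

# Steps 2-3: likes whose average score clears the threshold become the
# targeting criteria for the ad variant optimized for high openness.
target_likes = {like for like, score in avg_openness.items() if score >= 0.7}
print(sorted(target_likes))  # → ['indie_films']
```

The point of the sketch is only the shape of the pipeline: a small survey sample yields like-trait correlations, which are then projected onto the whole user base through the ad platform’s targeting options.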
(Again: I’m deeply sceptical about the feasibility of such an approach and I even doubt that it came into play at all. For some compelling arguments against this possibility read this and this and this article. But I will continue to assume its effectiveness for the duration of this article.)
You think the GDPR would prevent such profiling from happening? Think again. Since Cambridge Analytica only needs the correlation between likes and traits, it could have completely anonymized the data and still been fine with the GDPR. They could afford to lose every bit of identifiable information in the data and still extract the correlation at hand, without any loss of quality. Identity doesn’t matter for these procedures, and this is the Achilles’ heel of the whole data protection approach: it only applies where the individual is concerned. (We’ll discuss this in detail in a minute.) And since you already agreed to the Facebook TOS, which allows Facebook to use your data to target ads at you, the GDPR – relying heavily on ‘informed consent’ – wouldn’t prohibit targeting you based on this knowledge.
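A minimal sketch of why anonymization doesn’t help here: the statistic of interest is computed from (like, trait) pairs alone, so stripping every identifier changes nothing. The records are invented.

```python
# Sketch: the correlation survives full anonymization. Records are invented.
records = [
    {"user_id": "u1", "likes_indie_films": 1, "openness": 0.9},
    {"user_id": "u2", "likes_indie_films": 1, "openness": 0.8},
    {"user_id": "u3", "likes_indie_films": 0, "openness": 0.3},
]

def mean_openness_of_likers(rows):
    likers = [r["openness"] for r in rows if r["likes_indie_films"]]
    return sum(likers) / len(likers)

# 'Anonymize' by dropping the identifying column entirely.
anonymized = [{k: v for k, v in r.items() if k != "user_id"} for r in records]

# The statistic of interest is identical with or without the identifiers.
assert mean_openness_of_likers(records) == mean_openness_of_likers(anonymized)
```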
So let’s imagine a data protection law that addresses the dangers of such psychological profiling.
First we need to ask what we have learned from this case with respect to data regulation. We learned that likes are a dangerous thing, because they can reveal our psychological makeup and, by doing that, also our vulnerabilities.
So, an effective privacy regulation should keep Facebook and other entities from gathering data about the things we like, right?
Wrong. Although different kinds of data certainly differ in how strongly they correlate with particular statements about a person, we need to acknowledge that likes are nothing special at all. They are more or less arbitrary signals about a person, and there are thousands of other signals you could match against OCEAN or similar profiling models: login times, the number of tweets per day, browser and screen size, the way someone reacts to other people, or all of the above. You could even take a body of text from a person, match their word usage against a model, and chances are you would get usable results.
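To illustrate with the text example: even a toy, entirely made-up word lexicon is enough to turn word usage into a trait score. A real model would be learned from data, but the principle is the same for any signal.

```python
# Toy illustration: a completely made-up lexicon scores text against an
# 'extraversion' trait. The same pattern applies to login times, tweet
# frequency, screen size, or any other seemingly harmless signal.
extraversion_lexicon = {"party": 1.0, "friends": 0.8, "alone": -0.9, "quiet": -0.7}

def extraversion_score(text):
    words = text.lower().split()
    hits = [extraversion_lexicon[w] for w in words if w in extraversion_lexicon]
    return sum(hits) / len(hits) if hits else 0.0

print(extraversion_score("great party with friends"))  # clearly positive
print(extraversion_score("a quiet evening alone"))     # clearly negative
```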
The third driver of Kontrollverlust basically says that you cannot consider any information about you innocent, because a new statistical model, a new data source to correlate your data with, or a new algorithmic approach can always appear and turn seemingly harmless data into a revelation machine. This is what Cambridge Analytica allegedly did, and it will continue to happen in the future, since all these methods of data analysis will continue to evolve.
This means that there is no such thing as harmless information. Thus, any privacy regulation that reflects this danger would have to prevent every seemingly arbitrary bit of information about you from being accessible to anyone. Public information – including public speech – would have to be considered dangerous. And indeed the GDPR is trying to do just that. This has the potential to become a threat to the public, to democracy and to the freedom of the individual.
Privacy and Freedom in the Age of Kontrollverlust
When you look back at the origins of (German) data protection law, you will find that the people involved were concerned about the freedom of the individual being threatened by the government. Since the state holds the monopoly on force – e.g. through police and jails – it is understandable that there should be limits on its ability to gather knowledge about citizens and non-citizens. „Informational self-determination“ was recognized as a basic civil right by the Federal Constitutional Court back in 1983. The judges wanted to enable the individual to secure a sphere of personal privacy from the government’s gaze. Data protection was really protection of the individual against the government, and as such it has proven to be somewhat effective.
The irony is that data protection was supposed to increase individual freedom. But a society where every bit of information is considered harmful wouldn’t be free at all. This is also true on the individual level: living in constant fear of how your personal data may fall into someone’s hands is the opposite of freedom.
I do know people – especially within the data protectionist scene – who promote this view and even live that lifestyle. They spend their time hiding from the public and using the internet in an antiseptic manner. They avoid most services, using only some fringe and encrypted ones, they never post anything private anywhere, and they constantly go after people who might reveal anything about them on the internet. They are not dissidents, but they have chosen to live like dissidents. They would happily sacrifice every inch of the public sphere to reach the point of total privacy.
But the truth is: with or without the GDPR, those of us who won’t devote their lives to that kind of self-restricting lifestyle have already lost control of their data, and even those who do will make mistakes at some point and reveal themselves. It is a very fragile strategy.
The attempt to regain control won’t increase our liberties; it is already doing the opposite. This is one of the central insights that brought me to advocate against the idea of privacy for privacy’s sake, which is still the basis of every data protection law, including the GDPR.
The other insight is that privacy regulation doesn’t solve many of the problems we currently deal with, but makes it much harder to tackle them properly. This, however, needs a different explanation.
The Dividualistic Approach to Social Control
I’m not saying that we do not need regulation. I do think there are harmful ways to use profiling and targeting practices to manipulate significant chunks of the population, and we need regulation to address them. But data protection is not a sufficient remedy for the problem at hand, because it was conceived for a completely different purpose – remember: the nation state with its monopoly on force.
In 1990, Gilles Deleuze made the point that alongside the disciplinary regimes we have known since the eighteenth century – the state and its institutions – a new approach to social control has been emerging, which he called the “societies of control”. I won’t go into the details here, but you can pretty much apply the concept to Facebook and other advertising infrastructures. The main difference between disciplinary regimes like, say, the nation state and regimes of control like, say, Facebook is the role of the individual.
The state always addresses the individual, mostly as a citizen who has to play by the rules. As soon as the citizen oversteps them, the state uses force to discipline him back into being a good citizen. This concept extends down to all the state’s institutions: the school disciplines the student, the barracks the soldier, the prison the prisoner. The relation is always institution versus individual, and it is always a disciplinary relation.
The first difference is that Facebook doesn’t have a monopoly on force. It doesn’t even use force. It doesn’t need to.
Because, second, it doesn’t want to discipline anyone. (Although you can argue that enforcing community standards requires some form of disciplinary regime, it is not Facebook’s primary objective.) The main objective Facebook is really striving for has …
… third, nothing to do with the individual at all. What Facebook cares about is statistics. The goal is, for instance, to drive the conversion rate of an advertising campaign from 1.2% to 1.3%.
Getting this difference wrong is one of the major misconceptions of our time. We are used to thinking of ourselves as individuals, but that is increasingly not how the world looks back at us. Instead of the individual (the un-dividable), it sees the dividual (the dividable): our economic, socio-demographic and biological characteristics, our interests, our behaviors and, yes, at some point probably our OCEAN rating are what count for these institutions of control. We may think of these characteristics as part of our individual selves, but they are anything but unique. And Facebook cares about them precisely because they are not unique, so it can put us into a target group and address that group instead of us personally.
For instance: Facebook doesn’t care whether an ad really matches your interests or your taste as a person. It is not you that Facebook cares about, but people like you. It is not you, but people like you, who are now 0.1% more likely to click on the ad, and that makes all the difference, and thus all the millions, for Facebook.
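The arithmetic behind that sentence, with invented campaign figures, shows why a lift that no individual would ever notice is enormous in aggregate:

```python
# Invented campaign figures; only the arithmetic matters.
impressions = 50_000_000        # ad impressions in a campaign
value_per_click = 0.50          # advertiser value per click, in dollars

clicks_before = impressions * 0.012   # 1.2% conversion rate
clicks_after  = impressions * 0.013   # 1.3% after better targeting

extra_clicks = clicks_after - clicks_before
extra_value = extra_clicks * value_per_click
print(f"{extra_clicks:,.0f} extra clicks, worth ${extra_value:,.0f}")
# → 50,000 extra clicks, worth $25,000
```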
People who are afraid of targeted advertising because they think of it as exceptionally manipulative, as well as people who laugh off targeted ads as a poor approach because the ad they saw the other day didn’t match their interests – both get this new form of social control wrong. They get it wrong because they can’t help thinking individually instead of dividually.
And this is why the data protection approach of giving you individual rights doesn’t provide the means to regulate a dividualistic social control regime. It’s just a mismatch of tools.
Although the argument provided here may seem quite complicated, the solution doesn’t need to be. In terms of policy, I propose a much more straightforward approach to regulation: we need to identify the dangers and harmful practices of targeted advertising and find rules that address them specifically.
- For starters, we need more transparency around political advertising. We need to know which political ads are out there, who is paying for them, how much has been paid, and how these ads are targeted. This information has to be accessible to everyone.
- Another angle would be to regulate targeting based on psychological traits. Psychologically informed ads aren’t necessarily harmful, but it is also not difficult to imagine harmful applications, like finding psychologically vulnerable people and exploiting their vulnerabilities to sell them things they neither need nor can afford. There are already examples of this. It won’t be easy to prohibit such practices, but in the long run it will be a more effective approach than trying to hide these vulnerabilities from potential perpetrators.
- There is also a need to break the power of monopolistic data regimes like Facebook, Google and Amazon. But contrary to public opinion, their power is not a function of their ability to gather and process data, but of their unique position to do so. It is because they monopolized the data and can exclude everybody else from using it that they seem invincible. Ironically, it was one of Mark Zuckerberg’s few attempts to open up his data silo – giving developers access through an API – that caused the Cambridge Analytica trouble in the first place. Not just ironically but also unfortunately, because there is already a crackdown going on against open APIs, and that is a bad thing. Open APIs are exactly what we need the data monopolists to implement. We need them to open up their silos to more and more people – scientists, developers, third-party service providers, etc. – in order to tackle their power by ending the exclusivity of their data usage.
- On a broader level, we need to set up society so that it is less harmful for personal data to be out there. I know this is far-reaching, but here are some examples: instead of hiding genetic traits from your health insurance provider, we need a health care system that doesn’t punish you for having them. Instead of trying to hide some of your characteristics from your employer, we need to make sure everybody has a basic income and does not face an existential threat when information about them comes out. We need many more policies like this to cushion society against the revealing nature of our digital media.
“Privacy” in the sense of „informational self-determination“ is not only a lost cause to begin with; it doesn’t even help regulate dividualistic forces like Facebook. Every effective policy should take the Kontrollverlust into account, that is, assume the data is already out there and being used in ways beyond our imagination. Instead of trying to capture and lock up that data, we need ways to lessen the harm it could cause.
Or as Deleuze puts it in his text about societies of control: “There is no need to fear or hope, but only to look for new weapons.”