A Concept in Flux

Privacy – as time goes on, this word pops up in our collective vocabulary at ever-closer intervals. We're inundated with "privacy": in the news media, on billboards, in the rhetoric of tech companies. But with its increased use comes a flurry of confusion. It's time to take a step back and question: What exactly are the political leaders, journalists, and companies who speak about privacy referring to? And why does this matter?

In 1975, the legal philosopher Judith Jarvis Thomson observed that “the most striking thing about the right to privacy is that nobody seems to have any very clear idea what it is.” Though the world looks very different today, this statement remains true. Over the past several decades, the expansion of national security programs and the widespread adoption of new technologies have reshaped societal notions of privacy, setting the scene for the global structures of surveillance that have become embedded in the Internet's infrastructure. In this climate, it is crucial that we think and speak clearly about privacy, because the way we think about it today will inform how it is enshrined in our legislation, policies, textbooks, and technologies tomorrow.

Past Perceptions

A brief look back at the history of privacy in the Global North reveals the term’s slow evolution over the centuries. In the time of the ancient Greeks, Aristotle’s differentiation between the spheres of polis (public/political) and oikos (private/domestic) set the frame of reference for what might be considered the first strand of privacy: privacy of the body and the home. The concept was an integral guiding force in architecture – for the wealthy, at least – throughout the Middle Ages, and was galvanized during the Renaissance, when separate homes for families had become commonplace around Europe. 

The second strand of privacy – information privacy or data privacy – involves personal information, including correspondence, medical records, and financial information. This form of privacy began to develop much later on. The United States passed the Postal Service Act in 1792, which prohibited postal workers from opening people’s mail, but it wasn’t until the Gilded Age that privacy was acknowledged as a right. In 1890, the lawyers Samuel Warren and Louis Brandeis (the latter a future Supreme Court Justice) published their benchmark article The Right to Privacy in the Harvard Law Review, arguing that, although privacy was not explicitly enshrined in the Constitution, the “right to be let alone” was inherent in common law. The societal push for privacy that ensued was a response to the rise of an emerging technology (the camera, at the time) – a phenomenon that would occur over and over again in the following decades.

It wasn’t until after World War II that privacy came to be established as an international human right. The United Nations General Assembly adopted Article 12 of the Universal Declaration of Human Rights (UDHR) in 1948:

No one shall be subjected to arbitrary interference with [their] privacy, family, home or correspondence, nor to attacks upon [their] honor and reputation. Everyone has the right to the protection of the law against such interference or attacks.

Two years later, the European Convention on Human Rights (ECHR) was signed, including Article 8:

Everyone has the right to respect for [their] private and family life, [their] home and [their] correspondence.

However, the ECHR right to privacy is subject to certain restrictions “in accordance with the law” and “necessary in a democratic society”. Broad interpretation of these qualifiers by European governments – and of analogous exceptions by the U.S. government in particular – has gone a long way toward weakening privacy as a right and a civil liberty.

A key moment in the evolution of privacy came in 1967, when Alan Westin, Professor of Public Law and Government at Columbia University, posited the concept as a matter of access and control. In his seminal work, Privacy and Freedom, Westin framed privacy as "the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others." This definition set the benchmark for a common understanding of informational privacy – one which propelled Westin to the forefront of the then-emerging field of privacy law. Over the following decades, however, the notion of privacy-as-control has come to be distorted by two powerful societal tides: the national security apparatus and the corporate imperatives of tech companies.

Current Context

In 1985, the sociologist Gary T. Marx wrote that “in a society where everyone feels as if [they are] a target for investigation, trust... is damaged. Indeed, today’s surveillance technologies may be creating a climate of suspicion from which there is no escape.” In the decades since, the metaphorical escape routes in our global information society have become blocked. The early 2000s marked a turning point for national security’s vendetta against privacy and the tech industry’s pioneering of the surveillance economy – forces that have shaken the common understanding of privacy to its core.

A pivotal moment in our global reckoning with privacy’s dilution came in 2013, when Edward Snowden revealed the extent of the NSA’s global dragnet surveillance operations, including the PRISM program, many aspects of which were fortified in response to the terrorist attacks of September 11, 2001. In the government’s balancing act of anti-terrorism provisions versus civil liberties, enhanced national security measures severely degraded citizens’ right to privacy. This was evidenced in Congress’s passage of the USA Patriot Act (2001), which greatly expanded the government’s domestic spying powers. See The Privacy Issue's guide to U.S. government surveillance for more.

Another blow to U.S. citizens’ privacy followed in 2008 with the passage of the Foreign Intelligence Surveillance Act (FISA) Amendments Act, which effectively enabled warrantless dragnet monitoring of Americans’ international calls and emails. In justifying these measures under the banner of national security – positioned as a means of protecting freedom – the U.S. government made privacy the enemy of freedom. Unfortunately, the U.S. is not the only Western regime to do so. For a richer understanding of dragnet surveillance and how to avoid it, read The Privacy Issue's primer on the subject.

Joining governments in surveillance efforts – albeit for different purposes – is the global tech industry, led by Silicon Valley. Back in 1890, it was new advancements in photographic technology that prompted Brandeis and Warren to stir up public awareness of privacy. Over a century later, emerging technology is still the driving force bringing the privacy debate back into the public arena. This time, it's exponentially increasing computing power, paired with the proliferation of data, that has triggered the threat to privacy. Roger McNamee, tech investor and former Facebook advisor, explains the turning point: “Up until around 2000, the technology industry never had enough processing power, memory, storage or network bandwidth to build products that could be deeply integrated with our lives.”

However, rather than the technology itself, it’s the way in which this processing power has been co-opted by tech companies to “unilaterally claim human experience as free raw material for translation into behavioral data” that is the issue, explains Harvard Emeritus Professor Shoshana Zuboff. This phenomenon, which she terms "surveillance capitalism", originated in 2002, when Google engineers developed a sophisticated new system for highly targeted advertising.

Under this system, every crumb of data we generate – not just as we browse online, but as we move through our cities and retreat into our own homes – is picked up by myriad forms of trackers, from cookies to GPS and sensors in IoT devices. It is then combined and sold on to advertisers in the market for behavioral advertising. Thanks to the vast amounts of data they hold on internet users, the sellers of online advertising space – notably Google and Facebook – enable advertisers to reach highly specific audiences, categorized not only by demographics (gender, age, location) but also by character traits. The underlying idea is that the more data gathered about people, the easier it will be to predict – and therefore influence – what they will do next. Thus, every mundane detail of our lives – in other words, the fabric of our personal, private experience – has become the most lucrative commodity of the Information Age, without our knowledge, consent, or ability to opt out. Browse The Privacy Issue's detailed guide to the adtech industry for more on this shift in big business.

Big Tech surveillance is the backdrop for a series of key privacy developments that unfolded in 2018. Early in the year, the Cambridge Analytica scandal broke, revealing that the British datamining firm had harvested the data of millions of Facebook users for use in political advertising – which whistleblower Christopher Wylie claimed swayed both the Brexit referendum and the election of Donald Trump in 2016. In May, the General Data Protection Regulation (GDPR) came into effect in Europe, and one month later, the California Consumer Privacy Act (CCPA) was signed into law. Both of these laws were designed to give internet users more control over how their personal data is collected, stored, and used, and both are important steps forward for legal privacy protections – yet studies show that one year on, the way we think about, and care for, our privacy hasn’t changed much – and the structures of the surveillance economy certainly haven’t either.

The problem, according to the civil rights association European Digital Rights (EDRi), is that long before these laws were passed, the evolution of the tech industry – its products, services, and business model – had damaged the ability of users to “meaningfully control their personal data by means of informed choices.” From unclear, lengthy, and downright deceptive privacy policies to cookie notices claiming only to "optimize the user experience", the burden placed on individuals is far too high. In fact, explains EDRi, people have become “so overloaded by requests to consent to the use of their data that informed choice becomes illusory.” Yet, because using the Internet is essential to our everyday lives, we cannot truly opt out – a hindrance that undermines the notion of privacy-as-control.

So why does all this matter? Technological developments and data collection are only going to increase exponentially. Two decades after the birth of surveillance capitalism and the weakening of privacy through national security measures, we’re only just awakening to the new reality that has crept up around us. As we collectively figure out how to deal with it, we find ourselves at a critical tipping point, at which the values we prioritize today will inform the policies, laws, textbooks, and technologies of tomorrow and, ultimately, the kind of society we want to live in. Nothing less than the fundamental democratic pillars of individual autonomy and decision rights are at stake.

Future Frameworks

If privacy is to have a future in the Information Age, we must stop taking its meaning for granted and start being specific about what we want to protect. This will require a reframing of what we understand privacy to be – one that makes clear the many interconnected rights and responsibilities it embodies. 

One way is to start considering privacy beyond its value as an individual right. “The objective must be understood as maintaining individual and collective liberty, and the justification as not only supporting individual self-development and flourishing but also democratic forms of governance,” Deirdre Mulligan, Faculty Director of the Berkeley Center for Law & Technology, explained to us at The Privacy Issue. “This requires privacy to serve as a check on the accretion of power by the state and large corporations fueled by persistent and broad surveillance.” Perceiving privacy as a check on power imbues the concept itself with power, and reveals how important it is to the proper functioning of free and open societies.  

The journalist and software engineer Jon Evans builds on the idea of privacy as a collective liberty, writing in TechCrunch, “Privacy is like voting. An individual’s privacy, like an individual’s vote, is usually largely irrelevant to anyone but themselves... but the accumulation of individual privacy or lack thereof, like the accumulation of individual votes, is enormously consequential.” Evans regards the data collection that forms the basis of today’s society of surveillance as a “massive public security problem” – with three major implications. 

First, he argues, the absence of privacy has a chilling effect on individual thought and dissent. Second, if privacy is commodified – as data ownership advocates call for – it becomes a class issue, further entrenching the power imbalance between those with greater and lesser means. Third, the accumulation of personal data will continue to manipulate public opinion and behavior – as was the case in the 2016 U.S. presidential election. Understood in this way, the need for governmental intervention to update and reinforce privacy protections becomes undeniable. If we defend privacy as a commons, concludes Evans, “then we can’t start thinking of it as an individual asset to be sold to surveillance capitalists. It, and we, are more important than that.”

A further element in the reframing of privacy comes from Sarah Igo, Associate Professor of History at Vanderbilt University and author of The Known Citizen – A History of Modern Privacy in America. Igo told us at The Privacy Issue that privacy’s future rests on a renegotiation of the things we’re willing to give it up for – beginning with convenience:

Many of the privacy invasions we experience now, from social profiling to commercial tracking, made their appeal to consumers under the banner of convenience. This was true from the days of the first credit bureaus in the nineteenth century. Personal information exchanged for ease and efficiency seemed a reasonable, even necessary bargain, in individual instances. But what wasn’t clear initially were the accumulated effects of this bargain across multiple domains of society, and under technological conditions that continually eroded the barriers among the data-holders. 

This shift made those profiting from the collection of personal data more and more powerful. It also made them less susceptible to reasonable legal regulation and control. Reversing the most worrisome incursions on privacy will require sustained political will and action. But it will also require us to examine very carefully how much we want to elevate the value of convenience over the value of what was once prized as “privacy” – a sense of discretion and control over one’s own biographical details, physical movements, intimate relationships, and personal desires.

These details, movements, relationships, and desires are incorporated in a concept that Maciej Cegłowski has termed "ambient privacy", which he defines as “the understanding that there is value in having our everyday interactions with one another remain outside the reach of monitoring, and that the small details of our daily lives should pass by unremembered. What we do at home, work... school, or in our leisure time does not belong in a permanent record.” Ambient privacy, according to Cegłowski, is not the property of individuals, but of the world around us. Yet because the law frames privacy as an individual right, it cannot respond adequately to the bigger issue at stake. Cegłowski warns of the dangers of inaction:

Ambient privacy plays an important role in civic life. When all discussion takes place under the eye of software, in a for-profit medium working to shape the participants’ behavior, it may not be possible to create the consensus and shared sense of reality that is a prerequisite for self-government. If that is true, then the move away from ambient privacy will be an irreversible change, because it will remove our ability to function as a democracy.

How can these different interpretations of privacy be reconciled, to bring constructive clarity to the conversations we have about its future? Perhaps the answer is revealed when we focus less on the word "privacy" itself, and more on the values we want it to protect. Do we mean privacy as freedom, civil liberty, autonomy, control? Privacy versus convenience, national security, surveillance? Questioning the specific meanings of the words we use when we talk about privacy – including the concepts above – will help individuals, governments, and companies to speak and act on the subject of privacy with tangible impact.

On this basis, The Privacy Issue makes the following recommendations:

1. Governments should lead the charge for a heightened understanding of the connections between privacy and democracy, both overt (e.g. how firms like Cambridge Analytica influenced the democratic process in the U.S. and UK in 2016, and might do so again in 2020) and more subtle (e.g. the effect that the erosion of privacy has on speech, dissent, protest, and other pillars of a free society). Increased resources must be invested in research and policy to strengthen the institutions and provisions of democratic government, developing adaptable frameworks for protecting the varied and evolving values we attribute to privacy.

2. Companies, too, must take responsibility for setting new industry standards that center on their customers’ needs as humans, not users. This can begin with a re-evaluation of company terms and conditions and privacy policies, both in terms of content (e.g. "Are the ways in which we collect data truly necessary? What are the less-intrusive alternatives?") and form (e.g. "Is this written in a transparent and straightforward way? Do people have enough information to give their informed consent, or revoke it easily?"). Tuning in to – and responding to – customer desires and priorities will be instrumental in future-proofing companies as privacy becomes an increasing priority for consumers.

3. Individuals should exercise their consumer power by reflecting on privacy and what it means for them, and by fighting for the protection of those values. We can spark conversations about privacy with friends, colleagues, and communities, engaging in conscious tech consumption – making informed choices about the products and services we bring into our lives. Those living in democracies can speak with local representatives and implore them to take action to enhance our collective rights to privacy. These seemingly small, everyday actions, driven by clarity of thought and intention, have the power to reimbue privacy’s myriad meanings with value – clarifying and cementing the role of privacy in the present and the future.