Question

1. What is the privacy paradox, and when was it first defined?

The term "privacy paradox" describes the contradiction between privacy-related attitudes and behavior. We know our privacy is being exploited, and we feel like we’re losing control of our data, yet statistics show we’re not changing our online behavior to claim it back. Instead, we keep feeding data-collection machines that undermine our privacy.

The first mention of the term "privacy paradox" is attributed to Hewlett-Packard employee Barry Brown (2001), who used it in regard to the use of supermarket loyalty cards by customers in spite of their concerns about privacy. In 2006, professor and author Susan B. Barnes studied the concept in relation to teenagers freely giving up personal information on social networks. In his keynote speech at the 40th ICDPPC conference in 2018, the late European Data Protection Supervisor Giovanni Buttarelli elaborated on the privacy paradox. Buttarelli framed the problem in the context of data collection and its rise as the core business model of the Internet, stating that the issue is “not that people have conflicting desires to hide and to expose. The paradox is that we have not yet learned how to navigate the new possibilities and vulnerabilities opened up by rapid digitization.”

As the potential harms of sharing information online become evident, public concern increases. However, our behavior isn't adjusting accordingly – we continue to candidly divulge personal information, click "accept" on cookie notices and terms of service, reuse identical passwords across multiple websites, and generally underestimate the worth of our data. Some of us value our entire browsing history at the price of a Big Mac. The proportion of U.S. Internet users who admit it would be "hard to give up" social media use increased by 12 percentage points between 2014 and 2018. And even after the Cambridge Analytica data scandal broke, only 8% of respondents in a study by the financial firm Raymond James said they would stop using Facebook, while 48% said they would not change their use.

Question

2. What are the main contributing factors to the privacy paradox?

As we navigate the Internet, we instinctively perform a cost-benefit analysis, weighing the risk of privacy loss against convenience gained. Fueled by emotion and intuition, we tend to downplay the privacy risks, and choose convenience.

It can be difficult to place a concrete value on privacy because we often don't realize its value until we’re tangibly and conspicuously harmed. If we haven’t been affected by the loss of privacy in the past – whether a hack, a phishing scam, or even creepy targeted advertising – the risks seem abstract and hard to quantify, almost as if it's not our problem.

For many of us, our browsing, login, and password habits developed before we were aware of what was happening to our personal information – and now the effort required to change our patterns of behavior can feel overwhelmingly difficult. It's easy to despair and feel apathetic, to say "my data's already out there, there's no point in doing anything now."

Our privacy cost-benefit analysis tips heavily toward convenience when it comes to privacy policies, which countless studies have deemed too burdensome, too long, and too complicated. These documents are, essentially, written in a style and length that deter people from reading them. We continue to opt for the fast and easy option of ticking and clicking away our privacy, perhaps especially when we're traveling or in a rush.

The thing is, many of us need – and want – to share information about ourselves online. Today, having a comprehensive Web identity and a flourishing network on social media are important forms of social currency. Increasingly, trust is gauged by a person's online presence – for getting hired, dating, engaging in activism, or participating in the sharing economy. These activities represent the boundless freedom and potential of the Internet: a technological wonder with the potential to bring people closer to each other while empowering us to explore the vast sea of knowledge at our fingertips. How did this all get so broken?

Question

3. Why is the privacy paradox a problem?

If we don’t care enough to change our online behavior, and if we don’t demand more of the tech companies that claim to be working for us, they will continue to track, gather and sell more and more information. This creates ideal conditions for exploitation and coercion – for our thoughts and behavior to be controlled.

The more information Big Tech companies have about us, the more accurately they can predict our browsing and buying habits, our political leanings and affiliations – every detail about our lives that should be ours, and ours alone. The better network actors can predict our behavior, the easier it becomes for them to manipulate it – as we continue to see in elections around the world.

Harvard Business School professor Shoshana Zuboff argues that the privacy paradox isn’t a paradox at all. In fact, it’s to be expected given the structural imperatives of surveillance capitalism. In her view, the inconsistencies between our beliefs and our behavior are the “predictable consequence of the pitched battle between supply and demand, expressed in the difference between what surveillance capitalism imposes upon us and what we really want.” Zuboff popularized the term surveillance capitalism to describe the “economic logic that has hijacked the digital for its own purposes.” This form of capitalism, she explains, claims “private human experience” – our browsing history, the photos we post, our GPS location, the conversations we have in our homes, etc. – as “free raw material” that is transformed into behavioral data, used to predict what we will do, think, choose, and buy next – and sold on to advertisers and other markets that trade in “human futures”.

These systems of surveillance, Zuboff explains, undermine democracy by “[evading] individual awareness, undermining human agency, eliminating decision rights, diminishing autonomy and depriving us of the right to combat.” Big Tech companies – such as Facebook, Google, Amazon and Microsoft – and the data brokers lurking in their shadows continue to accumulate vast amounts of knowledge, which they use for their own commercial gain. Though up until now democracy "has slept”, allowing surveillance capitalism to flourish with minimal impediment from the law, things are slowly beginning to change.

In 2019, we’re at a pivotal moment when lawmakers are beginning to wake up to the need for regulation. Yet, though we rely on our democratic institutions to legislate and develop policy that will protect us from the harms of surveillance capitalism, they’ve also been conducting dragnet surveillance of their own, in the name of national security, as Edward Snowden revealed in 2013. With the development of AI and biometric identification measures, government surveillance will only continue to grow in breadth and intensity in future. If the institutions we rely on to protect our rights are themselves perpetuating the same kinds of harms, who do we have left to turn to?

Question

4. Can the privacy paradox be resolved, and if so, how?

It will take a society-wide effort – beginning with legislation and government policies, as well as new tech industry standards – to overcome the privacy paradox.

The role of the law in restoring privacy to individuals is twofold. First, amplifying the rights and protections in place for individuals, making it easier for people to understand and claim their rights. Progress was made with the introduction of the EU’s GDPR and the CCPA in 2018, but federal legislation in the U.S., and national legislation in many countries around the world, is yet to follow. Second, overhauling how the tech industry is regulated. Zuboff calls for the interruption of “surveillance capitalism’s data supplies and revenue flows,” the goal being to outlaw the “secret theft of private experience”. Much of the debate on how to do this currently revolves around breaking up the monopolistic stranglehold that the likes of Google, Facebook and Amazon have on the tech sector, so as to rein in their power, raise accountability standards, and allow new players to offer alternatives consistent with the needs of humans and the functioning of democracy.

When it comes to what the tech industry can do, Harvard researcher Dan Svirsky argues that the starting point must be how our technological tools and digital products are created. “We need to approach building these products with user preferences in mind,” he told The New York Times. This means centering design on both human rights and human needs – for example, ensuring that a product or service’s settings are switched to private by default. Tech company policies must also be overhauled to include a wide range of privacy protections. Terms of service, privacy policies, cookie notices and other disclosures must be written in ways that are clear, easy to understand, and allow us to give consent that is truly informed – or alternatively, to revoke consent simply, and without adverse consequences. The setting of new industry standards within tech will require a cross-industry effort and close consultation with consumer rights and privacy advocacy groups which represent individuals’ needs.

Question

5. What steps can I take – and encourage others to take – to escape the privacy paradox?

We as individuals also need to reassess our relationship with the tech products and services we have privacy concerns about. Though this feels unfair, inconvenient and often overwhelming, there are plenty of straightforward measures we can take right away to reclaim control of our data and rebuild our digital habits.

While the burden for data protection should not be on each of us to manage for ourselves, we can’t rely on industry and government policies to solve the issues relating to data privacy anytime soon. So we’re going to have to take responsibility for our own conscious tech consumption. This will involve taking an honest look at our browsing habits, and figuring out what needs to change. The good news is that many of the biggest steps we can take to protect our privacy are pretty simple. Here are ten straightforward suggestions:

  1. Switch the default settings of your laptop, phone, and other devices to more privacy-respecting choices and dig into your app settings for similar options.
  2. Use a privacy-focused browser like Firefox or Brave.
  3. Search DuckDuckGo or another search engine that doesn’t spy on you.
  4. Download tracker-blocking browser extensions such as uBlock Origin or Privacy Badger.
  5. Get a VPN that aligns with your values. Here are IVPN’s values.
  6. Use a password manager such as KeePass, KeePassX, or Bitwarden to secure your accounts.
  7. Enable multifactor (or two-factor) authentication wherever possible.
  8. Find and close old accounts that you don’t use anymore.
  9. Delete your Facebook account. Don’t forget to download your data first.
  10. Stop using Google, the market leader in surveillance capitalism, and use alternatives instead.
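For the curious: the six-digit codes generated by the authenticator apps mentioned in step 7 follow an open standard, TOTP (RFC 6238), which derives a short-lived code from a shared secret and the current time. A minimal sketch in Python's standard library (the function name and parameters here are illustrative, not from any particular app):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, step=30, digits=6):
    """Derive a time-based one-time password (RFC 6238) from a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of time steps since the Unix epoch
    counter = int(t if t is not None else time.time()) // step
    # HMAC-SHA1 over the 8-byte big-endian counter (RFC 4226 / RFC 6238)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes at an offset given by the low nibble
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)
```

Because both your device and the service compute the same code independently, an attacker who steals your password alone still can't log in – which is why enabling this wherever possible is such a high-value step.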

Finally, keep up to date with the latest developments in policy and legislation, industry news, and advocacy efforts. Here at The Privacy Issue, we're dedicated to providing you with the tools you need to break free from the privacy paradox.