
What Is Privacy?


What is privacy?

Privacy is a fundamental right, essential to autonomy and the protection of human dignity, serving as the foundation upon which many other human rights are built.

Privacy enables us to create barriers and manage boundaries to protect ourselves from unwarranted interference in our lives, which allows us to negotiate who we are and how we want to interact with the world around us. Privacy helps us establish boundaries to limit who has access to our bodies, places and things, as well as our communications and our information.

The rules that protect privacy give us the ability to assert our rights in the face of significant power imbalances.

As a result, privacy is an essential way we seek to protect ourselves and society against arbitrary and unjustified use of power, by reducing what can be known about us and done to us, while protecting us from others who may wish to exert control.

Privacy is essential to who we are as human beings, and we make decisions about it every single day. It gives us a space to be ourselves without judgement, allows us to think freely without discrimination, and is an important element of giving us control over who knows what about us.

Why does it matter?

In modern society, the deliberation around privacy is a debate about modern freedoms.

As we consider how we establish and protect the boundaries around the individual, and the ability of the individual to have a say in what happens to him or her, we are equally trying to decide:

  • the ethics of modern life;
  • the rules governing the conduct of commerce; and,
  • the restraints we place upon the power of the state.

Technology has always been intertwined with this right. For instance, our capabilities to protect privacy are greater today than ever before, yet the capabilities that now exist for surveillance are without precedent.

We can now uniquely identify individuals amid mass data sets and streams, and equally make decisions about people based on broad swathes of data. It is now possible for companies and governments to monitor every conversation we conduct, each commercial transaction we undertake, and every location we visit. These capabilities may lead to negative effects on individuals, groups and even society, as they chill action, exclude, and discriminate. They also affect how we think about the relationships between the individual, markets, society, and the state.

If the institutions we rely upon can come to know us well enough to peer into our histories, observe all our actions, and predict our future actions, even greater power imbalances will emerge: individual autonomy in the face of companies, groups, and governments will effectively disappear, and any behaviour deemed aberrant can be identified, excluded, and even quashed.

Perhaps the most significant challenge to privacy is that the right can be compromised without the individual being aware. With other rights, you are aware of the interference -- being detained, censored, or restrained. With other rights, you are also aware of the transgressor -- the detaining official, the censor, or the police.

Increasingly, we aren’t being informed about the monitoring we are placed under, and aren’t equipped with the capabilities or given the opportunity to question these activities.

Secret surveillance, done sparingly in the past because of its invasiveness, lack of accountability, and particular risk to democratic life, is quickly becoming the default.

Privacy International envisions a world in which privacy is protected, respected and fulfilled. Increasingly institutions are subjecting people to surveillance, and excluding us from being involved in decisions about how our lives are interfered with, our information processed, our bodies scrutinised, our possessions searched.  We believe that in order for individuals to participate in the modern world, developments in laws and technologies must strengthen and not undermine the ability to freely enjoy this right.

Is privacy a right?

Privacy is a qualified, fundamental human right. The right to privacy is articulated in all of the major international and regional human rights instruments, including:

Universal Declaration of Human Rights (UDHR) 1948, Article 12: “No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.”

International Covenant on Civil and Political Rights (ICCPR) 1966, Article 17: “1. No one shall be subjected to arbitrary or unlawful interference with his privacy, family, home or correspondence, nor to unlawful attacks on his honour or reputation. 2. Everyone has the right to the protection of the law against such interference or attacks.”

The right to privacy is also included in:

  • Article 14 of the United Nations Convention on Migrant Workers;
  • Article 16 of the UN Convention on the Rights of the Child;
  • Article 10 of the African Charter on the Rights and Welfare of the Child;
  • Article 4 of the African Union Principles on Freedom of Expression (the right of access to information);
  • Article 11 of the American Convention on Human Rights;
  • Article 5 of the American Declaration of the Rights and Duties of Man;
  • Articles 16 and 21 of the Arab Charter on Human Rights;
  • Article 21 of the ASEAN Human Rights Declaration; and
  • Article 8 of the European Convention on Human Rights.

Over 130 countries have constitutional statements regarding the protection of privacy, in every region of the world.

An important element of the right to privacy is the right to protection of personal data. While the right to data protection can be inferred from the general right to privacy, some international and regional instruments also stipulate a more specific right to protection of personal data, including:

  • the OECD's Guidelines on the Protection of Privacy and Transborder Flows of Personal Data,
  • the Council of Europe Convention 108 for the Protection of Individuals with Regard to the Automatic Processing of Personal Data,
  • a number of European Union Directives and its pending Regulation, and the European Union Charter of Fundamental Rights,
  • the Asia-Pacific Economic Cooperation (APEC) Privacy Framework 2004, and
  • the Economic Community of West African States (ECOWAS) Supplementary Act on data protection (2010).

Over 100 countries now have some form of privacy and data protection law.

However, it is all too common that surveillance is implemented without regard to these protections. That's one of the reasons Privacy International exists: to make sure that powerful institutions such as governments and corporations don't abuse laws and loopholes to invade your privacy.



Your Technology Is Tracking You. Take These Steps For Better Online Privacy


Laurel Wamsley


Before I became a reporter at NPR, I worked for a few years at tech companies.

One of the companies was in the marketing technology business — the industry that's devoted in part to tracking people and merging their information, so they can be advertised to more effectively.

That tracking happens in multiple senses: physical tracking, because we carry our phones everywhere we go. And virtual tracking, of all the places we go online.

The more I understood how my information was being collected, shared and sold, the more I wanted to protect my privacy. But it's still hard to know which of my efforts is actually effective and which is a waste of time.

So I reached out to experts in digital security and privacy to find out what they do to protect their stuff – and what they recommend most to us regular folks.

Here's what they told me.


1. To protect your accounts, practice good security hygiene.

There are some steps that make sense for almost all of us, says Eva Galperin, director of cybersecurity at the Electronic Frontier Foundation. Those include using strong passwords, two-factor authentication, and downloading the latest security updates.

She and other experts make a distinction between privacy and security when it comes to your data. Security generally refers to protecting against someone trying to access your stuff — such as stealing your credit card number or hacking your accounts. Privacy is more often used to talk about keeping your movements from being tracked for purposes of advertising or surveillance.

It turns out that the steps to protect your security are more clear-cut than those for privacy — but we'll come back to that.


Use strong passwords or passphrases for your accounts. Longer than a password, passphrases should be strong and unique for each site. Don't use 1234. Bring some randomness and special characters into it. And don't use the same password for different websites: You don't want all your accounts to be compromised just because one gets hacked.

Use a password manager to keep track of your passwords, Galperin says — then all you have to do is remember the passphrase for your password manager.
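To make the "long, random and unique" advice concrete, here is a minimal sketch of a diceware-style passphrase generator in Python. The tiny WORDS list and the passphrase helper are illustrative assumptions, not part of any tool mentioned in the article; a real generator would draw from a list of thousands of words.

```python
import secrets

# Toy word list for illustration only; real diceware-style generators use
# large lists (e.g., the EFF's 7,776-word list).
WORDS = ["correct", "horse", "battery", "staple", "orbit", "velvet", "mango", "quartz"]

def passphrase(n_words=5, sep="-"):
    # secrets.choice draws from the OS's cryptographic random source,
    # unlike random.choice, so it is suitable for security purposes
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())  # e.g. "velvet-staple-orbit-mango-horse"
```

Because the words are chosen independently and uniformly, the strength grows with the size of the word list and the number of words, not with how clever the phrase looks.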

Turn on two-factor authentication for your important accounts. You've seen this: Usually you're asked to put in your mobile number so that you can receive a text with an additional number you input before you can log in.

That's the most common type of two-factor authentication — but it's not the strongest, Galperin says, because SMS messages can be intercepted by your Internet provider, law enforcement or the government.

If you want to go a step further, Galperin recommends using an application that sends the second factor to an app on your phone, such as Authy or Google Authenticator, as these are harder to intercept. (Full disclosure here: NPR receives funding from Google and Facebook.) You can also use a physical key you carry with you that plugs into your computer's USB port and serves as the second factor.
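Under the hood, apps like Authy and Google Authenticator compute those codes with the standard TOTP algorithm (RFC 6238, built on RFC 4226's HOTP). A minimal sketch in Python using only the standard library; the function name and parameters here are illustrative, not any app's real API:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at_time=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # the moving factor is the number of 30-second steps since the Unix epoch
    counter = int((time.time() if at_time is None else at_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # dynamic truncation (RFC 4226): take 4 bytes at an offset given by
    # the low nibble of the last digest byte, mask the sign bit
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The code depends only on a shared secret and the current time, which is why an authenticator app works offline, and why this second factor can't be intercepted in transit the way an SMS can.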


Download the latest security updates.

Those nudges you get from your computer or phone to install the latest security update? You should download those.

"Most applications, when they're compromised, are not compromised by scary zero-day bugs that nobody knows about," Galperin says. "They are compromised by problems that everybody knows exist that have been publicly reported, and that the company has fixed and they have issued a patch in their security update. But if you do not take the security update, you do not get the benefit of the work of the security engineers at that company."


2. Beware of phishing.

Not all attacks on our security come through malware or hackers invisibly breaking into your account. It's common that we're tricked into handing over our passwords or personal information to bad actors.

These attempts can happen via email, text message or a phone call. And generally they're trying to get your username and password, or perhaps your Social Security number. But there are often signs that these messages aren't legit: spelling or grammar errors, links to websites other than the ones they claim to point to, or an email sent from a weird domain.

If it feels fishy, it might be phishing.
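One of the signs above, a link pointing somewhere other than where it claims to, can be checked mechanically. A rough sketch, assuming the visible link text contains a domain name; the looks_phishy helper is hypothetical, not a real library call:

```python
from urllib.parse import urlparse

def looks_phishy(display_text, href):
    """Flag a link whose visible text names one domain but whose
    target URL points somewhere else entirely."""
    shown = display_text if "://" in display_text else "https://" + display_text
    shown_host = urlparse(shown).hostname
    actual_host = urlparse(href).hostname
    if shown_host is None or actual_host is None:
        return True  # unparseable links are treated as suspect
    # allow exact matches and legitimate subdomains (www.npr.org for npr.org)
    return not (actual_host == shown_host or actual_host.endswith("." + shown_host))
```

Mail clients and browsers do more sophisticated versions of this comparison, but the principle is the same: the domain the link actually resolves to is what matters, not the text you see.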


3. Protect what matters most.

Depending on your situation, you might want to take additional precautions to safeguard your privacy and security.

To figure out what steps people should take to safeguard their stuff, Galperin suggests you make a security plan. The Electronic Frontier Foundation has a guide to doing this, which starts by asking yourself these questions:

  • What do I want to protect?
  • Whom do I want to protect it from?
  • How bad are the consequences if I don't?
  • How likely is it to need protecting?
  • And how much trouble am I willing to go through to try to protect it?

Resources For Securing Your Data

The Surveillance Self-Defense site from the Electronic Frontier Foundation is a good place to start. Here's its guide to making your own security plan and figuring out what you most want to protect.

From Tactical Tech, here are handy how-to kits for different scenarios, including securing your data, increasing your online privacy and making your phone less addictive.

You can use the answers to those questions to focus your efforts on securing the things that matter most to you.

4. Delete some apps from your phone. Use a browser instead.

Matt Mitchell is a tech fellow at the Ford Foundation, and the founder of CryptoHarlem, an organization that teaches people to protect their privacy, including from surveillance.

Apps can learn a lot about you due to all the different types of data they can access via your phone. Seemingly harmless apps – like say, a flashlight app — could be selling the data they gather from you.

That's why Mitchell recommends "Marie Kondo-ing" your apps: Take a look at your smartphone and delete all the apps you don't really need. For many tasks, you can use a browser on your phone instead of an app.

Privacy-wise, browsers are preferable, because they can't access as much of your information as an app can.

I mentioned to Mitchell that even though I use Facebook and Twitter, I don't have those apps on my phone — partly so that I'll use them less, and partly for privacy reasons. I wanted to know — did I accomplish anything by not having those apps on my phone?

"You've accomplished a lot," he says. He compares it to oil companies turning crude into petrol: Your data can be turned into profit for these companies. "Every time you don't use an app, you're giving them less data, which is less money."

Mitchell says that's true even if you've been on Facebook a long time, and it feels like the company already knows everything about you. He compares it to smoking: It's never too late to cut back or quit — you'll still benefit by giving it less data to harvest.

5. To protect your chats, use an encrypted app for messaging.

If you want the contents of your messages to be secure, it's best to use an app that has end-to-end encryption, such as Signal or WhatsApp. That means you and the recipient can read the message you send — but no one in the middle.

But even though the contents of your messages are protected by encryption in apps such as Signal and WhatsApp, your metadata isn't — and someone could learn a lot about you from your metadata, Galperin warns. She compares it to what you can learn just by looking at the outside of an envelope in the mail: who sent it to whom, when and where it was sent from.
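Galperin's envelope analogy can be made concrete with a toy sketch: the body of a message is encrypted, but the routing metadata travels in the clear. The keystream below is a deliberately simplified stand-in, not a real cipher; actual messengers use vetted protocols such as Signal's.

```python
import hashlib
import time

def _keystream(key, n):
    # Toy keystream derived from SHA-256(key || counter). Illustration
    # only; NOT a real cipher. Do not use for actual secrets.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def seal(key, sender, recipient, body):
    plaintext = body.encode()
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, _keystream(key, len(plaintext))))
    # the body is unreadable without the key, but the envelope fields
    # (who, to whom, when) travel in the clear, like the outside of a letter
    return {"from": sender, "to": recipient,
            "sent_at": int(time.time()), "ciphertext": ciphertext.hex()}

def open_sealed(key, envelope):
    ciphertext = bytes.fromhex(envelope["ciphertext"])
    return bytes(c ^ k for c, k in zip(ciphertext, _keystream(key, len(ciphertext)))).decode()
```

Anyone who handles the envelope, a server, a carrier, an eavesdropper, can read the "from," "to" and "sent_at" fields without ever touching the key. That is exactly the metadata exposure Galperin describes.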

And WhatsApp is owned by Facebook — so when you share your contacts with WhatsApp, Facebook is getting that info, though it can't read the contents of your messages.

If you're on an iPhone, iMessages are encrypted when you're messaging another iOS device — but not when you're messaging an Android phone. Signal offers encrypted messaging on both Android and iPhone.

What about Facebook Messenger? Jen King, director of privacy at Stanford Law School's Center for Internet and Society, advises against using the Messenger app.

The app "has access to far more info on your phone than using Facebook through a browser," she says, recommending something such as WhatsApp or regular SMS texting instead.

And if encryption matters to you, be careful about backing up your chats to the cloud. If you back up your WhatsApp messages to iCloud or Google Drive, for example, they're no longer encrypted.

"That backup is just a database. And that database is easy for someone to open and read," Mitchell says, if they were able to access your cloud account. To keep your messages from prying eyes, turn off cloud backups and delete existing WhatsApp backups from iCloud or Google Drive.


6. Turn off ad personalization.

Whenever possible, Mitchell recommends going into your settings and turning off ad personalization, which often gives companies permission to do invasive tracking.

Opting Out Of Ad Personalization On Some Major Platforms

Google and Android

Here's a link to limit ad personalization on Google and Android.

Apple

This page shows you how to opt out of ad personalization on Apple. As of this writing, it hasn't been updated for iOS 14. If you have updated to iOS 14, go to Settings > Privacy > Apple Advertising > turn off Personalized Ads.

Facebook

  • On this page, you can go to the ad settings tab and toggle the settings to not allowed.
  • This page has steps to disconnect the off-Facebook activity that other businesses share with Facebook, and to clear that history.
  • On the Off-Facebook activity page, under What You Can Do, you can click on More Options > Manage Future Activity > and toggle it to off. (This page has those steps.)

This page explains how to opt out of ad personalization.

He also recommends going to myactivity.google.com and deleting everything you can. On the left, there's a tab that says "Delete activity by." Select "All time." On your My Google Activity page, you can turn off Web & App Activity, Location History and YouTube History.

"It will show you every search term and everything you've ever done, every YouTube video you've ever looked at, all that stuff," he says. "It'll say, are you sure you want to delete this? 'Cause if you delete this, it might affect some stuff." Mitchell says: Delete it.

7. It's difficult to protect your privacy online if there aren't laws to protect your privacy online.

Tighter privacy settings only get you so far without laws that protect your privacy, says Ashkan Soltani, the former chief technologist for the Federal Trade Commission and one of the architects of the 2018 California Consumer Privacy Act.


There are laws around health information and credit and financial information, he explains, and some states have Internet privacy-related laws.

But nationally, the U.S. doesn't have a universal data privacy law safeguarding everyday online privacy.

Soltani says he rarely recommends steps such as using ad blockers or VPNs for most people. They require too much attention and persistence to deliver on privacy, and even then they are limited in their effectiveness.

"The incentives are so high on the other side," Soltani says, "to uniquely identify people and track them that [users] will never have enough motivation and incentive to do it to the degree of this multibillion dollar ad tech industry."

So how do you protect your privacy? Get involved and call your congressperson, he says — tell the policymakers that you care about online privacy.

8. Start small and take it one step at a time.

Faced with this landscape, getting a tighter hold on your digital privacy and security can feel daunting. But Galperin has this sound advice: Just do a little bit at a time.

You don't need to make a list of all of your accounts to integrate into a password manager — you can just do each account as you log into it.

Even just doing the basics — strengthening your passwords, turning on two-factor authentication and watching out for scammers — can make your accounts a lot more secure. Then keep going: There are a lot of other steps you might want to take, depending on your needs.

We're going to be on the Internet for a long time. The more each of us understands how our data are collected and used — and how to keep private what we want to keep private — the better, safer and healthier our digital lives will be.

The podcast portion of this episode was produced by Audrey Nguyen. She also contributed research.


Slouching Toward ‘Accept All Cookies’

When everything we do online is data to be harvested, resignation is easy. But there’s a better way to think about digital privacy.


We are all shedding data like skin cells. Almost everything we do with, or simply in proximity to, a connected device generates some small bit of information—about who we are, about the device we’re using and the other devices nearby, about what we did and when and how and for how long. Sometimes doing nothing at all—merely lingering on a webpage—is recorded as a relevant piece of information. Sometimes simply walking past a Wi-Fi router is a data point to be captured and processed. Sometimes the connected device isn’t a phone or a computer, as such; sometimes it’s a traffic light or a toaster or a toilet. If it is our phone, and we have location services enabled—which many people do, so that they can get delivery and Find My Friends and benefit from the convenience of turn-by-turn directions—our precise location data are being constantly collected and transmitted. We pick up our devices and command them to open the world for us, which they do quite well. But they also produce a secondary output—all those tiny flecks of dead skin floating around us.

Our data are everywhere because our data are useful. Mostly to make people money: When someone opens up their phone’s browser and clicks on a link—to use the most basic example—a whole hidden economy whirs into gear. Tracking pixels and cookies capture their information and feed it to different marketers and companies, which aggregate it with information gleaned from other people and other sites and use it to categorize us into “interest segments.” The more data gathered, the easier it is to predict who we are, what we like, where we live, whom we might vote for, how much money we might have, what we might like to buy with it. Once our information has been collected, it ricochets around a labyrinthine ad-tech ecosystem made up of thousands of companies that offer to make sense of, and serve hyper-targeted ads based on, it.
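The tracking-pixel mechanism described above is simple enough to sketch: the "pixel" is a 1×1 image whose real job is the log entry created on the server when it's fetched. The parameter names (uid, page) are illustrative assumptions, not any vendor's actual schema.

```python
from urllib.parse import urlparse, parse_qs

# a 1x1 transparent GIF: the actual "pixel" the tracking server returns
PIXEL_GIF = bytes.fromhex(
    "47494638396101000100800000000000ffffff21f90401000000002c"
    "00000000010001000002024401003b"
)

def record_hit(request_path, user_agent, client_ip):
    """Turn one pixel request into the kind of record a marketer keeps.
    Every page embedding the pixel yields one of these per visitor."""
    params = parse_qs(urlparse(request_path).query)
    return {
        "visitor_id": params.get("uid", [None])[0],  # cookie-derived ID
        "page": params.get("page", [None])[0],       # which page was viewed
        "user_agent": user_agent,                    # browser/device fingerprint input
        "ip": client_ip,                             # coarse location signal
    }
```

The browser just sees a tiny invisible image; the server sees who loaded which page, from what device, at what address, and can join those records across every site carrying the same pixel.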

Our privacy is what the internet eats to live. Participating in some part or another of the ad-tech industry is how most every website and app we use makes money. But ad targeting isn’t the only thing our data are good for. Health-care companies and wearables makers want our medical history and biometric data—when and how we sleep; our respiratory rate, heart rate, steps, mile times; even our sexual habits—to feed us insights via their products. Cameras and sensors, on street corners and on freeways, in schools and in offices, scan faces and license plates in order to make us safer or identify traffic patterns. Monitoring software tracks students taking tests and logs the keystrokes of corporate employees. Even if not all of our information goes toward selling ads, it goes somewhere . It is collected, bought, sold, copied, logged, archived, aggregated, exploited, leaked to reporters, scrutinized by intelligence analysts, stolen by hackers, subjected to any number of hypothetical actions—good and bad, but mostly unknowable. The only certainty is that once our information is out there, we’re not getting it back.

It’s scary and concerning, but mostly it’s overwhelming. In modern life, data are omnipresent. And yet, it is impossible to zoom out and see the entire picture, the full patchwork quilt of our information ecosystem. The philosopher Timothy Morton has a term for elements of our world that behave this way: A hyperobject is a concept so big and complex that it can’t be adequately described. Both our data and the way they are being compromised are hyperobjects.

Climate change is one too: If somebody asks you what the state of climate change is, simply responding that “it is bad” is accurate, but a wild oversimplification. As with climate change, we can all too easily look at the state of our digital privacy, feel absolutely buried in bad news, and become a privacy doomer, wallowing in the realization that we are giving our most intimate information to the largest and most powerful companies on Earth and have been for decades. Just as easy is reading this essay and choosing nihilism, resigning yourself to being the victim of surveillance, so much so that you don’t take precautions.

These are meager options, even if they can feel like the only ones available. Digital privacy isn’t some binary problem we can think of as purely solvable. It is the base condition and the broader context of our connected lives. It is dynamic, meaning that it is a negotiation between ourselves and the world around us. It is something to be protected and preserved, and in a perfect world, we ought to be able to guard or shed it as we see fit. But in this world, the balance of power is tilted out of our reach. Imagine you’re in a new city. You’re downloading an app to buy a ticket for a train that’s fast approaching. Time is of the essence. You hurriedly scroll through a terms-of-service agreement and, without reading, click “Accept.” You’ve technically entered a contractual agreement. Now consider that in such a moment, you might as well be sitting at a conference table. On one side is a team of high-priced corporate lawyers, working diligently to shield their deep-pocketed clients from liability while getting what they need from you. On the other side is you, a person in a train station trying to download an app. Not a fair fight.

So one way to think of privacy is as a series of choices. If you’d like a service to offer you turn-by-turn directions, you choose to give it your location. If you’d like a shopping website to remember what’s in your cart, you choose to allow cookies. But companies have gotten good at exploiting these choices and, in many cases, obscuring their true nature. Clicking “Agree” on an app’s terms of service might mean, in the eyes of an exploitative company, that the app will not only take the information you’re giving up but also sell it to, or share it with, other companies.

Understanding that we give these companies an inch and they take a mile is crucial to demystifying their most common defense: the privacy paradox. That term was first coined in 2001 by an HP researcher named Barry Brown who was trying to explain why early internet users seemed concerned about data collection but were “also willing to lose that privacy for very little gain” in the form of supermarket loyalty-rewards programs. People must not actually care so much about their privacy, the argument goes, because they happily use the tools and services that siphon off their personal data. Maybe you’ve even convinced yourself of this after almost two decades of devoted Facebooking and Googling.

But the privacy paradox is a facile framework for a complex issue. Daniel J. Solove, a professor at George Washington University Law School, argues the paradox does not exist, in part because “managing one’s privacy is a vast, complex, and never-ending project that does not scale.” In a world where we are constantly shedding data and thousands of companies are dedicated to collecting it, “people can’t learn enough about privacy risks to make informed decisions,” he wrote in a 2020 article. And so resignedly and haphazardly managing our personal privacy is all we can do from day to day. We have no alternative.

But that doesn’t mean we don’t care. Even if we don’t place a high value on our personal data privacy, we might have strong concerns about the implications of organizations surveilling us and profiting off the collection of our information. “The value of privacy isn’t based on one’s particular choice in a particular context; privacy’s value involves the right to have choices and protections,” Solove argues. “People can value having the choice even if they choose to trade away their personal data; and people can value others having the right to make the choice for themselves.”

This notion is fundamental to another way to think of privacy: as a civil right. That’s what the scholar Danielle Keats Citron argues in her book The Fight for Privacy . Privacy is freedom, and freedom is necessary for humans to thrive. But protecting that right is difficult, because privacy-related harm is diffuse and can come in many different forms: At its most extreme, it can be physical (violence and doxxing), reputational (the release of embarrassing or incorrect information), or psychological (the emotional distress that comes along with having your intimate information taken from you). But, according to work by Solove and Citron, proving harm that goes beyond concrete economic loss is difficult in legal terms.

Citron argues in her book that we need a new social compact, one that includes civic education about privacy and why it is important. Simply understanding our right to privacy won’t vaporize overly permissive, opt-out data collection. It won’t completely correct the balance of power. But it will begin to give us a language for what is at stake when a new company or service demands our information with few safeguards. This education is not just for children but for everyone: executives, tech employees, lawmakers. It is a way to make the fight a bit fairer.

And how should we think about our data—all that digital dandruff? Scale is part of the problem here: Giving up an individual piece of location data may not feel all that meaningful, but having all of your movements tracked might constitute a violation. Context is also important: A piece of private sexual-health data may be a guarded secret of life-and-death import to the person it originated from; to a health-care conglomerate, that data point may be worth a fraction of a fraction of a cent. But when data are sliced and categorized and placed into profiles and buckets, their value increases. In 2020, Facebook made 97.9 percent of its revenue—nearly $85 billion—off of targeted ads, pinpointed by such data collection. Data, in the aggregate, is an asset class, one that powers innovative technologies and inflates bottom lines.

In a 2019 essay, the technologist Can Duruk discussed an analogy that, he admits, is a bit cliché: Data is the new oil. Extracting it is dirty, and storing it is dangerous. “We are barely recognizing the negative externalities of decades of oil production and consumption now, and it took us almost destroying the planet,” he writes. “We should do a better job for data.”

It’s interesting to imagine a society that would force companies to treat data as an oil-like commodity, something valuable, rather than digital ephemera in inexhaustible supply—where not only would the environmental toll of leaks and spills be remedied but victims could attempt to hold liable those trusted with storage. Maybe we’d demand a sort of supply-chain transparency to trace the flow of the product around the world. Maybe we’d find a way to quantify the externality.

Digital privacy’s climate-change analogy is not perfect, but when it comes to calls to action, the parallel is helpful. No single law or innovation could adequately reshape the world we’ve spent decades building. Quick fixes or sweeping legislative changes may very well have unintended consequences. We cannot totally reverse what we’ve put into motion. But there is always a reason to push for a better future. Last year, the environmentalist, author, and activist Bill McKibben wrote about a climate question he hears frequently: How bad is it? He is unsparing in his assessment but never overly alarmist. “Despair is not an option yet,” he writes. “At least if it’s that kind of despair that leads to inaction. But desperation is an option—indeed, it’s required. We have to move hard and fast.”

When reckoning with a subject as complex and fundamental as our digital privacy, metaphor is appealing—I’ve certainly reached for it throughout this essay. Our information is oil: a pollutant, a liability, a thing that powers the world. It’s skin cells: floating all around us. It’s a hyperobject: impossible to understand in its entirety all at once. If our data are what the internet feeds off of, maybe each piece—every datum, every bit of information from every tiny thing we do—is a calorie: incredibly powerful in the aggregate but invisible and incomprehensible to the naked eye, a sort of hypo-object.

We keep grasping for these metaphors because all are helpful, but none is quite sufficient. The internet as we know it is a glorious, awful, intricate, sprawling series of networks that needs our information in order to function. We cannot go back to a time before this was true—before turn-by-turn directions and eerily well-targeted ads, before we carried little data-collection machines in our pockets all day—nor would all of us want to. But we can demand much more from the reckless stewards of our information. That starts with understanding what exactly has been taken from us. The fight for our privacy isn’t just about knowing what is collected and where it goes—it is about reimagining what we’re required to sacrifice for our conveniences and for a greater economic system. It is an acknowledgment of the trade-offs of living in a connected world, paired with a focus on what humans need to flourish. What is at stake is nothing less than our basic right to move through the world on our terms, to define and share ourselves as we desire.


Early debates on privacy began at the end of the nineteenth century, when the potential intrusion of photography and the (tabloid) press was first recognized. When contrasted with the concerns that we face today due to the smart devices surrounding us, collecting data, and influencing our opinions and behavior, the old worries look quite innocent. Recent technology has led to previously unimagined ways of obtaining information about people, for observing them, listening in on their conversations, monitoring their activities and locating their whereabouts. These are not simply “new technologies”: they fundamentally change the social practices in which they are embedded. Furthermore, the problem is not simply that having a smartphone enables companies to collect huge amounts of personal data, but that this data is used to create profiles of users that can be exploited for political and commercial purposes.

Yet there are also social changes of an entirely different sort that have, in various ways, produced constant shifts in the boundaries separating the private and the public realms. These changes include, for example, the fact that women can no longer simply be assigned to the realm of domestic and family labor but are increasingly playing—and wanting to play—an equal role in gainful employment and the public sphere. Another social change is that ever since the 1960s, intimacy and sexuality are no longer banished to the private domain but are now openly portrayed and displayed in a whole range of (social) media.

An analysis of these changes in the societal meanings of the private and the public shows that interest in re-conceptualizing privacy—which is embedded in a broader political and legal endeavor of finding and creating appropriate privacy protections—is due to three distinct social-historical processes. Firstly, recent developments in information technology threaten the protection of personal privacy in fundamentally new ways; secondly, there have been radical changes in the relation between the sexes, prompting a concomitant reconfiguration of the private sphere; and thirdly, there has been an intrusion of intimacy into the public realm through previously private themes that have turned public, accompanied by shifts in notions of individuality, subjectivity, and authenticity. These developments suggest that there is not one history of the concept of privacy, but that the rules that protect privacy today (and the reflection that has gone along with those rules) have been driven by developments and concerns in different political and social areas. The history of privacy may therefore include more than what counts as “private” at any particular time (Westin 1967; Elshtain 1981; Benn & Gaus 1983; B. Moore 1984; Ariès & Duby 1987; Weintraub & Kumar 1997; McKeon 2007; Vincent 2016; Igo 2018). Finally, these developments bring to light the thoroughly conventional nature of the separation between public and private life.

Against this background, we can see that there is no single definition, analysis or meaning of the term “privacy”, either in ordinary language or in philosophical, political and legal discourse. The concept of privacy has broad historical roots in legal, ethical, sociological and anthropological discourses. In this article, we will first focus on the histories of privacy in various discourses and spheres of life. We will also discuss the history of legislating privacy protections in different times and (legal) cultures. In the second part, we will consider a range of critiques of privacy—both domestic privacy and the right to privacy—and all the relevant arguments and counterarguments forming those debates.

The third part of this article is devoted to substantial discussions of privacy in the literature. This literature distinguishes between descriptive accounts of the meaning of privacy (which describe what is in fact protected as private), and normative accounts of privacy. We will also review discussions that treat privacy as an interest with moral value, and those that refer to it as a moral or legal right that ought to be protected only by social conventions or also by the law. As a starting point, we will look more precisely at the various meanings of “privacy”. We will present the semantics of the concept, which some authors claim to be not unifiable, possessing only a Wittgensteinian family resemblance (Solove 2008). The description of what is in fact protected as private is followed by normative accounts of privacy defending its value, and the extent to which it should be protected. The question of whether privacy has to be protected as a conventional interest, a moral right, or a legal right, has been contested for a long time.

The final section of this article is the longest and most extensive. There, contemporary debates on privacy in public discourse will be considered, as well as a range of philosophical, legal, and anthropological theories, from privacy and health to group privacy, the social dimensions of privacy, and the relationship between privacy and power. In the end, it will be concluded that the many debates regarding privacy, across very different fields of society, show that the problem of privacy determines to a large degree the personal, social and political lives of citizens. A free, autonomous life of well-lived relations—intimate, social, and political—is becoming increasingly endangered. Debates about privacy are therefore becoming more and more important, not just in academia, but in the societal public sphere as well.

  • 1.1 The History of Conceptualizing the Private Sphere
  • 1.2 The History of Informational Privacy
  • 1.3 History of Legal Protection
  • 2.1 Thomson’s Reductionism
  • 2.2 Posner’s Economic Critique
  • 2.3 The Communitarian Critique
  • 2.4 The Feminist Critique
  • 3.1 Semantics
  • 3.2 Definitions and Meanings
  • 3.3.1 Intrinsic vs Instrumental
  • 3.3.2 Access Simpliciter
  • 3.3.3 Controlling Access
  • 3.3.4 Controlling Access: Three Dimensions
  • 3.3.5 Decisional Privacy
  • 3.3.6 Informational Privacy
  • 3.3.7 Local Privacy
  • 4.1 Recent Debates on the Value or Function of Privacy
  • 4.2 The Conflict between Privacy and Other Values or Rights
  • 4.3 The Cultural Relativity of the Value of Privacy
  • 4.4 The Democratic Value of Privacy
  • 4.5 Group Privacy
  • 4.6 Social Dimensions of Privacy
  • 4.7.1 The Datafication of Life: Work
  • 4.7.2 The Datafication of Life: Commercial Health Apps
  • 4.7.3 The Datafication of Life: The Privacy Paradox
  • 4.7.5 The Datafication of Life: The Quantified Self and the Duty to Protect One’s Own Privacy
  • 4.8 The Privacy of Animals
  • 4.9 Epistemological Issues
  • 4.10 Privacy, Surveillance, and Power
  • 4.11 Privacy, Colonialism, and Racism
  • 4.12 The Future of Privacy
  • Rights/Regulations/Decisions
  • Other Internet Resources
  • Related Entries

1. The History of Privacy

To understand the history of privacy, one must first consider

  • the history of the distinction between the private and the public sphere, in both ancient and modern liberal thought;
  • conceptualizations of informational privacy; and
  • the history of a legal right to privacy.

These notions are all connected. Separating them here is done principally for heuristic reasons.

Aristotle’s distinction between the public sphere of politics and political activity, the polis, and the private or domestic sphere of the family, the oikos, is the starting point for philosophical discussions of privacy (Politics 1253b, 1259b). Aristotle’s classic articulation of the private domain is one of necessity, restriction, confinement, and subjection to the laws of nature and reproduction. For Aristotle (and for a modern Aristotelian such as Hannah Arendt), there is a clear social ontology that makes it seem natural for certain things, persons and activities to be regarded as private, and others as public. The private domain is the domain of the household,

the sphere where the necessities of life, of individual survival as well as of continuity of the species, [are] taken care of and guaranteed. (Arendt 1958 [1998: 45])

Although there has been persistent concern for domestic privacy throughout history and across many cultures (in cultural theory and [art] history, for instance; see Ariès 1960 [1962]; Ariès & Duby 1985 & 1987; Vincent 2016), in philosophical theory there remains a research gap between Aristotle’s theory of privacy and classical liberal theory, starting with Hobbes and Locke. This is in contrast to (art-)historical analyses, which comprehensively consider domestic privacy from the early Middle Ages until the early twentieth century (see especially the very informative Vincent 2016).

In liberal theory, the public/private distinction is taken to refer to the appropriate realm of governmental authority as opposed to the realm reserved for self-regulation, along the lines initially analyzed by Locke in his Second Treatise on Government (Locke 1690), and later by John Stuart Mill in his essay On Liberty (Mill 1859). The distinction arises again in Locke’s discussion of property, which can also be found in his Second Treatise. In the state of nature, all the world’s bounty is held in common and is in that sense public. But one possesses oneself and one’s own body, and one can also acquire property by combining one’s labor with it. These cases are considered one’s private property. In the liberal tradition, Rawls also distinguishes between the private (which includes the domestic sphere) and the public.

As will be discussed in §2, classical liberal theory from Hobbes and Locke to Rawls, together with the naturalistic distinction between the private-as-domestic and the public, has been criticized by feminist and contemporary liberal thinkers. That the division between the private and the public is always conventional (and not natural) in character has been maintained by almost all theories of privacy dating from the last five or so decades. New approaches to the theory of privacy call for a redescription of the private, and a reformulation of the idea of equal rights to privacy and freedom that is no longer inconsistent with the principles of a liberal democracy based on equal rights (Allen 1988 & 1989; Jean Cohen 1992 and 2002; see also §3 below).

The recent history of the moral right to informational privacy is linked to the liberal discourse on privacy and freedom. It found its start with a well-known essay by Samuel Warren and Louis Brandeis, “The Right to Privacy” (1890; see also Gordon 1960; Prosser 1960; Glancy 1979; Keulen & Kroeze 2018: 32 and 44–45; Sax 2018: 147–150). Citing “political, social, and economic changes” and a recognition of “the right to be let alone”, which counts as the first definition of informational privacy, Warren and Brandeis argued that existing law afforded a way to protect the privacy of the individual, and they sought to explain the nature and extent of that protection (1890: 193). Focusing in large part on increasing levels of publicity enabled by the burgeoning newspaper industry and recent inventions such as photography, they emphasized the invasion of privacy brought about by the public dissemination of details relating to a person’s private life. Warren and Brandeis felt that a variety of existing cases could be protected under a more general right to privacy, which would safeguard the extent to which one’s thoughts, sentiments, and emotions could be shared publicly by others. Urging that they were not attempting to protect any items or intellectual property produced, but rather the peace of mind attained by such protection, they claimed that the right to privacy was based on a principle of “inviolate personality”, which was part of a general right to the immunity of the person, “the right to one’s personality” (1890: 195 and 215).

Warren and Brandeis believed that the privacy principle was already part of common law dealing with the protection of one’s home, but new technology made it important to explicitly and separately recognize this protection under the name of a right to privacy. They suggested that limitations on this right could be determined by analogy with the law of defamation and slander, and it would not prevent publication of information about public officials running for office, for example. Warren and Brandeis thus laid the foundation for a concept of a right to privacy that has become known as the right to control over information about oneself; their central and most influential concept remains the right to be left alone (1890: 193).

Although the first legal privacy cases after the publication of their paper did not recognize a right to privacy, it wasn’t long before public discourse, along with both state and federal courts in the US, began endorsing and expanding that right. In an attempt to systematize and more clearly describe and define the new right of privacy being upheld in tort law, William Prosser wrote that what had emerged were four different interests in privacy:

  • intrusion into a person’s seclusion or solitude, or into his private affairs;
  • public disclosure of embarrassing private facts about an individual;
  • publicity placing one in a false light in the public eye; and
  • appropriation of one’s likeness for the advantage of another (1960: 389).

One could say that Prosser’s approach is in fact a legal one, since he is examining the right to privacy in tort law.

The history of informational privacy is rather short, as we saw, and although many developments have taken place since the 1960s, these are better discussed in the next section, as well as in the systematic sections later in this article (but see Igo [2018] for a general overview of modern American history).

During the twentieth century, the right to privacy was advanced to the status of a human right. It was a completely novel entry into the catalog contained in the Universal Declaration of Human Rights, and it is unusual in lacking any predecessors in state constitutions or basic laws. As Diggelmann and Cleis (2014: 441) have pointed out, “the right’s potential was dramatically underestimated at the time of its creation”. Having originated in a somewhat accidental manner, the right to privacy has since become one of the most important human rights (Weil 1963; Volio 1981; Michael 1994; Feldman 1997; Richardson 2017). Article 12 of the 1948 Universal Declaration of Human Rights reads:

No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.

Twenty years later, the same right was enshrined in Article 17, Paragraph 1 of the International Covenant on Civil and Political Rights (1966), albeit in slightly different terms.

Turning our focus to the history of a right to privacy in Europe in particular, Article 8 of the European Convention on Human Rights, drafted in 1950, reads somewhat differently while expressing the same idea:

Everyone has the right to respect for his private and family life, his home and his correspondence.

In 1981, the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (Convention 108) provided specific rules for the protection of personal data (Dalla Corte 2020: 125). In 2000, the Charter of Fundamental Rights of the European Union differentiated formally, for the first time in an international declaration of rights, between a right to privacy on the one hand and a right to the protection of personal data on the other. Article 7 of the Charter provides a right to “respect for private and family life”, and Article 8 provides a right to the “protection of personal data” (see also González Fuster 2014). The latter plays an important role in securing the right to informational privacy.

Further examination of the history of the rights to privacy and data protection in Europe reveals several important developments. The first of these was the step taken in 1983 by the Federal Constitutional Court of Germany. In its landmark decision on the constitutionality of the 1983 Census Act, the judgment (BVerfG 65, 1(43)) was influential in designating the right to privacy as informational self-determination . The ruling stated that

[if] individuals cannot, with sufficient certainty, determine what kind of personal information is known to certain parts of their social environment, and if it is difficult to ascertain what kind of information potential communication partners are privy to, this could greatly impede their freedom to make self-determined plans or decisions. A societal order, and its underlying legal order, would not be compatible with the right to informational self-determination if citizens were no longer able to tell who knows what kind of personal information about them, at what time and on which occasion.

The next important step was taken in 1995, with the adoption of the Data Protection Directive by the European Union, which was a legally binding directive (Directive 95/46/EC) adopted by EU member states. The purpose of this directive was twofold: to harmonize data protection laws, so as to facilitate cross-border trade by companies, and to protect people’s rights and freedoms when their personal data are used. Thus, the directive was largely based on the rationale of market integration. In 2016, the Data Protection Directive was followed up in Europe by the General Data Protection Regulation (GDPR). The GDPR has been law since 2016, but its provisions became enforceable only in 2018. The GDPR is the most consequential regulatory development in information policy in decades (see Lindroos-Hovinheimo 2021), imposing significant fines on companies that fail to comply. As Hoofnagle et al. (2019) explain, the GDPR brings personal data into a complex and protective regulatory regime. Many data protection principles (for instance about data security) have been incorporated into US law as well, albeit mostly in Federal Trade Commission settlements with companies (Hoofnagle et al. 2019).

The juridification and jurisdiction in the US, especially concerning a constitutional right to privacy, has taken a very different course. On the one hand, there have been many advances with respect to laws protecting informational privacy. While US data protection law is fragmented, there have been recent developments that may result in closer alignment with EU law (e.g., the California Consumer Privacy Act of 2018 and the American Data Privacy and Protection Act—the latter being a Bill introduced in the US House of Representatives in 2022).

On the other hand, the most significant focus in US law is on decisional privacy, the right to make decisions (here with respect to one’s body) without interference from others, i.e., on same-sex marriage and abortion. In 1965, a right to privacy, independent of informational privacy and the Fourth Amendment, was recognized explicitly by the Supreme Court. It has commonly been referred to as the constitutional right to privacy. The right was first recognized in Griswold v. Connecticut (381 U.S. 479), which overturned the convictions of the Director of Planned Parenthood and a doctor at Yale Medical School for dispensing contraceptive-related information, instruction, and medical advice to married persons. Justice Brennan’s explanation contained the now famous sentence:

The right to privacy gives an individual, married or single, the right to be free from unwarranted governmental intrusion into matters so fundamentally affecting a person as the decision whether to bear or beget a child.

The constitutional right to privacy was described by Justice William O. Douglas as protecting a zone of privacy covering the social institution of marriage and the sexual relations of married persons. (For further commentary, see Allen 1988; Jean Cohen 1992; Inness 1992; Tribe 1990; DeCew 1997; Turkington & Allen 1999.)

Despite controversy over Douglas’s opinion, the constitutional privacy right was soon relied on to overturn a prohibition on interracial marriage, to allow individuals to possess obscene matter in their own homes, and to allow the distribution of contraceptive devices to individuals, both married and single. However, the most famous application of this right to privacy was as one justification for a woman’s right to have an abortion, a right defended in the 1973 judgment of Roe v. Wade (410 U.S. 113). It has since been used in subsequent legal decisions on abortion. As the explanation formulated by Justice Blackmun famously put it,

this right of privacy… is broad enough to encompass a woman’s decision whether or not to terminate her pregnancy. (Roe v. Wade, 410 U.S. 113 [1973]: 153; see Dworkin 1994)

Bork (1990), however, views the Griswold v. Connecticut decision as an attempt by the Supreme Court to take a side on a social and cultural issue, and as an example of bad constitutional law (for criticism of Bork’s view, see Inness 1992; Schoeman 1992; Johnson 1994; DeCew 1997).

Regardless of anything else it might be, the right to privacy was seen as a right that incorporates reproductive liberties. Precisely which personal decisions regarding reproductive liberties have been protected by this right to privacy has varied depending on the makeup of the Court at any given time. For example, in the 1986 Bowers v. Hardwick judgment (478 U.S. 186; subsequently overturned in Lawrence v. Texas [2003]), it was decided that this right did not render anti-sodomy laws in Georgia unconstitutional, despite the intimate sexual relations involved.

In June 2022, in Dobbs v. Jackson Women’s Health Organization (No. 19–1392, 597 U.S.), the Supreme Court (then with a majority of Conservative judges) overruled its previous decision in Roe v. Wade and Planned Parenthood v. Casey (505 U.S. 833 [1992]). The majority argued that abortion cannot be counted as a constitutional right, since the constitution does not mention it, and abortion was not “deeply rooted” in American history. More specifically, the court argued that the right to privacy implied by the Fourteenth Amendment does not include a woman’s right to abortion. As a consequence of Dobbs v. Jackson Women’s Health Organization , individual states have the power to regulate access to abortion (see Tribe 2022).

Note that the debate in the US is mostly focused on aspects of decisional privacy. In Europe, the right to same-sex marriage, as well as the right to abortion, are conceived of as rights to personal freedom, and have therefore not been discussed in the context of theories of privacy. (For further commentary on this difference, see §3.)

2. Critiques of Privacy

Reductionists are named for their view that privacy concerns are analyzable or reducible to claims of other sorts, such as infliction of emotional distress or property interests. They deny that there is anything useful in considering privacy as a separate concept. Reductionists conclude, then, that there is nothing coherent, distinctive or illuminating about privacy interests. Probably the most famous fundamental critique of a right to privacy stems from the work of Judith Jarvis Thomson (1975). Having noted that there is little agreement on exactly what a right to privacy entails, Thomson examines a number of cases that have been thought to constitute violations of the right to privacy. On closer inspection, however, Thomson argues that all those cases can be adequately and equally well explained in terms of violations of property rights or rights over the person, such as a right not to be listened to. Ultimately the right to privacy, on Thomson’s view, is merely a cluster of rights, consisting of several irreducible “grand rights” (for instance, the right to property), and a number of reducible “ungrand rights” (such as the right not to be subjected to eavesdropping, the right to keep one’s belongings out of view, and so on). The “right over one’s own person” consists of several ungrand rights, such as (amongst others) the right not to be looked at—but is itself not a grand right (Thomson 1975: 304–306). The different rights in the cluster can be fully explained by, for instance, property rights or rights to bodily security. Therefore, the right to privacy, in Thomson’s view, is “derivative” in the sense that there is no need to find what is common in the cluster of privacy rights, and it is also derivative in its importance and justification. Any privacy violation is better understood as the violation of a more basic right, e.g., the right not to be listened to.

This reductionist view was quickly challenged by Thomas Scanlon, who, in a direct reply to Thomson’s thesis, denies that we have, for instance, a general right not to be looked at:

[a]s far as I can see I have no such general rights to begin with. I have an interest not to be looked at when I wish not to be (…). But rights directly corresponding to these interests would be too broad to be part of a workable system. (1975: 320)

Instead, the right to privacy is grounded in “the special interests that we have in being able to be free from certain kinds of intrusions” (Scanlon 1975: 315). Scanlon’s reply to Thomson can be read as an attempt to find a common ground for different aspects of privacy (DeCew 1997; Rachels 1975; Reiman 1976; Gavison 1980; see also §3 and §4).

Raymond Geuss (2001), on the other hand, seconds Thomson’s reductionist line of criticism but adds to it a more fundamental consideration:

Judith Jarvis Thomson has argued very persuasively that this right does not exist in the sense that it fails to designate any kind of coherent single property or single interest. That does not mean that none of the various things that have come to be grouped under “privacy” are goods—far from it, many of them are extremely important and valuable—only that they are disparate goods, and the perfectly adequate grounds we have for trying to promote them have little to do with one another. (2001: 145)

The very distinction between private and public, he argues, already relies on the assumption that there exists a unified liberal distinction that is set in stone politically, and uncontested. But this assumption displays not only a mistaken conception of the distinction between public and private, but also a mistaken conception of politics. In “real politics”, all distinctions and values are contested. As a result, the liberal distinction is illusory and ideological. According to Geuss, this becomes apparent only when one recognizes the deep heterogeneity of privacy, its reducibility to other interests, and the plurality of very different values attached to its various meanings (Geuss 2001).

Richard Posner (1978) also presents a critical account of privacy. He argues that from an economic perspective, we should not assign a property right to the personal information of individuals when doing so leads to economic inefficiencies. This theoretical claim is linked to an empirical claim that common law follows a similar economic logic. Posner does not appear to deny that there is something that we can call “privacy” and a “right to privacy”. Instead, he argues that the notion of privacy should be attributed in a different way, following an economic analysis of the distribution of property rights to personal information. Strictly speaking, then, Posner does not present a fundamental critique of privacy, but rather an account of privacy which is based on considerations of economic efficiency, and he argues that privacy is protected in ways that are economically inefficient. With respect to information, in Posner’s view privacy should only be protected when access to personal information would reduce its value (for example, allowing students access to their letters of recommendation makes those letters less reliable and thus less valuable, and hence they should remain confidential or private). Focusing on privacy as control over information about oneself, Posner argues that concealment or the selective disclosure of information is often used to mislead or manipulate others, or for private economic gain. The protection of individual privacy is therefore less defensible than others suppose, because it does not maximize wealth.

Communitarian approaches find it suspicious that many recent theories of privacy rely on the concept of individual (negative) freedom as the raison d’être for privacy, and it is this connection between privacy and freedom or autonomy that is called into question (see Roessler 2008: 699). Privacy in communitarian thought is instead conceived of as a realm or dimension of life concerned with specific practices, also (or even primarily) relevant to the community at large. Accordingly, these practices must be understood not as a realm to which the individual has a claim qua autonomous being, but as one conceded to the individual as a member of the community (Sandel 1982; Elshtain 1995; Etzioni 1999 & 2004).

The idea underlying the communitarian point of view, particularly that of Sandel (1982), is that liberal theories of privacy necessarily conceive the self as disembodied and egocentric in nature. This is not only inconsistent in epistemological terms but also normatively undesirable from a political perspective, because communities and communal practices already take priority over the formation of individual identity. Communitarians therefore claim that privacy should not primarily be understood as an individual right to (physical or sexual) self-determination, but rather as protection given to practices that depend on being sheltered from the view of others (Etzioni 1999: 183; 2004: 30). Etzioni’s concept of privacy comprises its decisional, informational, as well as local (the privacy of the home) aspects (Etzioni 1999). The communitarian viewpoint has been criticized, however (e.g., Jean Cohen 2002: 42). These critics argue that it is incorrect to hold that a theory of privacy based on the idea of individual freedom and autonomy cannot at the same time conceive of the self as relational in nature, and as constituted and contextualized in a variety of respects. Feminist theories of privacy insist that individual rights come before communal duties, because it is otherwise impossible to guarantee equal freedom to take decisions pertaining to one’s life and one’s body. In particular, communal practices and traditions may prove repressive and discriminatory, making an individual right to privacy indispensable (Allen 1988: 700; Fraser 1992; Morris 2000).

The feminist critique of the theorization of the private sphere starts by questioning the idea of the realm of privacy as that of the natural, of women, feelings, hearth and home, and of emotional care for the male members of society, as well as the raising of children. The “natural” coding of the separation between private and public, therefore, is one which follows precisely the borderline separating the sexes (Okin 1989; Pateman 1989; Phillips 1991; Jean Cohen 1992; Fraser 1992; Landes 1998; DeCew 2015). The target of this feminist critique is classical liberalism, and the critique has been influential. Rawls—the most influential liberal thinker of the twentieth century—accepted at least part of this criticism as justified, and he revised his own theory as a result (Rawls 2001: 166).

However, in classical liberal theory there is a double interpretation of the private domestic realm. On the one hand, the private domain is valued positively as the domestic sphere that is sheltered; on the other hand, the private sphere is the sphere of women and therefore inferior to the public sphere, according to the coding of a patriarchal society. As a result, the domestic sphere (including the family) is valued and prized as the realm that is sheltered from the demands of a hostile world. However, it is associated with “women”, while the public sphere is associated with “men”. The private is thus characterized as inferior to the public, just as nature is considered inferior in relation to culture (Okin 1991).

In the history of privacy, we are confronted with yet another double reading: even though liberal theory since Hobbes and Locke has advocated equal liberties for all citizens, it has clung to a natural conception of privacy that patently contradicts the notion of equal rights. This is because it grants those rights to men only, and not to women (Locke 1690; Hobbes 1651). As feminist theory has argued, this seems to have little to do with nature and more to do with power and culture. Seen in purely normative terms, nature provides us with no argument as to why certain activities (or persons) should be considered “private”, and others “public” (Pateman 1989; Phillips 1991; Jean Cohen 1992; Fraser 1992; Ortner 1974 [1998]). The classical liberal moral (and later also legal) right to privacy and private freedom has to be separated from the natural interpretation, which still looms large in the background of everyday culture.

It is necessary to examine the feminist critique from yet another angle. In principle, early radical egalitarian feminist approaches are skeptical with respect to any possible conceptualization of privacy. The best-known of these skeptical approaches is the one developed by MacKinnon (1987, 1989, 1991; see also Olsen 1991). For MacKinnon, the appeal to legal or moral rights to privacy is but a further manifestation of the attempt to push women back into an ideologically constituted realm of privacy defined as the non-political or pre-political, and only ever concede them rights insofar as they are seen as different or deviant. Privacy can be dangerous for women when it is used to cover up repression and physical harm inflicted on them, perpetuating the subjection of women in the domestic sphere and encouraging non-intervention by the state. Such a concept of privacy, according to MacKinnon, fails to call the sexual hierarchy into question. Instead, it simply preserves the social power structures that find expression in the correlation of women with the private, and men with the public.

In response to MacKinnon’s argument, one objection is that it fails to make a clear enough distinction between a natural, pre-political concept of privacy on the one hand (which is rejected not only by MacKinnon herself, but also by other theories of privacy) and a legal-conventional concept of privacy on the other (Allen 1988; Jean Cohen 1992). A more reasonable view, according to Anita Allen (1988), is to recognize that while privacy can be used as a shield for abuse, it is unacceptable to reject privacy completely based on harm done in private (see also DeCew 1997: 86). A complete rejection of privacy would result in everything being made public, leaving the domestic sphere open to complete scrutiny and intrusion by the state. Allen and other theorists (such as Pateman 1989: 118–136; see especially Jean Cohen 1992) suggest that societies in general, as well as traditional conceptual divisions between the private and the public, can be criticized and revised (or, as Cohen [1992] puts it, “redescribed”). Therefore, feminists should adopt a concept of privacy that is not in the gender-specific, natural tradition, but is instead oriented towards the notion of freedom (see Allen 1988). The challenge is to find a way for the state to take seriously the domestic abuse that used to be allowed in the name of privacy, while also preventing the state from imposing itself onto the most intimate parts of women’s lives. This means drawing new boundaries for justified state intervention, and thus understanding the public/private distinction in new ways (see §3 and §4 below).

Another feminist perspective on privacy is related to the critique of liberalism in a different way. The approaches to privacy that are linked to freedom and autonomy have been criticized from the perspective of a theory of power (Brown 1995; Cornell 1995). Skepticism towards such approaches arises because they follow from, and are consonant with, other (liberal) dichotomies (such as subject-object or having rights or no rights) that are thought to be essentially exclusionary and discriminatory. It is further argued that such conceptions fail to take into account and criticize the power structures inherent in society, which are therefore also inherent in the structures protecting privacy.

Feminist approaches are far from homogeneous. They range from examples that appear to reject any conceptualization of privacy whatsoever (e.g., Brown 1995; see also the different but equally critical perspective on liberal conceptualizations in Geuss 2001), to those that propose alternative ways of determining privacy, as is the case with Morris, who, discussing Pitkin (1981), argues that

privacy should be reconstructed rather than abandoned, for otherwise it is impossible to think critically about central problems in democratic theory—among them the very possibility of citizens’ representing, or translating into a common language, what is most singular, secret, ineffable, internal, that is, private, about themselves. (2000: 323)

She defends a “positive political theory of privacy” which she understands as being part of a democratic theory (2000: 323).

3. Meaning and Value

When considering the concept of the private, it is sometimes difficult to separate the descriptive element of meaning from the normative determination. The determination of the meaning of privacy often contains clear normative elements, such as when Fried (1968) claims that the meaning of privacy consists in protecting trust and intimacy, or when Nissenbaum (2010) defines privacy as the adequate flow of information (which requires protection). In this article, an attempt will be made to separate the descriptive and normative aspects as clearly as possible. In the following section, an overview of the relation between the concept of privacy and other concepts will first be given, followed by a descriptive overview of the meaning of privacy. Finally, we will discuss the various normative determinations that have been given to privacy.

One initial approach to determining the meaning of “privacy” is to examine how it is related to similar words. This is what is called the “semantics” of privacy. First, “private” should be distinguished from “intimate” (Gerstein 1978; Benn & Gaus 1983; Bok 1982; Allen 1988; Inness 1992; Dworkin 1994; see also the threefold differentiation of the meaning of “private” in Weinstein 1971; for the discussion that follows, see Roessler 2001 [2005: 9–12]). What is intimate can also be private, but is not necessarily so: we also speak, for instance, of forms of “public intimacy” in aesthetic contexts (think of John Lennon and Yoko Ono at the Hilton in Amsterdam). “Intimacy” has erotic or sexual connotations, as well as connotations of proximity and vulnerability that also—but not only—have to do with the exposure of one’s body (Inness 1992). Second, “private” must be distinguished from “secret”. What is private can be secret, but it is not necessarily so. For instance, the question of where I practice my religious beliefs is private, but not necessarily secret. Another example is medical data: these data are informationally private but not secret; they are known to many people (in the health system) and we would not generally call them “secret”. What is secret can be private, but is also not necessarily so—for example, when one speaks of state secrets (Bok 1982: 10–14; Mokrosinska 2020). Semantic overlaps occur when privacy is dependent on something being completely hidden or concealed, in other words on a secret, as with secret diaries or secret ballots. Of relevance here is Wasserstrom (1984), who interprets privacy above all as the realm of what is secret, hidden or concealed, and thus seeks to bring to light the connotations of deception and deceit.

Another important semantic relation is that of the predicate “private” to the predicate “public”; the latter is often defined in opposition to the former. In everyday language, there are two distinct semantic models underlying the various uses of “private” and “public” (Benn & Gaus 1983: 7–10). The first is an “onion” model, which allows one to distinguish between different layers of privacy. The center of the onion is the realm of personal or bodily intimacy and privacy, including not only one’s body, but also one’s private diary, as opposed to which everything else is regarded as “public”. The second layer of the onion comprises the classic realm of privacy, viz. the family and other intimate relationships. In opposition to the family, the outside world of society and the state constitutes the public realm. The outer layer of the onion is society at large—the realm of economic structures or public civil society—that counts as “private” with respect to intervention by the state. It therefore forms yet another realm of privacy in the face of the public realm of the state and its possible interference (Okin 1991).

In a metaphorical sense, the second model in everyday usage lies perpendicular to the first. For this second semantic model, the term “private” is predicated of actions we carry out, or decisions that we make, no matter where we happen to be. Going to church is thus a private matter. In this second sense, the concept of privacy describes a protected sphere or dimension of action and responsibility, where individuals can act in a way that is independent of decisions and influences from the public realm of state institutions and society at large. This second model also comprises informational privacy, since information about myself which I want to keep private (medical data, etc.) is not left at home, in a layer of the onion. I carry it with me wherever I go; privacy therefore has to be applicable wherever we are.

Both of the semantic models mentioned above play a role in the following definitions. From early on, the difficulties of developing a systematic and general definition of privacy have been recognized. We start with the most influential definition (at least since the twentieth century): the definition of the concept given by Justices Warren and Brandeis (see §1.2 ). Warren and Brandeis famously summarized this right to privacy as the right “to be let alone” (1890: 214; for the history of Warren and Brandeis’ interest in privacy see Prosser 1960). They underscore that

the intensity and complexity of life, attendant upon advancing civilization, have rendered necessary some retreat from the world, and man, under the refining influence of culture, has become more sensitive to publicity, so that solitude and privacy have become more essential to the individual; but modern enterprise and invention have, through invasions upon his privacy, subjected him to mental pain and distress, far greater than could be inflicted by mere bodily injury. (Warren & Brandeis 1890: 196)

As the foundation of their conception of the right to privacy, Warren and Brandeis refer to “the dignity […] of the individual” or, as pointed out by Sax in his discussion of Warren and Brandeis,

the idea of the “inviolate personality” (Warren & Brandeis 1890: 205), or, put differently, “the more general right to the immunity of the person,—the right to one’s personality” (Warren & Brandeis 1890: 207). It is ultimately up to the individual to decide how she wants to be, think, and act. (Sax 2018: 149)

It should be mentioned that the aim of their article was not to explicitly define privacy, but rather to answer the question of “whether our law will recognize and protect the right to privacy” (Warren & Brandeis 1890: 196; “our law” refers to the US legal framework). This general definition of the concept of privacy, made in terms of respect for personality, dignity, and “being let alone”, nevertheless prepared the field not only for detailed legal discussions, but also for efforts to define the term from a philosophical perspective.

The differing determinations of the meaning of privacy have been categorized in various ways. The two most prominent categories are reductionism and coherentism, as has been mentioned above. Reductionists are generally critical of the effort to carve out a special category of harms or interests best referred to as unique privacy harms or interests. Coherentists, meanwhile, defend the coherent fundamental value of privacy interests. In contrast, Ferdinand Schoeman introduced a somewhat different terminology. According to Schoeman, a number of authors believe that “there is something common to most of the privacy claims” (1984b: 5). Schoeman refers to this approach as the “coherence thesis”. Positions which

deny both the coherence thesis and the distinctiveness thesis argue that in each category of privacy claims there are diverse values at stake of the sort common to many other social issues and that these values exhaust privacy claims. […] The thrust of this complex position is that we could do quite well if we eliminated all talk of privacy and simply defended our concerns in terms of standard moral and legal categories. (Schoeman 1984b: 5)

These latter theorists are referred to as reductionists . Judith Thomson puts into question the idea that privacy is a distinct concept altogether, insofar as she finds that there is no clear idea of what the right to privacy is. Instead, she suggests that “the right to privacy is itself a cluster of rights” (1975: 306), such as the right to property or the right over the person (see §2.1 ). As has been mentioned in earlier sections, Thomson’s reductionist view was dismissed by Thomas Scanlon. For Scanlon, realms of privacy are always “conventionally defined”, irreducible, and obtain in their own right; that is, they cannot be marked out by means of other rights or claims. “Our zone of privacy,” writes Scanlon,

could be defined in many different ways; what matters most is that some system of limits to observation should be generally understood and observed. (1975: 317–318)

The fact that such limits exist—however varied they may turn out to be—is for Scanlon an indication of the irreducibility of privacy.

Nonetheless, the diversity of possible definitions and characterizations, as well as the many possible fields of application, have continued to pose a challenge. While Scanlon’s reply to Thomson can be read as an attempt to find common ground for different aspects of privacy (Scanlon 1975), Judith DeCew (1997) proposes to systematize the concept of privacy by putting forward a “cluster account” that highlights connections between the different interests covered by the concept, without reducing privacy to these different interests:

I argue that privacy is best understood as a cluster concept covering multiple privacy interests, including those enhancing control over information and our need for independence as well as those enhancing our ability to be self-expressible and to form social relationships. (DeCew 1997: 73)

There are other authors who have questioned the possibility of developing a general, comprehensive definition of privacy. One of the most innovative recent approaches to the meaning (and value) of privacy is that of Helen Nissenbaum. She takes note of “the conceptual quagmire to claim a definition—its definition—of privacy” (2010: 3) and proposes a different approach that renounces the attempt to provide a single, unifying definition. For Nissenbaum, the right to privacy is best understood as a “right to appropriate flow of personal information” (2010: 127). Generally, the appropriate flow of personal information is governed by context-relative information norms. These are characterized by four parameters, which are the specific contexts, the actors, the information types, and (importantly) the transmission principles (see Nissenbaum 2010: 140–141). A transmission principle is a “constraint on the flow of information from party to party in a context” (Nissenbaum 2010: 145). What counts as private information depends on the different norms imposed on the flow of information governing different social contexts, such as the contexts of education, religion, security, or politics (the “contextual integrity” of various contexts; Nissenbaum 2010: 127–231). The adequate transmission principle can, depending on the informational norm, in some contexts (for instance intimate relations) be understood as the control of access of the persons involved. In that sense, Nissenbaum is not strictly arguing against Control-Access approaches, although she argues for their limited use in the general framework of informational privacy. Note that Nissenbaum writes about informational privacy and does not discuss other dimensions of the concept.

Finally, one of the most widely discussed privacy theorists is Daniel Solove (2004, 2008, 2011). Solove famously observed that privacy is a “concept in disarray” (2008: 1) and argues that it is futile to look for a unified definition. Instead, he appeals to Wittgenstein’s idea of “family resemblances”, and proposes understanding privacy as consisting of “many different yet related things” (2008: 9).

It can be concluded from the discussion presented here that the privacy literature contains no clear definition that everyone can agree on. Neither is there a clear scope for privacy: in the US, the conceptual and philosophical discussions regarding the meaning and definition of “privacy” are mostly framed in terms of legal discussions of privacy in the context of US constitutional law. Following this reasoning, a number of theorists defend the view that privacy has broad scope, inclusive of the multiple varieties of privacy issues described by the US Supreme Court, even though there is no simple definition of privacy (see Schoeman 1992; Parent 1983; Henkin 1974; Thomson 1975; Gavison 1980; Bork 1990).

3.3 Normative Approaches

Let us now have a closer look at the normative side of the concept of privacy, that is, at its value or function. As will be seen, when determining the supposedly descriptive meaning of privacy, attempts are usually made to describe privacy at the same time in normative terms. Let us emphasize again that it is particularly difficult to separate the descriptive and normative (value-laden) aspects of the concept of privacy.

With the above in mind, we can begin by separating instrumental and intrinsic approaches to the value of privacy. We speak of the instrumental value of privacy when it is valued for the sake of something else (e.g., intimate relations, the body, freedom, autonomy, or dignity; Stigler 1980; Posner 1981). We speak of the intrinsic value of privacy when it is valued for its own sake, without reference to any other objects, concepts of value, or dimensions in life (Warren & Brandeis 1890; Bloustein 1964; Gerstein 1978; Gavison 1980; Parent 1983).

This idea of an intrinsic value to privacy has, however, been criticized. An example is Fried’s criticism:

It is just because this instrumental analysis makes privacy so vulnerable that we feel impelled to assign to privacy some intrinsic significance. But to translate privacy to the level of an intrinsic value might seem more a way of cutting off analysis than of carrying it forward. (1970: 140)

Fried thus claims that even when we say we value something no matter what, we can still ask the question, “why should this be so?” However, the distinction between intrinsic and extrinsic value is a widely debated philosophical topic in general—not only in relation to privacy. One should therefore not expect it to be settled in the domain of the philosophy of privacy (see entry on intrinsic vs. extrinsic value ).

Access-based approaches have been put forward to answer the question of the meaning and value of privacy. A variety of formulations can be found in the literature (e.g., Thomson 1975; Gavison 1980; Reiman 1995; Allen 2000; Tavani & Moor 2001). Reiman, for instance, defines privacy as “the condition in which others are deprived of access to you” (1995: 30). Similarly, Allen suggests that

privacy refers to a degree of inaccessibility of a person or information about her to others’ five senses and surveillance devices. (2000: 867)

A classic formulation has been offered by Ruth Gavison: “An individual enjoys perfect privacy when he is completely inaccessible to others” (1980: 428). All these suggestions assume that we value privacy and therefore need some restriction of access to the person (see especially Reiman 1995).

Sissela Bok defines privacy as

the condition of being protected from unwanted access by others—either physical access, personal information, or attention. Claims to privacy are claims to control access. (1982: 10)

Privacy is here defined as a condition in which one is protected in various respects from undesired intrusions by other people. Such a broadly-based definition still seems likely to cover the whole range of meanings of the concept of privacy. Similar approaches are also found with Benn (1988) and Schoeman (1992).

The access-based approach has been criticized, however. For instance, it is clear that one does not enjoy privacy after falling into a crevasse, even though this situation complies with the condition of “inaccessibility” (see Schoeman 1984b: 3). If a state of isolation, seclusion or secrecy is enforced and not freely chosen (in other words, when the person in question has no control over access), then one would not describe it as “private”. Westin (1967: 40) describes the solitary confinement of prisoners as an example of “too much” privacy. Furthermore, this criticism shows that we think of privacy as a distinctly interpersonal phenomenon. If I am stranded on a desert island, it makes no sense to say I enjoy complete privacy, because there are no other people present to whom I could be inaccessible (Fried 1968).

Other approaches to the meaning and value of privacy take the idea of control as their very starting point, i.e., control over specific areas of privacy. The so-called control-based approach has been adopted in many writings on privacy in academic literature and beyond (Westin 1967; Fried 1968; Scanlon 1975; Parent 1983; Inness 1992; Bok 1982; Boyd 2010). Most control-based approaches justify this interpretation of the value of privacy with the enabling of freedom: individual freedom and autonomy are not possible without the protection of private life (see Allen 1988; Jean Cohen 2002). It should be pointed out, however, that access-based conceptions of privacy could also imply that we value privacy because it enables freedom. This is in fact Reiman’s position (1995), and so this should not be seen as a defining line of separation between the approaches. Finally, Bloustein argues that “the intrusion [of privacy] is demeaning to individuality, is an affront to personal dignity” (1964: 962).

The classic and very influential approach advocated by Westin defines privacy as control over information:

Privacy is the claim of individuals, groups, or institutions to determine for themselves when, how and to what extent information about them is communicated to others. (1967: 7; see also Gross 1971)

In a similar vein to Westin, Fried defines privacy as “the control we have over information about ourselves” (1968: 482). Fried also asserts that the reason we value privacy is that relationships with other individuals (characterized by love, friendship and trust, for example) essentially depend on the ability to share certain information which we would not share with those not in the relationship. Privacy thus provides the “means for modulating […] degrees of friendship” (Fried 1968: 485), because we share more information with very close friends than we would with others. Hence, what we really care about is the ability to control the information that we share with others. The right to privacy would protect the ability of an individual to shape meaningful relations with others and would be justified by the importance of relationships such as friendship, love and trust in human life (Fried 1968: 484).

This control can be understood not only as concerning informational privacy (as with Westin), but also in much broader terms. Iris Young writes: “The Private [is] what the individual chooses to withdraw from public view” (1990: 119–120). Control is here conceived as a retreat from visibility in the public eye, as is echoed by Julie Inness:

Privacy is the state of the agent having control over a realm of intimacy, which contains her decisions about intimate access to herself (including intimate informational access) and her decisions about her own intimate actions. (1992: 56)

Since the values ascribed to privacy can differ, access-based approaches are compatible with this justification of the value of privacy in terms of intimacy and closeness. Most control-based approaches, however, justify the value of privacy by citing the enabling of freedom: individual freedom and autonomy are not possible without the protection of a private life (Jean Cohen 2002). Jean Cohen (2002) gives a theoretical defense of a freedom-based view of the right to privacy. She defends a constructivist approach to privacy rights and intimacy, arguing that privacy rights protect personal autonomy, and that a right to privacy is indispensable for a free and autonomous life. In the control-based interpretation, freedom and autonomy are inherently linked with privacy: if I want to control access to myself (or to information about myself), then it is an expression of my freedom to do so. In these approaches, privacy has such a high value because it protects my freedom to do what I want to do at home (within the limits of the law), to control what people know about me (also within limits), and to decide freely about myself and my body (Jean Cohen 2002; Roessler 2004: 71ff). One aspect of the freedom or autonomy of persons is their freedom from unwanted observation and scrutiny. Benn (1984) emphasizes how respect for privacy effectively expresses respect for persons and their personhood. He explains that “[a] man’s view of what he does may be radically altered by having to see it, as it were, through another man’s eyes” (1984: 242). The necessity of adopting a different perspective on one’s own behavior can be seen as a restriction of freedom in the sense of the freedom of self-presentation. The basic idea of these approaches is that the freedom to play different roles, to present oneself in different ways, presupposes that a person, in a given relationship, can hide (keep private) aspects which she does not want to be seen or shared. The other person would see her in a different way if he knew this other side of her. Thus, the freedom to present oneself in this way, in this relationship, would be thwarted. The social freedom to play different roles is dependent on being able to present oneself differently to neighbors, students, or one’s spouse. It is dependent on the protection of privacy (Allen 1988; Goffman 1959 for further discussion; Goffman is an oft-quoted reference in the privacy literature). We need privacy precisely to afford us spaces free from observation and scrutiny, in order to achieve the liberal ideals of freedom—such as the ideal of personal relations, the ideal of “the politically free man”, and the ideal of “the morally autonomous man” (Benn 1984: 234).

Adam Moore (2010) adopts yet another approach, which nonetheless should be mentioned under the heading of control-based approaches. In the explicit tradition of Aristotelian teleology, Moore starts from an account of human nature to explain the value of privacy. Human nature, he argues, allows humans to flourish in a particularly human way. In order to flourish, humans need to develop their rational faculties. This allows them, among other things, to live an autonomous life. Among the necessary favorable external conditions which human beings need in order to flourish are the rights to and norms of privacy (A. Moore 2003).

The control-based approach has led to the suggestion that different dimensions of control can be distinguished, and that these have a direct correlation with different areas of privacy. From a normative standpoint, these dimensions—not spaces—of privacy serve to protect, facilitate, and effectuate access to what is conceived of as private, and to identify a range of different privacy norms that are supposed to protect and enable personal autonomy.

Other authors suggest that the control-access definition of privacy is best understood as a cluster concept covering interests in (a) control over information about oneself; (b) control over access to oneself, both physically and mentally; and (c) control over one’s ability to make important decisions about family and lifestyle, so as to be self-expressive and to develop varied relationships (DeCew 1997).

Roessler (2001 [2005]) suggests that three dimensions of privacy should be identified, namely decisional privacy, informational privacy, and local privacy, meaning the traditional “private sphere”, mostly the home (the place, as opposed to information or actions). The justification for these different dimensions lies, she argues, in their role in protecting personal autonomy. Without the protection of privacy in these three dimensions, an autonomous and well-lived life in a liberal democracy would not be possible. She explains:

The dimension of decisional privacy serves to secure the scope for a subject to make decisions and take action in all his social relations. The dimension of informational privacy serves to secure a horizon of expectations regarding what others know about him that is necessary for his autonomy. The dimension of local privacy serves to protect the possibilities for spatial withdrawal upon which a subject is dependent for the sake of his autonomy. [The] aim is to show what the value of privacy inheres in and in what way the violation of these dimensions of privacy also entails a violation of the individual autonomy of the subject. (Roessler 2001 [2005: 16])

Protection of personal privacy with respect to these three dimensions is also constitutive of social life (Fried 1968), as well as crucial for democratic decision-making procedures (Stahl 2020; A. Roberts 2022).

The three dimensions are anchored in different traditions of the conception of privacy, and have been discussed for many years. It is only in recent years that decisional privacy, or the privacy of actions, has become a specialist term in the literature. Norms of decisional privacy allow for the control of access to one’s decisional sphere. This concerns certain forms of behavior in public, questions of lifestyle, and more fundamental decisions and actions where we may with good reason tell other people that such-and-such a matter is none of their business (see Lanzing 2016; Sax 2018).

A decisive factor in coining the concept of decisional privacy was the ruling of the US Supreme Court in the Roe v. Wade case. As a result of this landmark case, feminist theory has treated sexual freedom of action, the privacy of intimate and sexual acts, and the woman’s right of sexual self-determination as central elements in the theory of privacy (Allen 1988). In the literature on privacy, decisive significance is also given to the privacy of the body (Gatens 1996, 2004). This includes the woman’s newly-won right to conceive of her body as private to the extent that she can decide for herself whether or not to bear a child, and thus enjoy the right of reproductive freedom.

Sexual harassment and sexual orientation are two further central aspects of decisional privacy, both of which concern the link between sexuality, the body, and identity, and are decisive for the societal coding and meaning of privacy. Protection from sexual harassment and the respect for diverse sexual orientations form dimensions of decisional privacy precisely because it is the privacy of the body that is vulnerable to infringement (see Jean Cohen 2002 for a comprehensive discussion). For more on issues linked to power, see §4.

We can distinguish between different aspects of decisional privacy according to their social context, but the argument underlying the claim to protection of such privacy remains structurally the same. If one understands a person’s self-determination and autonomy to consist in the right to be the (part-) author of her own biography, this must mean that within different social contexts she can demand that her decisions and actions are respected (in the sense that they are “none of your business”) by both social convention and state law. The limits to this form of privacy are regulated by convention and are of course subject to constant renegotiation. Yet this sort of respect for a person’s privacy—applicable also to public contexts—is especially relevant for women. (For relevant examples, see Nagel 1998a & 1998b; Allen 1988; Fraser 1996; Gatens 2004). The spectrum of decisional privacy thus extends from reproductive rights to freedom of conduct in public space.

Norms of informational privacy allow people to control who knows what about them. The knowledge other people have about us shapes the ways in which we can present ourselves, and act around others. Informational privacy is thus essentially linked to individual freedom and autonomy since it enables different forms of self-presentation, as well as enabling different forms of social relationship (see Roessler 2001 [2005: 111–141] for more detail).

Debates about informational privacy hearken back to the interpretation of the US Constitution, beginning with the essay written by Samuel Warren and Louis Brandeis. (That essay was written after what they felt was an invasion of privacy by intrusive paparazzi in 1890.) It was in that essay that, for the first time, the right to be let alone was described as a constitutional right to privacy, in the sense that information about a person is worthy of protection even when it involves something that occurs in public (see §1.2).

This form of privacy is relevant, primarily in friendships and love relationships, and serves both as protection of relationships and as protection within relationships. In some theories of privacy, this actually constitutes the very heart of privacy in the form of “relational privacy”, which guarantees opportunities for withdrawal that are constitutive for an authentic life (Fried 1968; Rachels 1975). For further details, see entry on privacy and information technology .

The dimension of local privacy refers to the classic, traditional place of privacy, thought of in terms of its most genuine locus: one’s own home. As we have already seen, this form of local privacy is not derived from a “natural” separation of spheres, but rather from the value of being able to withdraw to within one’s own four walls (see §1 and §2 above).

Traditionally, two different aspects of privacy are of relevance here: solitude and “being-for-oneself” on the one hand, and the protection of family communities and relationships on the other. Firstly, people seek the solitude and isolation provided by the protection of their private dwelling, in order to avoid confrontation with others. This aspect of privacy also comes to the fore in the work of Virginia Woolf and George Orwell, for both of whom the privacy of the room—the privacy to write or think—is a precondition for the possibility of self-discovery and an authentic life (Orwell 1949; Woolf 1929).

Local privacy also offers protection for family relationships. The privacy of the household provides the opportunity for people to deal with one another in a different manner, and to take a break from roles in a way that is not possible when dealing with one another in public. This dimension or sphere of privacy, however, is especially prone to generate the potential for conflict. As has been made clear from previous discussions, this has been a particularly important starting point for feminist criticism. A conflict arises here between traditional conceptions of privacy as constitutive of a loving family haven, which has nothing to do with demands for justice or equal rights (Honneth 2004; contrast with Rawls 1997), and the critical feminist approach (see Okin 1989 & 1991; Young 2004).

4. Contemporary Debates

Contemporary debates on privacy are manifold, lively, and sometimes heated. They turn on a multitude of issues that are not limited to informational privacy, but also include other dimensions of privacy.

Conceptual and normative debates are still pervasive and persistent in the literature, with the role of autonomy and the control-access approach being typical of contemporary discussion. A general discussion of privacy, not focused on one particular aspect but rather presenting the different threats to privacy, and its role and associated tensions in discussions of advancing technology, can be found in several recent comprehensive monographs, for instance Rule (2007), Rotenberg, Scott, and Horwitz (2015), Citron (2022), and Francis and Francis (2017). Focusing on more particular aspects of the normative debates, Koops et al. (2017) have given a very helpful and informative overview of the different possibilities for defining the meaning and value of privacy. They aim at a typology of eight concepts of privacy:

Our analysis led us to structure types of privacy in a two-dimensional mode, consisting of eight basic types of privacy (bodily, intellectual, spatial, decisional, communicational, associational, proprietary, and behavioral privacy), with an overlay of a ninth type (informational privacy) that overlaps, but does not coincide, with the eight basic types. (2017: 483)

They also very helpfully explain the differences between access-based, control-based, and other approaches to the function and value of privacy. Marmor (2015) argues that it has become increasingly difficult to argue for a general interest which privacy is to protect, especially because of the reductionist critique of Thomson (1975; see §2.1). Marmor goes on to claim, however,

that there is a general right to privacy grounded in people’s interest in having a reasonable measure of control over the ways in which they can present themselves (and what is theirs) to others. (2015: 3–4)

Marmor presents an interesting version of a control-based theory of privacy in defending the view that we have a basic interest in having (reasonable) control over evaluations of ourselves, which also means that we have an interest in a predictable environment and a predictable flow of information (see 2015: 25). He is thereby in fact connecting to other control-based approaches (see Jean Cohen 1992, also Goffman 1959) as well as theories of privacy as contextual integrity (see Nissenbaum 2010). Mainz and Uhrenfeldt (2021) also support the control-access approach to privacy, claiming that

there is at least a pro tanto reason to favor the control account of the right to privacy over the access account of the right to privacy. (2021: 287)

In a slightly different manner, Gaukroger (2020: 416) claims that privacy protects us not only when we act in morally right ways, but also when we make use of the “freedom to be bad”—which he defends as a general good. Lundgren, however, disputes the control-access approach because of its fundamental difficulties. Following Parent, Lundgren argues that if we adopt a control-based account, we lose privacy every time we give someone access to whatever it is we want to keep private, which in fact means that we lose more and more of our privacy. Lundgren therefore argues for a “limited access” (Lundgren 2020: 173) conception, criticizing many different forms of control-access accounts (2020: 167 fn 7). (Implicitly following Bloustein’s (1964) plea for the centrality of the concept of human dignity, Floridi (2016) contends that the concept of dignity should be the foundation for interpreting informational privacy, as understood by the GDPR.)

A further development concerns the theory of contextual integrity, as developed by Helen Nissenbaum. Nissenbaum’s framework of Contextual Integrity has won enormous attention as a standard for evaluating flows of personal information, their legitimacy and acceptability, but it has not remained undisputed. On the one hand, Nissenbaum’s paradigm has led to a great number of articles applying it to different societal fields and technology developments. Shvartzshnaider et al. (2019) present “a method for analyzing privacy policies using the framework of contextual integrity (CI)”. The authors claim that this

method allows for the systematized detection of issues with privacy policy statements that hinder readers’ ability to understand and evaluate company data collection practices. (Shvartzshnaider et al. 2019: 162)

Nissenbaum (2019) herself applies the approach of contextual integrity to a complex data network, and argues that the contextual integrity approach is especially able to identify those sources of disruption in novel information practices which should be criticized (2019: 221; see also Nissenbaum 2015). Another interesting example is Shaffer (2021), who applies the Contextual Integrity framework to new technologies for smart cities. Winter and Davidson (2019) apply the approach of contextual integrity to the problem of data governance in health information, since advances in collecting data

also pose significant data governance challenges for ensuring value for individual, organizational, and societal stakeholders as well as individual privacy and autonomy.

They go on to

investigate how forms of data governance were adapted, as PHI [Personal Health Information] data flowed into new use contexts, to address concerns of contextual integrity, which is violated when personal information collected in one use context moves to another use context with different norms of appropriateness. (2019: 36)

On the other hand, some criticism has been raised which argues that Nissenbaum does not actually provide normative standards to judge which information flows should be seen as ethically justified and which as detrimental (Rule 2019). Rule argues that

notions that norms underlying any given domain of human conduct are unambiguous or uncontested simply do not withstand close examination. Indeed, most social norms, particularly in rapidly changing domains of human conduct like privacy practices, are volatile and highly contested. (2019: 260)

In a similar vein, DeCew (2015) questions the idea that the norms governing any given context might fail to meet ethical standards and yet still be defensible on the basis of the integrity of the context and the norms governing it, as in the case of the norms governing the traditional family (DeCew 2015: 215).

Finally, the theory that private data should be seen as private property should be mentioned. This idea originated with Lessig (2002) and has since been discussed by a number of authors. Arguments exist for and against the idea that the agents generating datapoints—the source of the data—have a right to claim those datapoints as private property (Schneider 2021; see also Tufekci 2015; Wu 2015).

Newell et al. (2015) offer a cogent articulation and defense of privacy even when privacy seems to conflict with other important values. Recent technological developments are concerning, with implications for the moral, legal, and social foundations of, and interrelationships between, privacy, security, and accountability. Katell and Moore (2016: 3) make use of a control-based definition of privacy, stating that a right to privacy is “a right to control access to, and uses of, places, bodies, and personal information”. They also write that

the ability to control access to our bodies, capacities, and powers, and to sensitive personal information, is an essential part of human flourishing or well-being. (Katell & Moore 2016: 5)

Kenneth Himma

argues that security is a more important right that always “trumps” privacy, which on his view is not an absolute or fundamental right, but merely “instrumental” to other rights. (Katell & Moore 2016: 12 and Himma 2016; both in A. Moore 2016)

Himma’s defense is based on his view that security is fundamental to survival—our most valuable duty and obligation. In contrast, responding to this view, Adam Moore defends privacy over security with multiple arguments, perhaps the most powerful of which is demonstrating “the importance of privacy as a bulwark against the tyrannical excesses of an unchecked security state” (Katell & Moore 2016: 13 and Moore 2016b). As the authors in this volume note, there is good reason to conclude that privacy, security and accountability are all morally valuable.

Over the course of the last few years (and especially between 2020 and 2023), the conflict between privacy and health, or privacy and safety, has been the topic of much discussion—particularly from the perspective of the COVID-19 pandemic. In an early attempt to develop suitable guidelines, Morley et al. (2020) have postulated that not only should privacy be protected, but also equality and fairness observed in digital contact-tracing. However, such contact-tracing apps clearly bring about a conflict between privacy and the health of people. Throughout the course of the pandemic, such contact-tracing apps were contested (see Bengio et al. 2020; Fahey & Hino 2020).

Schoeman (1984b) has pointed out that the question of whether or not privacy is culturally relative can be interpreted in different ways. One question is whether privacy is deemed valuable by all peoples, or whether its value is relative and subject to cultural differences (see Westin 1967; Rachels 1975; Allen 1988; A. Moore 2003). Another question is whether there are any aspects of life that are inherently private and not just conventionally so. There is also some literature on the different cultures of privacy in different regions of the world. A growing number of articles are concerned with privacy conceptions in China, not only in light of technological developments over the last years (as detailed in Ma 2019 for a relational, Western-Eastern conception of privacy; H. Roberts 2022), but also with regard to earlier periods in Chinese culture, for instance Confucian and Taoist thinking (Whitman 1985; for attitudes towards healthcare, see Chen et al. 2007). Basu (2012) writes about the Indian perception of privacy, based on India’s cultural values, and offers an explanation for why that concept of privacy seems to extend beyond the often-dominant public-private dichotomy. Capurro (2005) deals with intercultural aspects of privacy, particularly with regard to differences between Japanese and Western conceptions (see also Nakada & Tamura 2005). For a perspective on the general Buddhist theory of privacy, see Hongladarom (2016). The ubuntu perspective on privacy is discussed by Reviglio and Alunge (2020).

Theorists have repeatedly pointed to the connection between the protection of privacy, understood as the protection of individual autonomy, and the protection of democracy (see Goold 2010: 38–48). Hughes, for example, speaks of privacy as a “bulwark against totalitarianism” (2015: 228), while Spiros Simitis describes the right to privacy as “a constitutive element of a democratic society” (1987: 732). Probably the best-known advocate of privacy with a view to democracy is Ruth Gavison. She argues that the protection of privacy both supports and encourages the moral autonomy of citizens, which is an essential precondition of democratic societies (Gavison 1980: 455; see also Hughes 2015: 228; Simitis 1987: 732; Solove 2008: 98–100; Schwartz 1999). There can be no democratic self-determination without the protection of privacy. Hence, government intervention for the security of citizens becomes an ideology that threatens democracy when its goal is no longer the freedom of individuals—that is, when citizens are no longer treated by the state as democratic subjects, but as objects.

Mokrosinska emphasizes privacy as a democratic value, thus strengthening that value when it comes into competition with freedom of expression and other political interests. Privacy can facilitate setting aside deep disagreements in order for political engagement in a democracy to proceed. Thus, Mokrosinska proposes a strategy for mediating between privacy and free speech when they collide (Mokrosinska 2015). Lever, in a monograph on privacy and democracy, develops a comprehensive theory and argues that privacy is in different ways essential for democracies and for the freedom, equality, and solidarity of democratic subjects (Lever 2013). From a different perspective, Stahl (2020) discusses ways in which “surveillance of intentionally public activities” should be criticized. Drawing on the theories of contextual integrity (Nissenbaum) and of the public sphere (Habermas), Stahl argues that

strategic surveillance of the public sphere can undermine the capacity of citizens to freely deliberate in public and therefore conflicts with democratic self-determination. (2020: 1)

The democratic value of privacy also plays a central role in Republicanism, since threats to privacy are always also threats to democracy. Republicans point out the value of privacy by contrasting it to liberal theories, which cannot explain the possible threat to privacy—not just the actual interference—as an encroachment on the freedom of subjects. Andrew Roberts (2015: 320) writes that for republicans, because privacy

is a pre-requisite for effective participation in political life, and republicans consider such participation to be the essence of self-government and the means through which a polity can secure conditions of freedom, in a republican democracy individual privacy will be seen as a collective good.

(For a far more elaborate account along similar lines, see A. Roberts 2022; as well as Schwartz 1999)

Group privacy has entered philosophical discussion at a rather late stage, since it was only with the new forms of information and communication technology that groups could be targeted and surveilled efficiently. Research on group privacy addresses the fact that it is not individuals that are targeted by data collection and profiling practices, but rather groups of individuals who share certain relevant features. An important and influential collection of essays is that of Taylor, Floridi, and van der Sloot (2017a). Taylor et al. point out that

profiling and machine learning technologies are directed at the group level and are used to formulate types, not tokens—they work to scale, and enable their users to target the collective as much as the individual. (2017b: 1)

The book discusses divergent perspectives on what a group is, how groups should be addressed with regard to privacy, which elements of the problem can be addressed using current legal and conceptual tools, and which will require new approaches.

Especially interesting is Floridi’s argument that

groups are neither discovered nor invented but designed by the level of abstraction (LoA) at which a specific analysis of a social system is developed. Their design is therefore justified insofar as the purpose, guiding the choice of the LoA, is justified. (Floridi 2017: 83)

Floridi takes the claims that groups have rights and that groups have privacy together in the argument that groups can have rights to privacy, and that indeed sometimes it is only the group that has privacy, and not its members (2017: 83; see also Taylor 2017: 13–36; van der Sloot 2017: 197–224). Loi and Christen (2020) agree with Floridi’s claim that groups can have rights to privacy. However, they argue, against Floridi, for a distinction between two different concepts of privacy for groups:

The first (…) deals with confidential information shared with the member of a group and inaccessible to (all or a specific group of) outsiders. The second (…) deals with the inferences that can be made about a group of people defined by a feature, or combination thereof, shared by all individuals in the group. (Loi & Christen 2020: 207)

Loi and Christen claim that it is the latter, inferential notion of privacy that explains group privacy. An absolute right to this privacy, they conclude, is implausible.

Puri (2021) criticizes the conventional liberal conception of privacy, which focuses excessively on the identification of the individual, as inadequate for safeguarding the individual’s identity and autonomy. Puri therefore develops

a theoretical framework in the form of a triumvirate model of the group right to privacy (GRP), which is based on privacy as a social value… [he also formulates] the concept of mutual or companion privacy, which counterintuitively states that in the age of Big Data Analytics, we have more privacy together rather than individually. (Puri 2021: 477)

Interestingly, Puri (2021) connects the question of group privacy with that of the social dimensions of privacy. However, although both discourses share the critique of individualist conceptions of privacy and are interested in more than the individual’s protection of privacy, the latter tackles a different problem. The upshot of the debates around the social dimensions of privacy is the claim that privacy is not (only) a right or need for individuals, but protects, and ought to protect, relations as well. Not all relations make up a group, and privacy can, in some cases, constitute the very relation it protects, as Roessler and Mokrosinska (2013) argue. James Rachels and Charles Fried both recognized that privacy has a social value: relationships can only be protected when certain norms of privacy apply both within relationships and to relationships (Fried 1968; Rachels 1975). The various norms of informational privacy do not merely regulate the social relationships and roles that we have in life, they actually make them possible. The perspective of the individual seems insufficient to account for many of the concerns raised in debates around privacy-invasive technologies. With ever-greater frequency, privacy-invasive technologies have been argued to endanger not only individual interests, but also to affect society and social life more generally. Therefore, not only the safeguarding of individual freedoms, but also the constitution and regulation of social relationships, are essential to privacy norms (see Roessler & Mokrosinska 2015).

Following Fried (1968) and his theory describing the relational character of the protection of individual privacy, one can argue that, while endorsing norms of informational privacy which should protect individual privacy, this privacy at the same time plays an essential role in social relations. Therefore, the protection of privacy is not always and not necessarily in conflict with societal interests (Roessler & Mokrosinska 2013: 771).

In recent years, several scholars have taken important steps towards developing a social approach to privacy. Arguing that an important aspect of the significance of informational privacy is that it goes beyond the interests of the individuals it protects, these scholars have emphasized the way in which privacy enables social and professional relationships, democratic decision-making processes, and political participation. They have also stressed the necessary role of privacy for cooperation and trust within various associations, such as economic partnerships. Regan, Solove, and Nissenbaum also follow this line of thought. Famously, Regan argues that privacy is not only of value to the individual, but also to society in general. For Regan privacy is a common value, a public value, and also a collective value (1995: 213; see Regan 2015: 50; Hughes 2015). Solove claims that

[by] understanding privacy as shaped by the norms of society, we can better see why privacy should not be understood solely as an individual right… the value of privacy should be understood in terms of its contribution to society. (2008: 98, 171fn)

Solove believes privacy fosters and encourages the moral autonomy of citizens, a central requirement of governance in a democracy. Regan (1995), Solove (2008) and Nissenbaum (2010) took the first steps in analyzing the social dimensions and value of privacy in a democratic society and are now focusing on the role of privacy in political and social practice, law, media and communication, healthcare, and the marketplace.

The social dimensions of privacy also play a crucial role in the social analysis of the family. DeCew and Moore assess the public/private boundary in the family, given that family conventions are among the most crucial for this primary human socialization setting, and also given that structures in the family often are oppressive, as DeCew points out (see MacKinnon 1989).

4.7 Privacy and the Datafication of Daily Life

The issue of the datafication of daily life and its relation to privacy has been discussed from many different angles. One of the first areas of focus was “ambient intelligence” (initially analyzed by Brey 2005), and the “internet of things”, which is still very widely discussed. Wachter, for instance, writes that the

Internet of Things (IoT) requires pervasive collection and linkage of user data to provide personalised experiences based on potentially invasive inferences. Consistent identification of users and devices is necessary for this functionality, which poses risks to user privacy. (2018: 266)

She suggests guidelines for IoT developers which guarantee the protection of privacy in accordance with the GDPR, in cases such as smart homes and data gathered by Siri and Alexa (see also the entries on privacy and information technology and ethics of artificial intelligence and robotics).

Most of us spend a lot of time at work every day. It is the workplace where, in recent years, more and more employers have been observing what their employees are doing. This raises normative problems, such as the consequences of this surveillance for the autonomy of employees, for the relationship between employers and employees, and, of course, the question of how much privacy one can claim at the workplace. A good overview of this topic is provided by Bhave, Teo, and Dalal (2020). Chamorro-Premuzic and Buchband (2020) demonstrate the importance of clear communications with employees explaining the reasons for, and existence of, corporate monitoring programs.

From the perspective of the datafied life, the development of commercial health apps—which have become increasingly commonplace in recent years—is particularly important. They became especially relevant given COVID-19 lockdown restrictions, when people became not only more interested in sporting activities, but also in measuring them and finding motivation through interpreting related data. Commercial health app developers have exploited this, and are often not sufficiently clear about the dangers of data collection for the protection of privacy (Sax 2021). Huckvale, Torous, and Larsen (2019) point to these dangers and explain their nature, as does Mulder (2019), who examines app providers and their marketing statements with respect to the extent to which they actually meet the requirements of the GDPR. A different, but equally important point concerns women’s health data and how the recent decision of the US Supreme Court to overturn Roe v. Wade affects the privacy of the health-data of women (see Cox 2022; the Dobbs v. Jackson Women’s Health Organization decision is also relevant here).

It is often reported that people who freely share details of their private lives on social media nevertheless claim that they regard informational privacy as a very valuable asset, and that the state has an obligation to guarantee the protection of their informational privacy. Observations about the online behavior of young people in particular have led theorists to speak of a “privacy paradox” in recent years (Hargittai & Marwick 2016: 3737). Martin and Nissenbaum (2016), however, argue against a paradox: they present a survey with

questions that insert ranges of values for the respective parameters contextual integrity asserts as fundamental to informational standards. (Martin & Nissenbaum 2016: 217)

The argument from contextual integrity demonstrates that whether people see any given data flow as violating privacy depends on the values for five parameters: sender, recipient, data subject, transmission principle, and type of information. Martin and Nissenbaum argue that if one describes a situation in which information or data flows without specifying values for all the parameters, one is thereby offering an ambiguous description. Their study on “sensitive” information demonstrates that although people consistently rank certain types of information as more or less sensitive, their reactions vary quite significantly when the recipient is specified, e.g., health information to an advertiser versus to a physician. Thus, when people post information on social media, we can draw no conclusions about whether this means they care or do not care about privacy because they may consider it appropriate to share the information in question with friends, though not with advertisers etc. (2016: 217).

Hargittai and Marwick, on the other hand, interpret the privacy paradox differently, arguing that while young adults are aware of the dangers of sharing information online, they feel they have to “acknowledge that individuals exist in social contexts where others can and do violate their privacy” (2016: 3737). Hoffmann, Lutz, and Ranzini (2016) instead explain the endless sharing of personal data, alongside a simultaneous defense of the value of informational privacy, as a kind of “privacy cynicism”, while Draper and Turow analyze this attitude as “resignation” and have recently defended a theoretical framework that conceptualizes “digital resignation as a rational response to consumer surveillance” (Draper & Turow 2019: 1824).

One aspect of mass data collection concerns the possibility of self-observation: one’s own behavior can be measured comprehensively, and every quantifiable detail can be noted. The first such “lifeloggers” were Gordon Bell and Jim Gemmell, who had a grand vision of recording life as a whole, similar to the protagonists in Dave Eggers’ dystopian novel The Circle (Bell & Gemmell 2009 [2010]; see Eggers 2013 and 2021; Lanzing 2016). As a rule, self-trackers use self-observation techniques more selectively: they are mostly concerned with data relevant to their health and well-being, not with all-round self-observation.

Allen points to one of the consequences of living with a life log: she argues that life logs make memory superfluous, calling this activity a “freezing of the past” (Allen 2008: 48) that makes it impossible to change oneself. She further argues that this complete abandonment of privacy (also characteristic of social media) leads to people, especially young adults, becoming less autonomous. Allen therefore argues for a duty to protect one’s own privacy (Allen 2011):

We need to restrain choice—if not by law, then somehow. Respect for privacy rights and the ascription of privacy duties must both be a part of a society’s formative project for shaping citizens… some privacy should not be optional, waivable, or alienable. (Allen 2011: 172)

Alan Westin (1967) surveyed several studies of animals, demonstrating that a desire for privacy is not restricted to humans. One of the more recent contemporary debates on privacy concerns the question of whether animals should have (a right to) privacy. Pepper (2020) defends such a right, since nonhuman animals also have an

interest in shaping different kinds of relationships with one another by giving us control over how we present ourselves to others… which grounds a right against us. (2020: 628; see also Paci et al. 2022)

Research on the relationship between privacy and knowledge—that is, on epistemological issues—is a relative newcomer to the arena of privacy debates. Fallis (2013) offers one of the first discussions of the issue, and criticizes the position that holds that “protecting privacy typically leads to less knowledge being acquired”, as well as the position that “lack of knowledge is definitive of having privacy”. He goes on to argue that

contra the defenders of the knowledge account of privacy, that someone knowing something is not necessary for someone else losing privacy about that thing. (Fallis 2013: 153)

For Kappel, it

seems obvious that informational privacy has an epistemological component; privacy or lack of privacy concerns certain kinds of epistemic relations between a cogniser and sensitive pieces of information. (2013: 179)

In his paper, Kappel aims at

[shedding] some light on the epistemological component of informational privacy [since] the nature of this epistemological component of privacy is only sparsely discussed. (2013: 179; see also the entry on privacy and information technology)

Since the publication of the essay by Warren and Brandeis, there have been enormous technological advances that have radically transformed not only the possibilities for keeping people under surveillance, but also our concepts of privacy, freedom, and autonomy. Opportunities for monitoring people are now available in private households, in public spaces, and when surfing the Internet. The spaces of freedom in the public sphere could not exist under permanent (potential) surveillance and social control, and certainly not if we can no longer be sure what data is being collected and into whose hands it might fall (see Julie Cohen 2008 for more on the relationship between privacy and visibility in the networked information age). If we can no longer be certain that we are not being surveilled and controlled, then we cannot openly and self-determinedly debate potentially critical positions with others (Stahl 2016: 33–39; Rosen 2000: 207). Control and standardization (or normalization) are two sides of the same coin. In the face of increasingly pervasive data collection and digital surveillance, some authors have sought to move away from understandings of privacy built on notions of control over information or secrecy. Brunton and Nissenbaum, for instance, have observed a variety of practices that seek privacy through “obfuscation”, i.e., the “deliberate addition of ambiguous, confusing, or misleading information to interfere with surveillance and data collection” (2015: 1). The German sociologist Carsten Ochs suggests that these obfuscation practices correspond to a new type of informational privacy, one that emerges in response to the specific challenges of a digitalized society and that he characterizes as “concealment” (“Verschleierung”) (2022: 496).

Brunton and Nissenbaum argue that obfuscation is a

tool particularly suited to the “weak”—the situationally disadvantaged, those at the wrong end of asymmetrical power relationships. (2015: 9)

It is useful to address the information asymmetry between those who collect data and can use it to monitor, predict and influence behaviors, and those whose data is collected and who may not even be aware of this data collection and usage (Brunton & Nissenbaum 2015: 48–51). These observations point to the relation between privacy and power, which has received significant attention in the recent literature. The erosion of privacy through data collection and processing increases the power of big technology companies and governments to influence the people whose data is collected (Véliz 2020). Privacy is also a matter of power, insofar as access to privacy is unequally distributed and marginalized groups are more vulnerable than others to privacy invasions. The legal theorist Skinner-Thompson (2021) has documented how minority communities, such as religious, ethnic or sexual minorities, are disproportionally exposed to surveillance due to a lack of legal protections of privacy.

Here we would also like to draw attention to Edward Snowden, a computer intelligence consultant working for the NSA, who in 2013 published highly classified NSA data. These data revealed the global surveillance programs of different states, and the disclosure prompted heated cultural debates in many countries, as well as international academic debates. From the beginning, one of the questions was whether Snowden had actually violated the privacy of individuals, or whether the publication of state secrets should be judged differently than the publication of individual secrets; see Mokrosinska (2020) on the difference between secrecy and privacy. On the general privacy issues involved in this affair, see Greenwald 2014; Rubel 2015; MacNish 2018; Lucas 2014. For the broader societal debates, the Snowden revelations were significant in drawing new attention to structural forms of surveillance and possible violations of privacy.

In analyses of the new “surveillance state”, many authors discuss the different social contexts in which violations of informational privacy may in various ways coincide with restrictions of freedom. Zuboff (2019) analyses and criticizes surveillance capitalism and the “instrumentarian power” of the surveillance state, in which privacy is seen only as data and therefore as a tradable good. At the same time, she criticizes the process of transforming personal life into behavioral data and commercial products. For Zuboff, personal data is not a commodity that should be traded arbitrarily, since it is essentially constituted by human experience (see also Roessler 2015; Wang 2022). Susskind’s discussion of “future power” as the power of “scrutiny” and “perception-control” (2018) is equally critical and pessimistic about the protection of informational privacy in the surveillance state. Similar pessimism is expressed by Harcourt (2015), who conceptualizes the decline and loss of privacy as the loss of the individual, while at the same time calling for forms of civil disobedience in the surveillance state. For the political abuse of data power, see Cadwalladr and Graham-Harrison (2018) on the so-called Cambridge Analytica scandal.

Specific aspects of the surveillance state are highlighted by authors using the concept of data colonialism, which brings out the persistent exploitative power inherent in datafication (Wang 2022). The term “colonialism” is connected to cruel practices of domination and exploitation in human history. As Couldry and Mejias (2019) emphasize, “data colonialism” is not used to describe the physical violence and force that often characterized historical colonialism, but neither is the frame of colonialism used “as a mere metaphor”. Instead, the term refers to “a new form of colonialism distinctive of the twenty-first century” (2019: 337), one that involves a widespread dispossession and exploitation of our human lives. By using the term “data colonialism” instead of “data capitalism”, or some similar formulation, the crueler side of “today’s exposure of daily life to capitalist forces of datafication” is highlighted (2019: 337; see also Thatcher, O’Sullivan, & Mahmoudi 2016: 994).

More specifically, racist structures and colonial heritage play a constitutive role in the conceptualization of privacy. Fraser (1992) analyzed the simultaneous constitution of privacy and the public sphere in a cogent interpretation of the hearings for the confirmation of Clarence Thomas to the US Supreme Court. Race, sex, and class were the determinants of the construction of privacy:

The way the struggle unfolded, moreover, depended at every point on who had the power to successfully and authoritatively define where the line between the public and the private would be drawn. It depended as well on who had the power to police and defend that boundary. (Fraser 1992: 595)

Nagel (1998a and b), meanwhile, is critical of this theory of the interdependence of the concepts of privacy and the public sphere, but the idea that personal privacy is deeply connected with societal and political power relations has played an important role in the literature of the last decade, and research on the racist and colonial constructions of privacy has intensified. Couldry and Mejias (2019) offer a general critique of the relation between big data and the contemporary subject, while Arora (2019) argues more specifically that many assumptions about internet use in developing countries are wrong, since they presuppose a Western conception of that use and a Western idea of privacy in talking about and dealing with the Global South. In her work, she analyses real-life patterns of internet use in different countries (among them India and China) and seeks to answer questions such as why citizens of states with strict surveillance policies seem to care so little about their digital privacy (Arora 2019: 717).

Translating human life into commodified data relations is not new, but rather a fact that has been critically examined by many scholars over the years. For example, Shoshana Zuboff (2019) has commented on the drive toward continually increasing data extraction as the essence of surveillance capitalism. From a different angle, Simone Browne interprets the “condition of blackness” as the key through which state surveillance precisely records and regulates the lives of subjects. Her book shows exactly how state surveillance emerged from, and learned from, the practices of colonial exploitation and slavery. Surveillance, Browne asserts,

is both a discursive and material practice that reifies boundaries, borders, and bodies around racial lines, so much so that the surveillance of blackness has long been, and continues to be, a social and political norm. (2015: 1)

Again from the societal perspective of class and race, Bridges (2017) demonstrates the ways in which the protection of personal privacy is deeply connected to class and race, and how it can be used to exacerbate structures of social inequality. She points out that the poor are subject to invasions of privacy that can be perceived as gross demonstrations of governmental power without limits.

In recent years, discussions have begun regarding the question of what lies “beyond privacy”. Since most big technology companies have been forced to conform to privacy laws, and even advertise their compliance, a significant question is whether a sort of “privacy-washing” is occurring, in which the very focus on privacy undermines the interests that originally motivated those privacy concerns. Projects like “The Privacy Sandbox”, in which Google, together with the advertising-technology industry, developed seemingly privacy-friendly alternatives to third-party cookies, may be seen as cases of “privacy-washing”. Behind these supposedly good intentions lie fundamental questions about the future of privacy, as well as the future of the internet. For example, do we even want an internet that runs on personalized advertisements and massive data collection? Brunton and Nissenbaum are concerned that

information collection takes place in asymmetrical power relationships: we rarely have choice as to whether or not we are monitored, what is done with any information that is gathered, or what is done to us on the basis of conclusions drawn from that information. (2015: 49)

Sharon demonstrates that

in an interesting twist, the tech giants came to be portrayed as greater champions of privacy than some democratic governments. (2021: 45)

Thus, if we look back to the beginnings of informational privacy in the late nineteenth century, we can see that the apparent changes (the corporations themselves now seem to care about privacy) in fact reveal deep continuities in the threats to privacy. The academic and societal debates about these threats, however, make it clear that privacy has by no means lost its value and importance.

  • Allen, Anita L., 1988, Uneasy Access: Privacy for Women in a Free Society , (New Feminist Perspectives Series), Totowa, NJ: Rowman & Littlefield.
  • –––, 1989, “Equality and Private Choice”, review of Reproductive Laws for the 1990s , Nadine Taub and Sherrill Cohen (eds), Nova Law Review , 13(2): 625–648. [ Allen 1989 available online ]
  • –––, 2000, “Privacy-as-Data Control: Conceptual, Practical, and Moral Limits of the Paradigm Commentary”, Connecticut Law Review , 32(3): 861–876.
  • –––, 2008, “Dredging up the Past: Lifelogging, Memory, and Surveillance”, University of Chicago Law Review , 75(1): 47–74 (article 4).
  • –––, 2011, Unpopular Privacy: What Must We Hide? , (Studies in Feminist Philosophy), Oxford/New York: Oxford University Press. doi:10.1093/acprof:oso/9780195141375.001.0001
  • Arendt, Hannah, 1958 [1998], The Human Condition , (Charles R. Walgreen Foundation Lectures), Chicago: University of Chicago Press. Second edition, 1998.
  • Ariès, Philippe, 1960 [1962], L’enfant et la vie familiale sous l’Ancien Régime , (Civilisations d’hier et d’aujourd’hui), Paris: Plon. Translated as Centuries of Childhood: A Social History of Family Life , Robert Baldick (trans.), New York: Vintage Books, 1962.
  • Ariès, Philippe and Georges Duby (general eds), 1985–1987, Histoire de la vie privée , 5 volumes, Paris: Seuil; translated as A History of Private Life , Arthur Goldhammer (trans.), Cambridge, MA: Belknap Press of Harvard University Press:
  • De l’Empire romain à l’an mil , Paul Veyne (ed.), translated as From Pagan Rome to Byzantium , 1992
  • De l’Europe féodale à la Renaissance , Georges Duby (ed.), translated as Revelations of the Medieval World , 1993
  • De la Renaissance aux lumières , Roger Chartier (ed.), translated as Passions of the Renaissance , 1993
  • De la Révolution à la Grande Guerre , Michelle Perrot (ed.), translated as From the Fires of the Revolution to the Great War , 1994
  • De la Première Guerre mondiale à nos jours , Antoine Prost and Gérard Vincent (eds), translated as Riddles of Identity in Modern Times , 1998
  • Arora, Payal, 2019, “General Data Protection Regulation—A Global Standard? Privacy Futures, Digital Activism, and Surveillance Cultures in the Global South”, Surveillance & Society , 17(5): 717–725. doi:10.24908/ss.v17i5.13307
  • Basu, Subhajit, 2012, “Privacy Protection: A Tale of Two Cultures”, Masaryk University Journal of Law and Technology , 6(1): 1–34.
  • Bell, C. Gordon and Jim Gemmell, 2009 [2010], Total Recall: How the E-Memory Revolution Will Change Everything , New York: Dutton. Reprinted as Your Life, Uploaded: The Digital Way to Better Memory, Health, and Productivity , New York: Plume, 2010.
  • Bengio, Yoshua, Richard Janda, Yun William Yu, Daphne Ippolito, Max Jarvie, Dan Pilat, Brooke Struck, Sekoul Krastev, and Abhinav Sharma, 2020, “The Need for Privacy with Public Digital Contact Tracing during the COVID-19 Pandemic”, The Lancet Digital Health , 2(7): e342–e344. doi:10.1016/S2589-7500(20)30133-3
  • Benn, Stanley I., 1984, “Privacy, Freedom, and Respect for Persons”, in Schoeman 1984a: 223–244 (ch. 8). doi:10.1017/CBO9780511625138.009
  • –––, 1988, A Theory of Freedom , Cambridge/New York: Cambridge University Press. doi:10.1017/CBO9780511609114
  • Benn, S. I. and Gerald F. Gaus (eds.), 1983, Public and Private in Social Life , London/New York: Croom Helm/St. Martin’s Press.
  • Bhave, Devasheesh P., Laurel H. Teo, and Reeshad S. Dalal, 2020, “Privacy at Work: A Review and a Research Agenda for a Contested Terrain”, Journal of Management , 46(1): 127–164. doi:10.1177/0149206319878254
  • Bloustein, Edward J., 1964, “Privacy as an Aspect of Human Dignity: An Answer to Dean Prosser”, New York University Law Review , 39(6): 962–1007.
  • Bok, Sissela, 1982, Secrets: On the Ethics of Concealment and Revelation , New York: Pantheon Books.
  • Bork, Robert H., 1990, The Tempting of America: The Political Seduction of the Law , New York: Free Press.
  • Boyd, Danah, 2010, “Social Network Sites as Networked Publics: Affordances, Dynamics, and Implications”, in Zizi Papacharissi (ed.), Networked Self: Identity, Community, and Culture on Social Network Sites , New York: Routledge, pp. 39–58.
  • Brey, Philip, 2005, “Freedom and Privacy in Ambient Intelligence”, Ethics and Information Technology , 7(3): 157–166. doi:10.1007/s10676-006-0005-3
  • Bridges, Khiara M., 2017, The Poverty of Privacy Rights , Stanford, CA: Stanford University Press.
  • Brown, Wendy, 1995, States of Injury: Power and Freedom in Late Modernity , Princeton, NJ: Princeton University Press.
  • Brunton, Finn and Helen Nissenbaum, 2015, Obfuscation: A User’s Guide for Privacy and Protest , Cambridge, MA: MIT Press.
  • Cadwalladr, Carole and Emma Graham-Harrison, 2018, “Revealed: 50 Million Facebook Profiles Harvested for Cambridge Analytica in Major Data Breach”, The Guardian , 17 March 2018, News Section. [ Cadwalladr and Graham-Harrison available online ]
  • Capurro, Rafael, 2005, “Privacy. An Intercultural Perspective”, Ethics and Information Technology , 7(1): 37–47. doi:10.1007/s10676-005-4407-4
  • Chamorro-Premuzic, Tomas and Richard Buchband, 2020, “If You’re Tracking Employee Behavior, Be Transparent About It”, Harvard Business Review , 23 December 2020.
  • Chen, Wei-Ti, Helene Starks, Cheng-Shi Shiu, Karen Fredriksen-Goldsen, Jane Simoni, Fujie Zhang, Cynthia Pearson, and Hongxin Zhao, 2007, “Chinese HIV-Positive Patients and Their Healthcare Providers: Contrasting Confucian Versus Western Notions of Secrecy and Support”, Advances in Nursing Science , 30(4): 329–342. doi:10.1097/01.ANS.0000300182.48854.65
  • Citron, Danielle Keats, 2022, The Fight for Privacy: Protecting Dignity, Identity, and Love in the Digital Age , New York: W.W. Norton & Company.
  • Cohen, Jean L., 1992, “Redescribing Privacy: Identity, Difference, and the Abortion Controversy”, Columbia Journal of Gender and Law , 3(1). doi:10.7916/CJGL.V3I1.2354
  • –––, 2002, Regulating Intimacy: A New Legal Paradigm , Princeton, NJ: Princeton Univ. Press.
  • Cohen, Julie E., 2008, “Privacy, Visibility, Transparency, and Exposure”, University of Chicago Law Review , 75(1): 181–201 (article 4).
  • –––, 2019, “Turning Privacy Inside Out”, Theoretical Inquiries in Law , 20(1): 1–31. doi:10.1515/til-2019-0002
  • Cornell, Drucilla, 1995, The Imaginary Domain: Abortion, Pornography and Sexual Harassment , New York: Routledge.
  • Couldry, Nick and Ulises A. Mejias, 2019, “Data Colonialism: Rethinking Big Data’s Relation to the Contemporary Subject”, Television & New Media , 20(4): 336–349. doi:10.1177/1527476418796632
  • Cox, David, 2022, “How Overturning Roe v Wade Has Eroded Privacy of Personal Data”, BMJ , 378: o2075. doi:10.1136/bmj.o2075
  • Craglia, M. (ed.), de Nigris, S., Gómez-González, E., Gómez, E., Martens, B., Iglesias, M., Vespe, M., Schade, S., Micheli, M., Kotsev, A., Mitton, I., Vesnic-Alujevic, L., Pignatelli, F., Hradec, J., Nativi, S., Sanchez, I., Hamon, R., Junklewitz, H., 2020, “Artificial Intelligence and Digital Transformation: early lessons from the COVID-19 crisis”, EUR 30306 EN , Luxembourg: Publications Office of the European Union. doi:10.2760/166278.
  • Dalla Corte, Lorenzo, 2020, “Safeguarding data protection in an open data world: On the idea of balancing open data and data protection in the development of the smart city environment”, PhD Thesis, Tilburg: Tilburg University.
  • DeCew, Judith Wagner, 1989, “Constitutional Privacy, Judicial Interpretation, and Bowers v. Hardwick”, Social Theory and Practice , 15(3): 285–303. doi:10.5840/soctheorpract198915314
  • –––, 1997, In Pursuit of Privacy: Law, Ethics, and the Rise of Technology , Ithaca, NY: Cornell University Press.
  • –––, 2007, “The Philosophical Foundations of Privacy”, in Encyclopedia of Privacy , William G. Staples (ed.), Westport, CT: Greenwood Press, pp. 404–414.
  • –––, 2012, “Privacy”, in The Routledge Companion to Philosophy of Law , Andrei Marmor (ed.), New York: Routledge, pp. 584–598.
  • –––, 2015, “The Feminist Critique of Privacy: Past Arguments and New Social Understandings”, in Roessler and Mokrosinska 2015: 85–103. doi:10.1017/CBO9781107280557.006
  • –––, 2016, “Privacy and Its Importance with Advancing Technology”, Ohio Northern University Law Review , 42(2): 471–492 (article 4).
  • –––, 2018, “The Conceptual Coherence of Privacy As Developed in Law”, in Core Concepts and Contemporary Issues in Privacy , Ann E. Cudd and Mark C. Navin (eds.), (AMINTAPHIL: The Philosophical Foundations of Law and Justice 8), Cham: Springer International Publishing, pp. 17–30. doi:10.1007/978-3-319-74639-5_2
  • Diggelmann, Oliver and Maria Nicole Cleis, 2014, “How the Right to Privacy Became a Human Right”, Human Rights Law Review , 14(3): 441–458. doi:10.1093/hrlr/ngu014
  • Draper, Nora A and Joseph Turow, 2019, “The Corporate Cultivation of Digital Resignation”, New Media & Society , 21(8): 1824–1839. doi:10.1177/1461444819833331
  • Dworkin, Ronald, 1994, Life’s Dominion: An Argument about Abortion, Euthanasia and Individual Freedom , New York: Vintage Books.
  • Eggers, Dave, 2013, The Circle , London: Penguin Books.
  • –––, 2021, The Every: or At last a sense of order, or The final days of free will, or Limitless choice is killing the world , New York: Vintage Books.
  • Elshtain, Jean Bethke, 1981, Public Man, Private Woman: Women in Social and Political Thought , Princeton, NJ: Princeton University Press.
  • –––, 1995, Democracy on Trial , New York: Basic Books.
  • Etzioni, Amitai, 1999, The Limits of Privacy , New York: Basic Books.
  • –––, 2004, The Common Good , Cambridge: Polity.
  • Fahey, Robert A. and Airo Hino, 2020, “COVID-19, Digital Privacy, and the Social Limits on Data-Focused Public Health Responses”, International Journal of Information Management , 55: article 102181. doi:10.1016/j.ijinfomgt.2020.102181
  • Fallis, Don, 2013, “Privacy and Lack of Knowledge”, Episteme , 10(2): 153–166. doi:10.1017/epi.2013.13
  • Feldman, David, 1997, “The developing scope of Article 8 of the European Convention on Human Rights”, European Human Rights Law Review , 3: 265–274.
  • Floridi, Luciano, 2016, “On Human Dignity as a Foundation for the Right to Privacy”, Philosophy & Technology , 29(4): 307–312. doi:10.1007/s13347-016-0220-8
  • –––, 2017, “Group Privacy: A Defence and an Interpretation”, in Taylor, Floridi, and van der Sloot 2017a: 83–100. doi:10.1007/978-3-319-46608-8_5
  • Francis, Leslie P. and John G. Francis, 2017, Privacy: What Everyone Needs to Know , New York: Oxford University Press.
  • Fraser, Nancy, 1992, “Sex, Lies, and the Public Sphere: Some Reflections on the Confirmation of Clarence Thomas”, Critical Inquiry , 18(3): 595–612. doi:10.1086/448646
  • –––, 1996, “Rethinking the Public Sphere”, in Habermas and the Public Sphere , Craig Calhoun (ed.), Cambridge, MA: MIT Press.
  • Fried, Charles, 1968, “Privacy”, The Yale Law Journal , 77(3): 475–493.
  • –––, 1970, An Anatomy of Values: Problems of Personal and Social Choice , Cambridge, MA: Harvard University Press.
  • Gajda, Amy, 2022, Seek and Hide: The Tangled History of the Right to Privacy , New York: Viking.
  • Gatens, Moira, 1996, Imaginary Bodies: Ethics, Power, and Corporeality , London/New York: Routledge. doi:10.4324/9780203418659
  • –––, 2004, “Privacy and the Body: The Publicity of Affect”, in Rössler 2004: 113–132.
  • Gaukroger, Cressida, 2020, “Privacy and the Importance of ‘Getting Away With It’”, Journal of Moral Philosophy , 17(4): 416–439. doi:10.1163/17455243-20202987
  • Gavison, Ruth, 1980, “Privacy and the Limits of Law”, Yale Law Journal , 89(3): 421–471.
  • Gerstein, Robert S., 1978, “Intimacy and Privacy”, Ethics , 89(1): 76–81. doi:10.1086/292105
  • Geuss, Raymond, 2001, Public Goods, Private Goods , (Princeton Monographs in Philosophy), Princeton, NJ: Princeton University Press.
  • Glancy, Dorothy J., 1979, “The Invention of the Right to Privacy”, Arizona Law Review , 21(1): 1–40.
  • Goffman, Erving, 1959, The Presentation of Self in Everyday Life , revised edition, New York: Anchor Books.
  • Goldenfein, Jake, Ben Green, and Salomé Viljoen, 2020, “Privacy Versus Health Is a False Trade-Off”, Jacobin , 17 April 2020.
  • González Fuster, Gloria, 2014, The Emergence of Personal Data Protection as a Fundamental Right of the EU , Heidelberg: Springer.
  • Goold, Benjamin J., 2010, “How Much Surveillance is Too Much? Some Thoughts on Surveillance, Democracy, and the Political Value of Privacy”, in Overvåking i en rettstat , D. W. Schartum (ed.), Bergen: Fagbokforlaget, pp. 38–48. [ Goold 2010 preprint available online ]
  • Gordon, Harold R., 1960, “Right of Property in Name, Likeness, Personality and History”, Northwestern University Law Review , 55(5): 553–613.
  • Greenwald, Glenn, 2014, No Place to Hide: Edward Snowden, the NSA, and the U.S. Surveillance State , New York: Metropolitan Books/Henry Holt and London: Hamish Hamilton.
  • Gross, Hyman, 1971, “Privacy and Autonomy”, in Privacy: Nomos XIII , Roland Pennock and John W. Chapman (eds.), New York: Atherton Press, pp. 169–81.
  • Harcourt, Bernard E., 2015, Exposed: Desire and Disobedience in the Digital Age , Cambridge, MA: Harvard University Press.
  • Hargittai, Eszter and Alice Marwick, 2016, “‘What Can I Really Do?’ Explaining the Privacy Paradox with Online Apathy”, International Journal of Communication , 10: 3737–3757 (article 21). [ Hargittai and Marwick 2016 available online ]
  • Henkin, Louis, 1974, “Privacy and Autonomy”, Columbia Law Review , 74(8): 1410–1433.

1. Views of data privacy risks, personal data and digital privacy laws

Online privacy is complex, encompassing debates over law enforcement’s data access, government regulation and what information companies can collect. This chapter examines Americans’ perspectives on these issues and highlights how views vary across different groups, particularly by education and age. 

When managing their privacy online, most Americans say they trust themselves to make the right decisions about their personal information (78%), and a majority are skeptical that anything they do will make a difference (61%).

[Bar charts: Most trust themselves to make the right decisions about their personal information online, but a majority are also skeptical anything they do will make a difference]

Far fewer mention being overwhelmed by figuring out what they need to do (37%) or say privacy is not that big of a deal to them (29%).

Another 21% are confident that those with access to their personal information will do what is right.

Education differences

  • 81% of those with at least some college experience say they trust themselves to make the right decisions about their personal information online, compared with 72% of those with a high school diploma or less.
  • 67% of those with at least some college are skeptical that anything they do to manage their online privacy will make a difference, compared with half of those with a high school diploma or less formal education.

On the other hand, those with a high school education or less are more likely than those with some college experience or more to say that privacy isn’t that big of a deal to them and that they are confident that those who have access to their personal information will do the right thing.

Personal data and information

[Chart: About 4 in 10 Americans are very worried about their information being sold or stolen, but this varies by race and ethnicity]

The survey also explores the concerns people have about data collection and security – specifically, how they feel about three scenarios around companies, law enforcement and identity theft.

Roughly four-in-ten Americans say they are very worried about companies selling their information to others without them knowing (42%) or people stealing their identity or personal information (38%). Fewer are apprehensive about law enforcement monitoring what they do online (15%).

Racial and ethnic differences

However, some of these shares are higher among Hispanic, Black or Asian adults:

  • Roughly half of Hispanic, Black or Asian adults are very worried about people stealing their identity or personal information, compared with a third of White adults.
  • About one-in-five of each group are very worried about law enforcement monitoring their online activity; 10% of White adults say this.

Feelings of concern, confusion and a lack of control over one’s data

[Chart: Americans are largely concerned and feel they have little control over or understanding of how companies and the government collect and use data about them]

A majority of Americans say they are concerned about, lack control over and have a limited understanding of how the data collected about them is used. This is true whether it's the government or companies using their data. Similar sentiments were expressed in 2019, when we last asked about this.

Concern is high: 81% say they feel very or somewhat concerned with how companies use the data they collect about them. Fully 71% say the same regarding the government’s use of data.

People don’t feel in control: Roughly three-quarters or more feel they have very little or no control over the data collected about them by companies (73%) or the government (79%).

Understanding is low: Americans also say they don’t understand what these actors are doing with the data collected about them. Majorities say they have very little or no understanding of this, whether by the government (77%) or companies (67%).

Americans are now less knowledgeable than before about how companies are using their personal data. The share who say they don’t understand this has risen from 59% in 2019 to 67% in 2023.

They have also grown more concerned about how the government uses the data it collects about them, with the share expressing concern up from 64% to 71% over this same period.

While these sentiments have not changed significantly since 2019 among Democrats and those who lean toward the Democratic Party, Republicans and GOP leaners have grown more wary of government data collection. Today, 77% of Republicans say they are concerned about how the government uses data it collects about them, up from 63% four years earlier.

[Chart: Growing shares say they don't understand data privacy laws]

Privacy laws and regulation

Americans are less knowledgeable about data privacy laws today than in the past.

Today, 72% of Americans say they have little to no understanding about the laws and regulations that are currently in place to protect their data privacy. This is up from 63% in 2019.

By comparison, the shares who say they understand some or a great deal about these laws decreased from 37% in 2019 to 27% in 2023.
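The 2019-to-2023 shifts quoted in this section can be summarized as percentage-point changes. A minimal Python sketch, using only the figures reported in this chapter (the dictionary keys are shorthand labels, not Pew's exact question wording):

```python
# Percentage-point shifts between the 2019 and 2023 Pew surveys,
# using only the figures quoted in this chapter.
trends = {
    "little/no understanding of how companies use their data": (59, 67),
    "concerned about how the government uses their data": (64, 71),
    "Republicans concerned about government data use": (63, 77),
    "little/no understanding of data privacy laws": (63, 72),
    "understand privacy laws some/a great deal": (37, 27),
}

for item, (y2019, y2023) in trends.items():
    # Signed difference in percentage points between the two survey waves.
    print(f"{item}: {y2019}% -> {y2023}% ({y2023 - y2019:+d} pts)")
```

Note that these are differences in percentage points, not percent changes; the largest single shift reported here is the 14-point rise in Republicans' concern about government data use.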

Americans largely favor more regulation to protect personal information

[Chart: Broad partisan support for more regulation of how consumer data is used]

Overall, 72% say there should be more government regulation of what companies can do with their customers’ personal information. Just 7% say there should be less regulation. Another 18% say it should stay about the same.

Views by political affiliation

There is broad partisan support for greater involvement by the government in regulating consumer data. 

Majorities of both Democrats (78%) and Republicans (68%) say there should be more government regulation of how companies treat users' personal information.

These findings are largely on par with a 2019 Center survey that showed strong support for increased regulations across parties.

Trust in social media executives

Chart: Most Americans don’t trust social media CEOs to handle users’ data responsibly, for example by publicly taking responsibility for mistakes when they misuse or compromise it.

Majorities of Americans say they have little to no trust that leaders of social media companies will publicly admit mistakes regarding consumer data being misused or compromised (77%), that these leaders will not sell users’ personal data to others without their consent (76%), and that leaders would be held accountable by the government if they were to misuse or compromise users’ personal data (71%).

This includes notable shares who have no trust at all in those who are running social media sites. For example, 46% say they have no trust at all in executives of social media companies to not sell users’ data without their consent.

Children’s online privacy: Concerns and responsibility

About 9 in 10 Americans are concerned that social media sites and apps know kids’ personal information

Most Americans say they are concerned about social media sites knowing personal information about children (89%), advertisers using data about what children do online to target ads to them (85%) and online games tracking what children are doing on them (84%).

Concern is widespread, with no statistically significant differences between those with and without children.

Majority of Americans say parents and technology companies should have a great deal of responsibility for protecting children’s online privacy

Another key question is who should be responsible for the actual protection of kids’ online privacy.

Fully 85% say parents bear a great deal of responsibility for protecting children’s online privacy. Roughly six-in-ten say the same about technology companies, and an even smaller share believe the government should have a great deal of responsibility. 

Law enforcement and surveillance

The survey also measured how acceptable Americans think it is for law enforcement to use surveillance tools during criminal investigations.

Older adults are more likely than younger adults to support law enforcement tracking locations, breaking into people’s phones during an investigation

Roughly three-quarters of Americans say it’s very or somewhat acceptable for law enforcement to obtain footage from cameras people install at their residences during a criminal investigation or use information from cellphone towers to track where someone is.

About one-in-ten Americans say they aren’t sure how they feel about law enforcement doing each of these things.

Age differences

Older adults are much more likely than younger adults to say it’s at least somewhat acceptable for law enforcement to take each of these actions in criminal investigations. 

For example, 88% of those 65 and older say it’s acceptable for law enforcement to obtain footage from cameras people install at their residences, compared with 57% of those ages 18 to 29.

In the case of a criminal investigation:

  • White adults are more likely than Hispanic and Black adults to think it’s acceptable for law enforcement to use information from cellphone towers to track people’s locations and to break the passcode on a user’s phone to get access to its contents.
  • White and Hispanic adults are more likely than Black adults to say it’s acceptable to require third parties to turn over users’ private chats, messages or calls.

AI and data collection

Majority of Americans say it’s unacceptable to use AI to determine public assistance eligibility, but views are mixed for smart speakers analyzing voices

Artificial intelligence (AI) can be used to collect and analyze people’s personal information. Some Americans are wary of companies using AI in this way.

Fully 55% of adults say using computer programs to determine who should be eligible for public assistance is unacceptable. Roughly a quarter say it’s an acceptable use of AI.

Roughly half (48%) think it is unacceptable for social media companies to analyze what people do on their sites to deliver personalized content. Still, 41% are supportive of this.

Views are mixed when it comes to smart speakers analyzing people’s voices to learn who is speaking. Statistically equal shares say it’s unacceptable and acceptable (44% and 42%, respectively).

And some Americans – ranging from 10% to 17% – are uncertain about whether these uses are acceptable or not.

  • 49% of adults 50 and older say it’s unacceptable for a smart speaker to analyze people’s voices to learn to recognize who’s speaking. This share drops to four-in-ten among adults under 50.
  • Similarly, 56% of those 50 and older say social media companies analyzing what people do on their sites to deliver personalized content is unacceptable. But 41% of those under 50 say the same.
  • There are no differences between those under 50 and those 50 and older over whether computer programs should be used to determine eligibility for public assistance.

Trust in companies that use AI

Most Americans who have heard of AI don’t trust companies to use it responsibly and say it will lead to unease and unintended uses

In addition to understanding people’s comfort level with certain uses of AI, the survey also measured the public’s attitudes toward companies that are utilizing AI in their products.

Among those who have heard of AI:

  • 70% say they have little to no trust in companies to make responsible decisions about how they use AI in their products.
  • Roughly eight-in-ten say the information will be used in ways people are not comfortable with or that were not originally intended.
  • Views are more mixed regarding the potential that using AI to analyze personal details could make life easier. A majority of those who have heard of AI say this will happen (62%). Regarding differences by age, adults under 50 are more optimistic than those 50 and older (70% vs. 54%). 
  • 87% of those with a college degree or higher say companies will use AI to analyze personal details in ways people would not be comfortable with. Some 82% of those with some college experience and 74% with a high school diploma or less say the same.
  • 88% of those with a bachelor’s degree or more say companies will use this information in ways that were not originally intended. This share drops to 80% among those with some college experience and 71% among those with a high school diploma or less.
  • About three-quarters of those with a college degree or more (74%) say this information will be used in ways that could make people’s lives easier. But this share drops to 60% among those with some college experience and 52% among those with a high school diploma or less.
  • This survey includes a total sample size of 364 Asian adults. The sample primarily includes English-speaking Asian adults and, therefore, it may not be representative of the overall Asian adult population. Despite this limitation, it is important to report the views of Asian adults on the topics in this study. As always, Asian adults’ responses are incorporated into the general population figures throughout this report. Asian adults are shown as a separate group when the question was asked of the full sample. Because of the relatively small sample size and a reduction in precision due to weighting, results are not shown separately for Asian adults for questions that were only asked of a random half of respondents (Form 1/Form 2).
  • Half of respondents were asked the questions above, and the other half received the same questions with the added context of it being a “criminal investigation where public safety is at risk.” Differences in response were largely modest. See Appendix A for these findings.


About Pew Research Center Pew Research Center is a nonpartisan fact tank that informs the public about the issues, attitudes and trends shaping the world. It conducts public opinion polling, demographic research, media content analysis and other empirical social science research. Pew Research Center does not take policy positions. It is a subsidiary of The Pew Charitable Trusts .

National Security Versus Personal Privacy Essay (Critical Writing)

There are many critiques of the way protection is provided to American citizens. The contemporary debate among US security stakeholders is whether national security or the personal privacy of US citizens comes first. Balancing these two contrasting needs inevitably requires sacrificing elements of each.

Many critics argue that sacrificing personal privacy directly violates the liberty of Americans (Noble, 2013). Those who oppose this view, on the other hand, trace the events of September 11, 2001 to a breach of security enabled by the high degree of personal privacy then prevailing in the country. This essay argues that national security should be given greater priority than personal privacy.

Policymakers have a role to play in ensuring that efficient systems are put in place to enhance national security. The question is how much security an ordinary American needs and how much liberty should be extended to that same person. Of course, living in a country with full liberty in terms of personal privacy but a poor security system makes little sense.

With the security systems instituted by US security agencies, drastic changes have taken place that affect the personal privacy of American citizens, with the aim of countering terrorism in the US and the world at large. Noble (2013) observes that most policymakers worldwide have been compelled to put forward motions to design and install superior security systems that provide the utmost protection.

The fight against terrorism requires better tools and mechanisms. Zero tolerance for terror and the prioritization of lives call for supporting national security over personal privacy (O’Harrow Jr, 1999). For a defender of national surveillance, national security translates into saving the lives of US citizens.

This position has been supported by Feinstein, who justifies the installation of a US government surveillance system by asserting that the events of September 11 could have been averted if the National Security Agency had had sufficient resources for the mass collection of metadata on people in the US at the time (Denenberg, 2013). Every good comes with a price.

If Feinstein’s intuitions are correct, US citizens must pay a small price in privacy, including some degree of their freedom of speech. In addition, the government has so far secretly installed security systems to test their feasibility as mechanisms of counterterrorism.

In fact, most US citizens were not aware of the presence of such systems until the enactment of laws justifying their use and their systematic but secret erosion of personal privacy (O’Harrow Jr, 1999). The definition of terrorism remains controversial with regard to the US government’s participation in the war on terror in countries like Afghanistan and Iraq.

However, various security measures that entail the violation of personal privacy, and hence of civil liberties, in order to prevent future terrorist attempts in the US have so far proved successful (Denenberg, 2013). These practices have been adopted in many first-world countries, although in a less elaborate manner than in the US. The US intelligence agencies have blended their systems and services into a global web in order to offer comprehensive global surveillance.

Terrorist attacks in the US show that national security issues should be addressed urgently. In many instances, the government provides non-classified information about its security systems and the extent to which they infringe the liberty of expression, but critics overstate what is presented (O’Harrow Jr, 1999).

For instance, in balancing the personal rights of American citizens against national security systems, government intelligence agencies protect all citizens from terrorist threats and actions out of regard for the value of security. In this respect, national security treats some liberties as a necessary sacrifice in order to offer its citizens maximum protection.

In the last twenty years, the US has faced two terror bombings, the first taking place in Atlanta during the 1996 Olympic Games, and the second early in the new millennium, which killed around three thousand American citizens (Denenberg, 2013).

George Bush instituted the “War on Terror” program to eradicate terrorism from the face of the earth, and incidents of terrorism have since declined globally. These attacks demonstrate the vulnerability of the US to terrorism and call for drastic countermeasures, including the sacrifice of some personal liberties of speech.

Enhanced levels of personal privacy affect national security negatively. Given the heightened awareness of the possibility of terror in the US and the growing globalization of movements of people and materials across US borders, the primary solution to the national security problem is to compromise personal privacy and institute security systems able to collect metadata on everyone within the boundaries of the US.

The compromise of personal privacy extends beyond the bounds of the US, with direct monitoring of international visitors and identification of potential indicators of terrorism before the actual act takes place (Noble, 2013). Increased personal privacy, combined with current technological advances in the global community, undermines the integrity of an already weak security system.

This directly affects the level of intelligence the agencies concerned can acquire. The compromise of personal privacy, with the profiling and filtering of high-priority metadata, therefore plays a crucial role in making the US a safe country to live in.

Leakage of information and advancements in technology are threats to the national security. In the past decade, technological advances have taken a new turn, which has also trickled down to the enhancement of security systems technologically.

Given the prevailing dissemination of computer viruses, cyber-terrorism, and theft and fraud through hacking and the acquisition of personal identification information, citizens need the government to develop systems able to “sneak” into the privacy of their lives and gather intelligence for the purpose of safeguarding national security. The US government’s installation of electronic surveillance systems has led to a recent decline in cybercrime.

The Federal Intrusion Detection Network (FIDNet), instituted under Bill Clinton in the late 1990s, has protected government computer systems, helped catch hackers and terrorists who break into them, and shielded computers from viruses and worms. Through continued use of the system, the Federal Bureau of Investigation, in conjunction with the Central Intelligence Agency, draws on its critics to balance the effort to preserve national security against the need to maintain personal privacy.

In fact, this is an essential approach because it ensures that both personal and national security concerns are addressed. Although there have been dozens of leaks painting a dark picture of the NSA, CIA and FBI and their violation of some liberties and rights of average US citizens, these violations have been shown to fall within constitutional boundaries.

In addition, the security systems developed and installed by the NSA to monitor emails sent within and outside the US have not been shown to be misused by the country’s security agencies. The freedom of expression enjoyed by the media, and the alarm it creates among the general public, has spread fear of domestic spying among Americans (Noble, 2013).

One might think that the US is immune to terrorism, but the attacks carried out with such ease early in the twenty-first century clearly demonstrate the vulnerability of the United States. Moreover, advances in nanotechnology and nuclear technology pose a threat to US security as other first-world countries seek superiority in warfare, economy and technology.

International bodies also play a great role in ensuring national safety. ECHELON, the most powerful international intelligence-gathering organization, is run by the intelligence agencies of five nations: the United Kingdom, the US, Canada, New Zealand and Australia. The system is believed to intercept satellite-based communications.

This system is clear evidence of a balance between national security at the international level and personal privacy. Although it can violate some civil liberties, ECHELON can be seen as a security system aimed at averting security breaches and terrorism. In conclusion, making clear to US citizens the importance of national security would lead them to accept the installation of security systems that promote it.

Denenberg, S. (2013). What Is More Important: Our Privacy or National Security? Web.

Noble, J. (2013). U.S. debates security vs. privacy 12 years after 9/11 . Web.

O’Harrow Jr, R. (1999). Computer Security Proposal is Revised: Critics Had Raised Online Privacy Fears . Web.



Privacy in an AI Era: How Do We Protect Our Personal Information?

A new report analyzes the risks of AI and offers potential solutions. 

Abstract personal private information security technology illustration

The AI boom, including the advent of large language models (LLMs) and their associated chatbots, poses new challenges for privacy. Is our personal information part of a model’s training data? Are our prompts being shared with law enforcement? Will chatbots connect diverse threads from our online lives and output them to anyone? 

To better understand these threats and to wrestle with potential solutions, Jennifer King , privacy and data policy fellow at the Stanford University Institute for Human-Centered Artificial Intelligence (Stanford HAI), and Caroline Meinhardt, Stanford HAI’s policy research manager, published a white paper titled “ Rethinking Privacy in the AI Era: Policy Provocations for a Data-Centric World .” Here, King describes their main findings.

What kinds of risks do we face, as our data is being bought and sold and used by AI systems?

First, AI systems pose many of the same privacy risks we’ve been facing during the past decades of internet commercialization and mostly unrestrained data collection. The difference is the scale: AI systems are so data-hungry and opaque that we have even less control over what information about us is collected, what it is used for, and how we might correct or remove such personal information. Today, it is basically impossible for people using online products or services to escape systematic digital surveillance across most facets of life—and AI may make matters even worse.

Second, there's the risk of others using our data and AI tools for anti-social purposes. For example, generative AI tools trained with data scraped from the internet may memorize personal information about people, as well as relational data about their family and friends. This data helps enable spear-phishing—the deliberate targeting of people for purposes of identity theft or fraud. Already, bad actors are using AI voice cloning to impersonate people and then extort them over good old-fashioned phones.

Third, we’re seeing data such as a resume or photograph that we’ve shared or posted for one purpose being repurposed for training AI systems, often without our knowledge or consent and sometimes with direct civil rights implications.

Predictive systems are being used to help screen candidates and help employers decide whom to interview for open jobs. However, there have been instances where the AI used to help with selecting candidates has been biased. For example, Amazon famously built its own AI hiring screening tool only to discover that it was biased against female hires.   

Another example involves the use of facial recognition to identify and apprehend people who have committed crimes. It’s easy to think, “It's good to have a tool like facial recognition because it'll catch the bad guys.” But instead, because of the bias inherent in the data used to train existing facial recognition algorithms, we're seeing numerous false arrests of black men . The algorithms simply misidentify them. 

Have we become so numb to the idea that companies are taking all our data that it’s now too late to do anything?

I’m an optimist. There's certainly a lot of data that's been collected about all of us, but that doesn't mean we can't still create a much stronger regulatory system that requires users to opt in to their data being collected or forces companies to delete data when it’s being misused.

Currently, practically any place you go online, your movement across different websites is being tracked. And if you're using a mobile app and you have GPS enabled on your phone, your location data is being collected. This default is the result of the industry convincing the Federal Trade Commission about 20 years ago that if we switched from opt-out to opt-in data collection, we'd never have a commercial internet. At this point I think we've established the utility of the internet. I don't think companies need that excuse for collecting people’s data.  

In my view, when I’m browsing online, my data should not be collected unless or until I make some affirmative choice, like signing up for the service or creating an account. And even then, my data shouldn’t be considered public unless I’ve agreed to share it.

Ten years ago, most people thought about data privacy in terms of online shopping. They thought, “I don't know if I care if these companies know what I buy and what I'm looking for, because sometimes it's helpful.” But now we've seen companies shift to this ubiquitous data collection that trains AI systems, which can have major impact across society, especially our civil rights. I don’t think it’s too late to roll things back. These default rules and practices aren’t etched in stone.

As a general approach to data privacy protection, why isn’t it enough to pass data minimization and purpose limitation regulations that say companies can only gather the data they need for a limited purpose? 

These types of rules are critical and necessary. They play a key role in the European privacy law [the GDPR ] and in the California equivalent [the CPPA ] and are an important part of the federally proposed privacy law [the ADPPA ]. But I’m concerned about the way regulators end up operationalizing these rules. 

For example, how does a regulator make the assessment that a company has collected too much information for the purpose for which it wants to use it? In some instances, it could be clear that a company completely overreached by collecting data it didn’t need. But it’s a more difficult question when companies (think Amazon or Google) can realistically say that they do a lot of different things, meaning they can justify collecting a lot of data. It's not an insurmountable problem with these rules, but it’s a real issue.

Your white paper identifies several possible solutions to the data privacy problems posed by AI. First, you propose a shift from opt-out to opt-in data sharing, which could be made more seamless using software. How would that work?

I would argue that the default should be that our data is not collected unless we affirmatively ask for it to be collected. There have been a few movements and tech solutions in that direction.

One is Apple’s App Tracking Transparency (Apple ATT), which Apple launched in 2021 to address concerns about how much user data was being collected by third-party apps. Now, when iPhone users download a new app, Apple’s iOS system asks if they want to allow the app to track them across other apps and websites. Marketing industry reports estimate that 80% to 90% of people presented with that choice say no.  

Another option is for web browsers to have a built-in opt-out signal, such as Global Privacy Control , that prevents the placement of cookies by third parties or the sale of individuals’ data without the need to check a box. Currently, the California Privacy Protection Act (CPPA) provides that browsers may include this capability, but it has not been mandatory. And while some browsers (Firefox and Brave, for example) have a built-in opt-out signal, the big browser companies (such as Microsoft Edge, Apple’s Safari, and Google Chrome) do not. Interestingly though, a California legislator recently proposed a change to the CPPA that would require all browser makers to respect third-party opt-out signals. This is exactly what we need so that data is not collected by every actor possible and every place you go.
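As a rough illustration of how the signal works on the receiving end (not something from the interview itself): under the Global Privacy Control proposal, participating browsers attach the HTTP request header `Sec-GPC: 1` to outgoing requests, and a server that wants to honor the opt-out can check for it. The function name and surrounding flow here are hypothetical.

```python
def honors_gpc(headers: dict) -> bool:
    """Return True if the request carries a Global Privacy Control signal.

    Per the GPC proposal, browsers with the signal enabled send the
    request header `Sec-GPC: 1`; its absence (or any other value)
    means no opt-out preference was expressed.
    """
    # HTTP header names are case-insensitive, so normalize keys first.
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("sec-gpc") == "1"


# A server honoring the signal might then skip third-party cookie
# placement or data-sale flows for this request.
print(honors_gpc({"Sec-GPC": "1"}))     # → True
print(honors_gpc({"User-Agent": "x"}))  # → False
```

GPC’s design choice is the point King makes in prose: one browser-level setting replaces thousands of per-site checkbox decisions.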

You also propose taking a supply chain approach to data privacy. What do you envision that would mean?

When I’m talking about the data supply chain, I’m talking about the ways that AI systems raise issues on the data input side and the data output side. On the input side I’m referring to the training data piece, which is where we worry about whether an individual’s personal information is being scraped from the internet and included in a system’s training data. In turn, the presence of our personal information in the training set potentially has an influence on the output side. For example, a generative AI system might have memorized my personally identifiable information and provide it as output. Or, a generative AI system could reveal something about me that is based on an inference from multiple data points that aren’t otherwise known or connected and are unrelated to any personally identifiable information in the training dataset.

At present, we depend on the AI companies to remove personal information from their training data or to set guardrails that prevent personal information from coming out on the output side. And that’s not really an acceptable situation, because we are dependent on them choosing to do the right thing.
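To make the “output-side guardrail” idea concrete, here is a deliberately naive sketch of pattern-based redaction applied to model output. This is purely illustrative and not how any vendor actually does it; production systems rely on far more robust PII detection than two regular expressions.

```python
import re

# Toy patterns for two common PII types. Real detectors handle many
# more formats, locales, and obfuscations than these.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")


def redact_pii(text: str) -> str:
    """Mask email addresses and US-style phone numbers in model output."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)


print(redact_pii("Contact jane@example.com or 555-123-4567."))
# → Contact [EMAIL] or [PHONE].
```

Even this sketch shows why King calls the current situation unacceptable: the filter runs only if the company chooses to deploy it, and anything its patterns miss flows straight to the user.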

Regulating AI requires paying specific attention to the entire supply chain for the data piece—not just to protect our privacy, but also to avoid bias and improve AI models. Unfortunately, some of the discussions that we've had about regulating AI in the United States haven't been dealing with the data at all. We’ve been focused on transparency requirements around the purpose of companies’ algorithmic systems. Even the AI Act in Europe, which already has the GDPR as a privacy baseline, didn’t take a broad look at the data ecosystem that feeds AI. It was only mentioned in the context of high-risk AI systems. So, this is an area where there is a lot of work to do if we’re going to have any sense that our personal information is protected from inclusion in AI systems, including very large systems such as foundation models.  

You note in your report that the focus on individual privacy rights is too limited and we need to consider collective solutions. What do you mean?

If we want to give people more control over their data in a context where huge amounts of data are being generated and collected, it’s clear to me that doubling down on individual rights isn't sufficient.

In California where we have a data privacy law, most of us don’t even know what rights we do have, let alone the time to figure out how to exercise them. And if we did want to exercise them, we’d have to make individual requests to every company we’ve interacted with to demand that they not sell our personal information—requests that we’d have to make every two years, given that these “do not sell” opt-outs are not permanent.  

This all points toward the need for a collective solution so that the public has enough leverage to negotiate for their data rights at scale. To me, the concept of a data intermediary makes the most sense. It involves delegating the negotiating power over your data rights to a collective that does the work for you, which gives consumers more leverage.

We're already seeing data intermediaries take shape in some business-to-business contexts and they can take various forms , such as a data steward, trust, cooperative, collaborative, or commons. Implementing these in the consumer space would be more challenging, but I don't think it's impossible by any means.

Read the full white paper, “ Rethinking Privacy in the AI Era: Policy Provocations for a Data-Centric World .”

Stanford HAI’s mission is to advance AI research, education, policy and practice to improve the human condition.  Learn more . 

More News Topics


Personal privacy vs. public security


Personal privacy is a fairly new concept. Most people used to live in tight-knit communities, constantly enmeshed in each other’s lives. The notion that privacy is an important part of personal security is even newer, and often contested, while the need for public security — walls which must be guarded, doors which must be kept locked — is undisputed. Even anti-state anarchists concede the existence of violent enemies and monsters.

Rich people can afford their own high walls and closed doors. Privacy has long been a luxury, and it’s still often treated that way: a disposable asset, a nice-to-have, not essential. Reinforcing that attitude is the fact that it’s surprisingly easy, even instinctive, for human beings to live in a small community — anything below Dunbar’s Number — with very little privacy. Even I, a card-carrying semi-misanthropic introvert, have done that for months at a stretch and found it unexpectedly, disconcertingly natural.

And so when technological security is treated as a trade-off between public security and privacy, as it almost always is these days, the primacy of the former is accepted. Consider the constant demands for “golden key” back doors so that governments can access encrypted phones which are “going dark.” Opponents of such back doors focus on the fact that any such system will inevitably be vulnerable to bad actors — hackers, stalkers, “evil maids.” Few dare suggest that, even if a perfect magical golden key with no vulnerabilities existed, one which could only be used by government officials within their official remit, the question of whether it should be implemented would still be morally complex.

Consider license plate readers that soon enough will probably track the locations of most cars in California in near-real-time with remarkable precision. Consider how the Golden State Killer was identified, by trawling through public genetic data to look for family matches; as FiveThirtyEight puts it, “you can’t opt out of sharing your data, even if you didn’t opt in” any more. Which would be basically fine, as long as we can guarantee hackers don’t get their hands on that data, right? Public security — catching criminals, preventing terror attacks — is far more important than personal privacy. Right?

Consider too corporate security, which, like public security, is inevitably assumed to be far more important than personal privacy. Until recently, Signal, the world’s premier private messaging app, used a technical trick known as “domain fronting” on Google and Amazon web services to provide access in countries which had tried to ban it — until this month, when Google disabled domain fronting and Amazon threatened termination of their AWS account, because the privacy of vulnerable populations is not important to them. Consider Facebook’s countless subtle assaults on personal privacy, in the name of connecting people, which happens to be how Facebook becomes ever stronger and more inescapable, while maintaining much stronger controls for its own employees and data.
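The trick behind domain fronting is worth spelling out: the layers a censor can inspect (the DNS lookup and the TLS SNI field) name an innocuous “front” domain, while the HTTP Host header, hidden inside the encrypted tunnel, names the blocked service the CDN should actually route to. A minimal sketch, using hypothetical hostnames rather than Signal’s actual configuration:

```python
# Sketch of a domain-fronted HTTP request (hypothetical hostnames).
# The censor-visible layers (DNS, TLS SNI) carry only FRONT_DOMAIN;
# the Host header, encrypted inside the TLS tunnel, names the blocked
# service that a fronting-friendly CDN would route the request to.
FRONT_DOMAIN = "cdn-frontend.example.com"    # what the censor sees
HIDDEN_HOST = "blocked-service.example.net"  # where the CDN routes

def build_fronted_request(path: str = "/") -> tuple[str, str]:
    """Return (inner HTTP request text, censor-visible SNI hostname)."""
    request = (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {HIDDEN_HOST}\r\n"   # only visible after TLS decryption
        f"Connection: close\r\n"
        f"\r\n"
    )
    return request, FRONT_DOMAIN

inner_request, visible_sni = build_fronted_request()
```

The point of the mismatch is that blocking the hidden service then requires blocking the front domain too, with all its collateral damage — which is why the decisions by Google and Amazon to disallow fronting on their infrastructure mattered so much.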

But even strict corporate secrecy just reinforces the notion that privacy is a luxury for the rich and powerful, an inessential. It wouldn’t make that much difference if Amazon or Facebook or Google or even Apple were to open up their books and their roadmaps. Similarly, it won’t make that much difference if ordinary people have to give up their privacy in the name of public security, right? Living in communities where everyone knows one another’s business is natural, and arguably healthier than the disjoint dysfunction of, say, an apartment building whose dozens of inhabitants don’t even know each other’s names. Public security is essential; privacy is nice-to-have.

…Except.

…Except this dichotomy between “personal privacy” and “public security,” all too often promulgated by people who should know better, is completely false, a classic motte-and-bailey argument in bad faith. When we talk about “personal privacy” in the context of phone data, or license plate readers, or genetic data, or encrypted messaging, we’re not talking about anything even remotely like our instinctive human understanding of “privacy,” that of a luxury for the rich, inessential for people in healthy close-knit communities. Instead we’re talking about the collection and use of personal data at scale; governments and corporations accumulating massive amounts of highly personal information from billions of people.

This accumulation of data is, in and of itself, not a “personal privacy” issue, but a massive public security problem.

At least three problems, in fact. One is that the lack of privacy has a chilling effect on dissidence and original thought. Private spaces are the experimental petri dishes for societies. If you know your every move can be watched, and your every communication can be monitored, so private spaces effectively don’t exist, you’re much less likely to experiment with anything edgy or controversial; and in this era of cameras everywhere, facial recognition, gait recognition, license plate readers, Stingrays, etc., your every move can be watched.

If you don’t like the ethos of your tiny community, you can move to another one whose ethos you do like, but it’s a whole lot harder to change nation-states. Remember when marijuana and homosexuality were illegal in the West? (As they still are, in many places.) Would that have changed if ubiquitous surveillance and at-scale enforcement of those laws had been possible, back then? Are we so certain that all of our laws are perfect and just today, and that we will respond to new technologies by immediately regulating them with farsighted wisdom? I’m not. I’m anything but.

A second problem is that privacy eradication for the masses, coupled with privacy for the rich, will, as always, help to perpetuate status-quo laws / standards / establishments, and encourage parasitism, corruption, and crony capitalism. Cardinal Richelieu famously said, “If one would give me six lines written by the hand of the most honest man, I would find something in them to have him hanged.” Imagine how much easier it gets if the establishment has access to everything any dissident has ever said and done, while maintaining their own privacy. How long before “anti-terrorism” privacy eradication becomes “selective enforcement of unjust laws” becomes “de facto ‘oppo research’ unleashed on anyone who challenges the status quo”?

A third problem is that technology keeps getting better and better at manipulating the public based on their private data. Do you think ads are bad now? Once AIs start optimizing the advertising → behavior → data feedback loop, you may well like the ads you see, probably on a primal, mammalian, limbic level. Proponents argue that this is obviously better than disliking them. But the propaganda → behavior → data loop is no different from advertising → behavior → data, and no less subject to “optimization.”

When accumulated private data can be used to manipulate public opinion on a massive scale, privacy is no longer a personal luxury. When the rich establishment can use asymmetric privacy to discredit dissidents while remaining opaque themselves, privacy is no longer a personal luxury. When constant surveillance, or the threat thereof, systematically chills and dissuades people from experimenting with new ideas and expressing contentious thoughts, privacy is no longer a personal luxury. And that, I fear, is the world we may live in soon enough, if we don’t already.


Ultius Blog

Sample Essay on Online Privacy


The advent of the Internet age has been characterized by an unprecedented proliferation of communication and availability of information. The dark side of these developments, however, is that they raise questions pertaining to the ethical value of privacy. Whether through deliberate efforts by stakeholders to access confidential information or through simple ignorance on the part of Internet users, the enormous resources of the Internet, used improperly, could seriously threaten the right to individual privacy.

The present sample essay will explore some of the issues that have emerged around this subject in recent times. In particular, it will discuss the National Security Agency (NSA), Snowden, Facebook, and the value of transparency. A key theme that will emerge over the course of the exposition is that, in general, Americans believe that whereas they have a right to personal privacy, organizations have a duty to respect the value of transparency.

The NSA and Snowden

To start with, then, the NSA has been in the news recently as a result of the emergence of evidence that the organization had been unlawfully spying on the communications of Americans. As the Electronic Frontier Foundation has written:

"Secret government documents, published by the media in 2013, confirm the NSA obtains full copies of everything that is carried along major domestic fiber optic cable networks" (paragraph 4).

Such operations were conducted as part of a surveillance program known as Prism. The British newspaper The Guardian was instrumental in reporting on this issue as the story emerged. For example, in a news article from 2013, Greenwald and MacAskill have written:

"The Guardian has verified the authenticity of the document, a 41-slide PowerPoint presentation—classified as top secret with no distribution to foreign allies—which was apparently used to train intelligence operatives on the capabilities of the program" (paragraph 3).

In any event, it is by now indisputable that the NSA has been spying on American citizens.

From a historical perspective, the activities of the NSA can be understood in relation to the aftermath of the 9/11 terrorist attacks of 2001. During this time, for example, the Patriot Act was passed, which essentially consisted of provisions that infringed on civil liberties in the name of national security (see Library of Congress). The idea was that in a time of national crisis, individuals should be willing to compromise on some of their individual rights for the sake of the well-being of the broader community as a whole.

This would include the right to private communication, insofar as such privacy could potentially undermine the security of the nation. According to the Electronic Frontier Foundation, it was precisely the provisions of legislation such as the Patriot Act that the NSA used to justify its activities; and rumors of unlawful spying had circulated since at least 2005. It was only recently, though, that hard evidence of these dubious activities emerged.

The role of Edward Snowden in regard to online privacy

Edward Snowden was the man primarily responsible for bringing the domestic surveillance activities of the NSA to the light of the public eye. Not only did the documents leaked by Snowden reveal that the NSA had in fact been engaging in domestic surveillance, they also revealed that it was primarily ordinary Americans, with no connection to any kind of investigation, who were getting caught up in the surveillance. As Gellman, Tate, and Soltani have put it:

"Ordinary Internet users, American and non-American alike, far outnumber legally targeted foreigners in the communications intercepted by the National Security Agency from U.S. digital networks. . . . The daily lives of more than 10,000 account holders who were not targeted are catalogued and recorded nevertheless" (paragraphs 1 and 7).

In other words, the NSA was found to have violated the online privacy of Americans not simply within the context of legitimate investigations but rather as a matter of course, as if the value of individual privacy were no longer even a relevant factor to take into consideration when planning surveillance actions. 

Such revelations have made a significant impact on Americans' perceptions of online privacy. For example, Madden, writing on behalf of the Pew Internet Research Project, has delineated some key statistics regarding these perceptions; among other things, it has been found that

"81% feel not very or not at all secure using social media sites when they want to share private information with another trusted person or organization," and "57% feel insecure sending private information via email" (paragraph 7).

Moreover, correlations were found between increased levels of insecurity on the one hand and greater awareness of the NSA's surveillance program on the other. In short, the events surrounding NSA and Snowden have had a strong effect on the popular culture of electronic communication within the United States. A strong majority of people now seem to take it almost for granted that some stakeholder or another is illicitly monitoring their private communications, and that they must take steps in order to protect themselves from such invasions of personal privacy. The general mindset within the nation regarding online privacy is thus marked by a very high level of suspicion and mistrust.  

The case of Facebook and online privacy

At this point in the discussion, it may be useful to turn attention to a specific forum for online communication: Facebook. Despite denials from the company's founder, there seems to be a significant popular perception that Facebook did in fact collude with the NSA and gave up the personal information of its users to the government. Whether or not this is true, it is rather revealing of the level of trust that Facebook users have in the integrity of the company. According to Debatin, Lovejoy, Horn, and Hughes, many Facebook users exhibit an ambivalent behavioral pattern in which they both report being highly aware of privacy issues and yet nevertheless upload significant amounts of personal information to their Facebook accounts.

These researchers have addressed this issue from a somewhat anthropological perspective and concluded that the paradox can be explained by how ritualistically integrated social media sites become into users' lives:

"Social network sites deeply penetrate their users' everyday life and, as pervasive technology, tend to become invisible once they are widely adopted, ubiquitous and taken for granted" (Debatin et al. 83).

This significantly reduces users' practical concern for privacy even as they understand the concern in theory. 

This also calls attention to a broader point regarding online privacy: the violation of privacy can occur not only because of direct malfeasance on the part of a given stakeholder but also due to ignorance on the part of Internet users. For example, a given Facebook user may upload compromising photographs onto his account without even thinking about the fact that (for example) his boss could access those photographs in a fully legal and legitimate way.

Something similar can be said about other forms of online communication as well: insofar as people are either not aware that they are being watched or do not act on the implications of this awareness, violations of privacy can occur in a "passive" as opposed to "active" way. That is, Internet users would make their lives essentially open to the public without even realizing they are doing so; and anyone who wanted the users' information could simply take it without even being in violation of the law. In this context, the NSA revelations may ultimately have had at least some positive effect, insofar as they have clearly contributed to Americans becoming more aware of the nature of the dangers at play (see Madden).

Privacy versus transparency

It is worth turning now to a more careful reflection on the ethical principles that are involved when discussing the subject of online privacy. In particular, one perceives something of a double standard that nevertheless is not illegitimate. On the one hand, when the NSA conducts a secret surveillance program against Americans, this is generally interpreted to mean that the NSA fundamentally lacks transparency, which is unacceptable for a governmental organization within a democratic society.

On the other hand, when Snowden leaks the NSA's "private" information to the public, this is generally interpreted not as an assault on the NSA's privacy but rather as a defense of the privacy of American citizens. Similarly, Facebook cannot demand that its users become "transparent" and thus willing to share private information with the public; rather, it is expected that Facebook will be transparent about its privacy policy and ensure the security of its users' information.

The general principle that emerges here, then, is that organizations are expected to be transparent toward their constituents, but that individual constituents are not therefore obliged to be transparent in turn toward the organization. This proposition would seem to be informed by a fundamentally libertarian political perspective, where libertarianism can be understood in the broadest sense as a

"political philosophy that affirms the rights of individuals to liberty, to acquire, keep, and exchange their holdings, and considers the protection of individual rights the primary role of the state" (Vallentyne and van der Vossen, paragraph 1).

Essentially, privacy is among the most individualistic of all ethical values; and whenever one insists on the value of privacy, one implicitly also insists on the priority of the individual over the community. Thus, if it is inappropriate for the NSA to spy on its own people, then this is because the NSA was only established by the constituent power of the people in the first place, and its fundamental obligation would thus be to protect the rights of those people (i.e. Americans) and not to violate those rights. 

As a caveat, though, it should be noted that the extent to which this logic obtains depends on how much danger one imagines the community as a whole to be in. In general, when the existence of a community is threatened, emergency powers are granted to executives, and individual citizens are expected to compromise on their own rights until the danger has passed. The question then becomes: at what point does such an invasion of online privacy contradict the United States Constitution?

The logic here would be that if the community were to be annihilated, then this would cause serious harm to all members of the community. This would explain why it is only an appeal to national security that can even potentially justify activities such as the NSA's Prism surveillance program. In this context, Snowden's revelations may have been especially devastating because they showed that much of the information collected by the NSA had nothing to do with national security, and that the NSA thus had no right to violate its basic obligation regarding transparency. 

In summary, this essay has consisted of a discussion of issues regarding online privacy in the modern United States. Subjects addressed have included the NSA and Snowden, the case of Facebook, and the interplay between the ethical values of privacy and transparency. Ultimately, these issues are related to political questions regarding the meaning of democracy, the rights and prerogatives of the individual, and the obligations that organizations have toward their constituents. Understood in this way, issues regarding online privacy merely continue discussions that have historically been in progress for quite a long time.


Works Cited

Debatin, Bernhard, Jennette P. Lovejoy, Ann-Kathrin Horn, and Brittany N. Hughes. "Facebook and Online Privacy: Attitudes, Behaviors, and Unintended Consequences." Journal of Computer-Mediated Communication 15 (2009): 83-108.

Electronic Frontier Foundation. NSA Spying on Americans. 2014. Web. 1 Dec. 2014. <https://www.eff.org/nsa-spying/>.

Gellman, Barton, Julie Tate, and Askan Soltani. "In NSA-Intercepted Data, Those Not Targeted Far Outnumber the Foreigners Who Are." The Washington Post 5 Jul. 2014. Web. 1 Dec. 2014. <http://www.washingtonpost.com/world/national-security/in-nsa-intercepted-data-those-not-targeted-far-outnumber-the-foreigners-who-are/2014/07/05/8139adf8-045a-11e4-8572-4b1b969b6322_story.html>.

Greenwald, Glenn, and Ewen MacAskill. "NSA Prism Program Taps in to User Data of Apple, Google and Others." The Guardian 7 Jun. 2013. Web. 1 Dec. 2014. <http://www.theguardian.com/world/2013/jun/06/us-tech-giants-nsa-data>.

Library of Congress. "H.R.3162: Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism (USA PATRIOT ACT) Act of 2001." THOMAS, 2001. Web. 1 Dec. 2014. <http://thomas.loc.gov/cgi-bin/bdquery/z?d107:H.R.3162:>.

Madden, Mary. "Public Perceptions of Privacy and Security in the Post-Snowden Era." Pew Research Internet Project, 12 Nov. 2014. Web. 1 Dec. 2014. <http://www.pewinternet.org/2014/11/12/public-privacy-perceptions/#ftag=YHFb1d24ec>.

Vallentyne, Peter, and Bas van der Vossen. "Libertarianism." Stanford Encyclopedia of Philosophy. 2014. Web. 1 Dec. 2014. <http://plato.stanford.edu/entries/libertarianism/>.

https://www.ultius.com/ultius-blog/entry/sample-essay-on-online-privacy.html





    Over 30 years ago, Mason (Citation 1986) voiced ethical concerns over the protection of informational privacy, or "the ability of the individual to personally control information about one's self" (Stone et al., Citation 1983), calling it one of the four ethical issues of the information age.Since the 1980s, scholars have remained concerned about informational privacy, especially given ...

  14. Privacy Essay

    In today's society, the word "privacy" has become ubiquitous. When discussing whether government surveillance and data collection pose a threat to privacy, the most common retort against privacy advocates - by those in favor of databases, video surveillance, spyware, data mining and other modern surveillance measures - is this line: "If I'm not doing anything wrong, what would I ...

  15. National Security Versus Personal Privacy Essay (Critical Writing)

    The majority of the critiques argue that sacrificing personal privacy in the direct sense violates the liberty of Americans (Noble, 2013). On the other hand, the critics that oppose this idea relate the events of September 11, 2001 to a bleach of security through the increased personal privacy prevailing in the country.

  16. Personal Privacy

    813 Words. 4 Pages. Open Document. Abstract. The purpose of this paper is to explain what I use to protect my personal privacy. There are many different things that can be use to protect someone's information and keep hackers from accessing their computer. Some people use anti-virus, firewalls and anti-spam software to help protect their ...

  17. Privacy in an AI Era: How Do We Protect Our Personal Information?

    On the input side I'm referring to the training data piece, which is where we worry about whether an individual's personal information is being scraped from the internet and included in a system's training data. In turn, the presence of our personal information in the training set potentially has an influence on the output side.

  18. The Issue of Personal Privacy in Orwell's 1984 and Today

    Specifically, how people all over the world can communicate with one another and stay connected. In 1984, "the Party" banned relationships between individuals; but, people today can forge friendships and talk freely with social media. Thus, technology, if used the right way, can benefit society rather than damage it.

  19. Personal Privacy Essay Examples

    Introduction Background The intelligence service's role in the United States is to locate hotspots of possible threats to the country's security and identify individuals with ill-will intentions toward the general operations of the US.

  20. Personal privacy vs. public security

    Personal privacy is a . Most people used to live in tight-knit communities, constantly enmeshed in each other's lives. The notion that privacy is an important part of personal security is even ...

  21. Personal Privacy Essays

    The definition given in the Merriam-Webster's dictionary of privacy is, "the quality or state of being apart from company or observation.". And "personal" meaning, "Relating to the person or body.". In other words: setting oneself apart from the observation or company of others. It seems simple;

  22. Sample Essay on Online Privacy

    Conclusion. In summary, this essay has consisted of a discussion of issues regarding online privacy in the modern United States. Subjects addressed have included the NSA and Snowden, the case of Facebook, and the interplay within the ethical values of privacy and transparency. Ultimately, these issues are related to political questions ...

  23. Privacy Vs Security Essay

    The technological advancements humans are making are undoubtedly the need of the house but they also come with some repercussions. Like tapping of the phone calls, tracking of the sites we have visited lately, etc.

  24. Personal Privacy: Surveillance In The World

    Personal privacy is something that everyone wants and is constantly being difficult to have because many people are always being on each other lives and also privacy should have the ability to protect personal information and really protection is a security component. Also, security is something many people are concern about since it controls ...

  25. Internet and Personal Privacy Essay

    The fact that many people now carry out their transactions electronically is another important factor. There is also pressure on personal privacy for increased national security around the world to combat terrorism. In addition, personal privacy is even threatened by commercial factors and the Internet. 3843 Words.

  26. A personal information privacy perceptions model for university

    The students' perceptions were determined for the following privacy constructs: privacy awareness; privacy expectations; and student confidence in the university's data privacy practices. In this study, a quantitative research method using a cross-sectional survey with a closed-ended questionnaire was adopted to collect data from 284 ...

  27. Cyber Security Personal Statement: [Essay Example], 683 words

    Protecting Personal Information. Another critical aspect of cyber security is the protection of personal information. In our digitally interconnected world, our personal data is more vulnerable than ever before. From social media platforms to online banking, our lives are intricately woven into the digital fabric.