
Social Media Is a Threat to Privacy, Essay Example


Introduction

Social networking has become a global phenomenon with the proliferation of platforms such as Facebook, Twitter, and Instagram. In many ways, social media has replaced conventional modes of communication such as the telephone and email. People keep in touch with others by sharing experiences and photographs, and in many cases they exchange personal information. Social media users usually post private information as part of the process of getting to know one another. Since social media exposes users to large numbers of people they do not know, there is an increased risk of exposing personal details to cybercriminals.

Social media is a threat to privacy

Social media has heightened privacy concerns on online platforms. Although these platforms are effective for connecting with family and friends, they can also endanger private information. Individuals create social media profiles that may expose their private information. According to research conducted at Carnegie Mellon University, the information found on social media is sufficient to guess a person's Social Security number, which can lead to identity theft. With the advent of mobile banking applications, more people are storing sensitive data on their smartphones, which can further endanger their privacy.

Another group whose privacy is in danger is teenagers. Teenagers post a significant amount of information online, which makes it vitally important for them to understand who they share information with and to use privacy settings. However, most teenagers are interested in capturing the attention of their peers and, in the process, post information that may enhance their status. This information may not seem harmful, but cybercriminals can exploit it to gain access to the teenagers' parents.

Several articles have addressed the threat to privacy on online platforms such as social media. In "Online Privacy" (Current Health 2), Given (2008) argues that online predators can piece together information posted on online platforms and use it against the user. Additionally, employers can use online platforms to check up on their employees. With the proliferation of electronic health records, cybercriminals can access an individual's health information. This presents a significant danger to the individuals concerned because such records contain vital information such as Social Security numbers and insurance details.

In her article titled "Should You Panic About Online Privacy?" Palmer (2010) notes that online platforms are a threat to privacy and that individuals must take measures to protect their personal data. Due to the threat posed by online environments, Palmer suggests several strategies that users can employ to safeguard their privacy. One is to remove the birth year from a social media profile, because the full birth year is often used by banks to categorize their clients; cybercriminals can use such information to access online banking systems and compromise a user's safety. Another suggestion is to use antivirus and anti-spyware software, which prevents criminals from exploiting a computer to access confidential information.
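Palmer's first suggestion, stripping sensitive fields from a profile before it is shared, can be sketched in a few lines of Python. This is purely illustrative; the field names and profile structure are hypothetical, not taken from any real platform's API.

```python
# Illustrative sketch (hypothetical field names): remove sensitive
# fields, such as the full birth year, before a profile is shared.
SENSITIVE_FIELDS = {"birth_year", "phone", "home_address"}

def scrub_profile(profile: dict) -> dict:
    """Return a copy of the profile with sensitive fields removed."""
    return {k: v for k, v in profile.items() if k not in SENSITIVE_FIELDS}

profile = {
    "name": "Jane Doe",
    "birth_year": 1990,
    "phone": "555-0100",
    "interests": ["music", "hiking"],
}
public_profile = scrub_profile(profile)
print(public_profile)  # birth_year and phone no longer appear
```

The point of the sketch is simply that removal has to happen before posting; once the birth year is public, it cannot be recalled.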

Although the New Yorker piece titled "The Face of Facebook" by Jose Antonio Vargas presents information about Mark Zuckerberg that is in the public domain, it illustrates how Facebook profiles reveal private and confidential information to virtually anyone on the site. Facebook is a directory of global citizens that affords people the chance to create public identities. Friends can access this information; friends of friends can access some of it, and some is available to anyone interested. Although the company has changed its privacy policies several times, it still exposes private information in several ways. From Zuckerberg's profile, it is possible to learn that he has three sisters, where he went to school, his favorite comedians and musicians, and his interests. His friends can also access his cell phone number and email address. Additionally, a feature known as Places, which allows users to mark their location, means that someone interested in Zuckerberg's whereabouts can find them at any time. The article reveals how easy it is to access a person's private information on Facebook.

Plagiarism, which is using another person's ideas or creations without giving credit to that person, is another concern on social media. Individuals take information from other sources or individuals and present it as their own, a common practice on social media. The lack of attribution and the fabrication of content are the real issues, because users seldom credit the source of the content. Although social media is meant for connecting with friends and family, it is also used as a social aggregator, which makes it important to link to the sources of shared content.

Social media platforms raise privacy concerns because others can exploit information that is innocently posted on these sites. Cybercriminals can exploit the information to harm the user. It is important to note that many different people can access information posted online. Users must take significant steps to protect their information by using anti-spyware software and omitting sensitive information from their profiles.

References

Given, M. (2008). Online privacy. Current Health 2. Retrieved from Academic Search Complete.

Palmer, L. (2010, August). Should you panic about online privacy? Redbook, 215(2), 130.

Vargas, J. A. (2010, September 20). The face of Facebook. The New Yorker. Retrieved from http://www.newyorker.com/magazine/2010/09/20/the-face-of-facebook


Americans’ complicated feelings about social media in an era of privacy concerns


Amid public concerns over Cambridge Analytica’s use of Facebook data and a subsequent movement to encourage users to abandon Facebook, there is a renewed focus on how social media companies collect personal information and make it available to marketers.

Pew Research Center has studied the spread and impact of social media since 2005, when just 5% of American adults used the platforms. The trends tracked by our data tell a complex story that is full of conflicting pressures. On one hand, the rapid growth of the platforms is testimony to their appeal to online Americans. On the other, this widespread use has been accompanied by rising user concerns about privacy and social media firms’ capacity to protect their data.

All this adds up to a mixed picture about how Americans feel about social media. Here are some of the dynamics.

People like and use social media for several reasons


The Center’s polls have found over the years that people use social media for important social interactions like staying in touch with friends and family and reconnecting with old acquaintances. Teenagers are especially likely to report that social media are important to their friendships and, at times, their romantic relationships.

Beyond that, we have documented how social media play a role in the way people participate in civic and political activities, launch and sustain protests, get and share health information, gather scientific information, engage in family matters, perform job-related activities and get news. Indeed, social media is now just as common a pathway to news for people as going directly to a news organization website or app.

Our research has not established a causal relationship between people’s use of social media and their well-being. But in a 2011 report, we noted modest associations between people’s social media use and higher levels of trust, larger numbers of close friends, greater amounts of social support and higher levels of civic participation.

People worry about privacy and the use of their personal information

While there is evidence that social media works in some important ways for people, Pew Research Center studies have shown that people are anxious about all the personal information that is collected and shared and the security of their data.

Overall, a 2014 survey found that 91% of Americans “agree” or “strongly agree” that people have lost control over how personal information is collected and used by all kinds of entities. Some 80% of social media users said they were concerned about advertisers and businesses accessing the data they share on social media platforms, and 64% said the government should do more to regulate advertisers.


Moreover, people struggle to understand the nature and scope of the data collected about them. Just 9% believe they have “a lot of control” over the information that is collected about them, even as the vast majority (74%) say it is very important to them to be in control of who can get information about them.

Six-in-ten Americans (61%) have said they would like to do more to protect their privacy. Additionally, two-thirds have said current laws are not good enough in protecting people’s privacy, and 64% support more regulation of advertisers.

Some privacy advocates hope that the European Union’s General Data Protection Regulation, which goes into effect on May 25, will give users – even Americans – greater protections over what data tech firms can collect, how the data can be used, and how consumers can be given more opportunities to see what is happening with their information.

People’s issues with the social media experience go beyond privacy

In addition to the concerns about privacy and social media platforms uncovered in our surveys, related research shows that just 5% of social media users trust the information that comes to them via the platforms “a lot.”


A considerable number of social media users said they simply ignored political arguments when they broke out in their feeds. Others went steps further by blocking or unfriending those who offended or bugged them.

Why do people leave or stay on social media platforms?

The paradox is that people use social media platforms even as they express great concern about the privacy implications of doing so – and the social woes they encounter. The Center’s most recent survey about social media found that 59% of users said it would not be difficult to give up these sites, yet the share saying these sites would be hard to give up grew 12 percentage points from early 2014.

Some of the answers about why people stay on social media could tie to our findings about how people adjust their behavior on the sites and online, depending on personal and political circumstances. For instance, in a 2012 report we found that 61% of Facebook users said they had taken a break from using the platform. Among the reasons people cited were that they were too busy to use the platform, they lost interest, they thought it was a waste of time and that it was filled with too much drama, gossip or conflict.

In other words, participation on the sites for many people is not an all-or-nothing proposition.

People pursue strategies to try to avoid problems on social media and the internet overall. Fully 86% of internet users said in 2012 they had taken steps to try to be anonymous online. “Hiding from advertisers” was relatively high on the list of those they wanted to avoid.

Many social media users fine-tune their behavior to try to make things less challenging or unsettling on the sites, including changing their privacy settings and restricting access to their profiles. Still, 48% of social media users reported in a 2012 survey they have difficulty managing their privacy controls.

After National Security Agency contractor Edward Snowden disclosed details about government surveillance programs starting in 2013, 30% of adults said they took steps to hide or shield their information and 22% reported they had changed their online behavior in order to minimize detection.

One other argument that some experts make in Pew Research Center canvassings about the future is that people often find it hard to disconnect because so much of modern life takes place on social media. These experts believe that unplugging is hard because social media and other technology affordances make life convenient and because the platforms offer a very efficient, compelling way for users to stay connected to the people and organizations that matter to them.

Note: See topline results for overall social media user data here (PDF).



The Erosion of Privacy: Social Media’s Impact on Personal Information 

Social media has become an integral part of our lives in the digital age, providing a platform for sharing experiences, opinions, and personal information. However, this convenience comes at a price: the erosion of our privacy. Social media encourages users to share vast amounts of personal information, often without fully understanding the implications.


The impact of personal information leakage on social media is far-reaching and can have significant consequences. Take Zoey He, for example, a former online celebrity with a considerable following. As her popularity grew, she found herself grappling with the distress caused by the interconnectivity of information across multiple platforms. What started as sharing her daily life on Instagram soon spilled over to Facebook and other platforms, making her feel like her personal space was invaded.


As Zoey gained more followers, she faced escalating consequences. Most concerning, when she shared a video about her job, someone cleverly gleaned details from it, found her LinkedIn profile, and emailed her employer based on what she had accidentally revealed about her work. This intrusion into her professional life had detrimental effects on her job. The incident led her to a realization: “I came to understand that finding a balance between personal sharing and privacy protection on social media is a challenge.” To protect herself, Zoey gave up on becoming an influencer, made her account private, and now shares updates only with close friends.


Social media platforms thrive on user engagement and data-driven business models. They entice users to disclose personal details through various means, such as prompts to complete profiles, sharing location, and interactions with friends and family. The more information shared, the better platforms can tailor content and advertisements, creating an addictive user experience.
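The tailoring logic described above can be illustrated with a toy scoring function. The data and the scoring rule here are hypothetical, invented purely to show why a richer profile lets a platform match content more precisely.

```python
# Toy sketch (hypothetical data): the more interest signals a profile
# holds, the more precisely content can be matched to it.
def relevance(post_tags: set, user_interests: set) -> float:
    """Fraction of a post's tags that match the user's known interests."""
    if not post_tags:
        return 0.0
    return len(post_tags & user_interests) / len(post_tags)

post = {"hiking", "travel"}
sparse_profile = {"music"}                                # little shared
rich_profile = {"music", "hiking", "cooking", "travel"}   # much shared

print(relevance(post, sparse_profile))  # low score: weak targeting
print(relevance(post, rich_profile))    # high score: precise targeting
```

Even this crude overlap measure shows the incentive: every additional disclosed interest improves the match, which is why platforms prompt users to keep filling in their profiles.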


Join us on a journey to unravel the reasoning behind social media’s user data collection as we delve into a conversation with business expert Pham Thanh Thao, formerly a Research Manager at NielsenIQ. Get ready to gain valuable insights from her perspective on this intricate topic.




Why protecting privacy is a losing game today—and how to change the game

Cameron F. Kerry, Ann R. and Andrew H. Tisch Distinguished Visiting Fellow, Governance Studies, Center for Technology Innovation

July 12, 2018

Recent congressional hearings and data breaches have prompted more legislators and business leaders to say the time for broad federal privacy legislation has come. Cameron Kerry presents the case for adoption of a baseline framework to protect consumer privacy in the U.S.

Kerry explores a growing gap between existing laws and an information Big Bang that is eroding trust. He suggests that recent privacy bills have not been ambitious enough, and points to the Obama administration’s Consumer Privacy Bill of Rights as a blueprint for future legislation. Kerry considers ways to improve that proposal, including an overarching “golden rule of privacy” to ensure people can trust that data about them is handled in ways consistent with their interests and the circumstances in which it was collected.

Table of Contents

  • Introduction: Game change?
  • How current law is falling behind
  • Shaping laws capable of keeping up


Introduction: Game change?

There is a classic episode of the show “I Love Lucy” in which Lucy goes to work wrapping candies on an assembly line. The line keeps speeding up with the candies coming closer together and, as they keep getting farther and farther behind, Lucy and her sidekick Ethel scramble harder and harder to keep up. “I think we’re fighting a losing game,” Lucy says.

This is where we are with data privacy in America today. More and more data about each of us is being generated faster and faster from more and more devices, and we can’t keep up. It’s a losing game both for individuals and for our legal system. If we don’t change the rules of the game soon, it will turn into a losing game for our economy and society.


The Cambridge Analytica drama has been the latest in a series of eruptions that have caught people’s attention in ways that a steady stream of data breaches and misuses of data have not.

The first of these shocks was the Snowden revelations in 2013. These made for long-running and headline-grabbing stories that shined light on the amount of information about us that can end up in unexpected places. The disclosures also raised awareness of how much can be learned from such data (“we kill people based on metadata,” former NSA and CIA Director Michael Hayden said).

The aftershocks were felt not only by the government, but also by American companies, especially those whose names and logos showed up in Snowden news stories. They faced suspicion from customers at home and market resistance from customers overseas. To rebuild trust, they pushed to disclose more about the volume of surveillance demands and for changes in surveillance laws. Apple, Microsoft, and Yahoo all engaged in public legal battles with the U.S. government.

Then came last year’s Equifax breach that compromised identity information of almost 146 million Americans. It was not bigger than some of the lengthy roster of data breaches that preceded it, but it hit harder because it rippled through the financial system and affected individual consumers who never did business with Equifax directly but nevertheless had to deal with the impact of its credit scores on economic life. For these people, the breach was another demonstration of how much important data about them moves around without their control, but with an impact on their lives.

Now the Cambridge Analytica stories have unleashed even more intense public attention, complete with live network TV cut-ins to Mark Zuckerberg’s congressional testimony. Not only were many of the people whose data was collected surprised that a company they never heard of got so much personal information, but the Cambridge Analytica story touches on all the controversies roiling around the role of social media in the cataclysm of the 2016 presidential election. Facebook estimates that Cambridge Analytica was able to leverage its “academic” research into data on some 87 million Americans (while before the 2016 election Cambridge Analytica’s CEO Alexander Nix boasted of having profiles with 5,000 data points on 220 million Americans). With over two billion Facebook users worldwide, a lot of people have a stake in this issue and, like the Snowden stories, it is getting intense attention around the globe, as demonstrated by Mark Zuckerberg taking his legislative testimony on the road to the European Parliament.

The Snowden stories forced substantive changes to surveillance with enactment of U.S. legislation curtailing telephone metadata collection and increased transparency and safeguards in intelligence collection. Will all the hearings and public attention on Equifax and Cambridge Analytica bring analogous changes to the commercial sector in America?

I certainly hope so. I led the Obama administration task force that developed the “Consumer Privacy Bill of Rights” issued by the White House in 2012 with support from both businesses and privacy advocates, and then drafted legislation to put this bill of rights into law. The legislative proposal issued after I left the government did not get much traction, so this initiative remains unfinished business.

The Cambridge Analytica stories have spawned fresh calls for some federal privacy legislation from members of Congress in both parties, editorial boards, and commentators. With their marquee Zuckerberg hearings behind them, senators and congressmen are moving on to think about what do next. Some have already introduced bills and others are thinking about what privacy proposals might look like. The op-eds and Twitter threads on what to do have flowed. Various groups in Washington have been convening to develop proposals for legislation.

This time, proposals may land on more fertile ground. The chair of the Senate Commerce Committee, John Thune (R-SD) said “many of my colleagues on both sides of the aisle have been willing to defer to tech companies’ efforts to regulate themselves, but this may be changing.” A number of companies have been increasingly open to a discussion of a basic federal privacy law. Most notably, Zuckerberg told CNN “I’m not sure we shouldn’t be regulated,” and Apple’s Tim Cook expressed his emphatic belief that self-regulation is no longer viable.


This is not just about damage control or accommodation to “techlash” and consumer frustration. For a while now, events have been changing the way that business interests view the prospect of federal privacy legislation. An increasing spread of state legislation on net neutrality, drones, educational technology, license plate readers, and other subjects, and especially broad new legislation in California pre-empting a ballot initiative, has made the possibility of a single set of federal rules across all 50 states look attractive. For multinational companies that have spent two years gearing up for compliance with the new data protection law that has now taken effect in the EU, dealing with a comprehensive U.S. law no longer looks as daunting. And more companies are seeing value in a common baseline that can provide people with reassurance about how their data is handled and protected against outliers and outlaws.

This change in the corporate sector opens the possibility that these interests can converge with those of privacy advocates in comprehensive federal legislation that provides effective protections for consumers. Trade-offs to get consistent federal rules that preempt some strong state laws and remedies will be difficult, but with a strong enough federal baseline, action can be achievable.

How current law is falling behind

Snowden, Equifax, and Cambridge Analytica provide three conspicuous reasons to take action. There are really quintillions of reasons. That’s how fast IBM estimates we are generating digital information, quintillions of bytes of data every day—a number followed by 30 zeros. This explosion is generated by the doubling of computer processing power every 18-24 months that has driven growth in information technology throughout the computer age, now compounded by the billions of devices that collect and transmit data, storage devices and data centers that make it cheaper and easier to keep the data from these devices, greater bandwidth to move that data faster, and more powerful and sophisticated software to extract information from this mass of data. All this is both enabled and magnified by the singularity of network effects—the value that is added by being connected to others in a network—in ways we are still learning.

This information Big Bang is doubling the volume of digital information in the world every two years. The data explosion that has put privacy and security in the spotlight will accelerate. Futurists and business forecasters debate just how many tens of billions of devices will be connected in the coming decades, but the order of magnitude is unmistakable—and staggering in its impact on the quantity and speed of bits of information moving around the globe. The pace of change is dizzying, and it will get even faster—far more dizzying than Lucy’s assembly line.
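The compounding described here is easy to check with back-of-the-envelope arithmetic. The sketch below is illustrative only; it assumes the stated doubling period and is not taken from IBM's estimates.

```python
# Sketch of the "doubling every two years" claim: starting from one
# unit of data, volume after n years is 2 ** (n / doubling_period).
def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    """Multiplicative growth after `years`, given the doubling period."""
    return 2 ** (years / doubling_period)

print(growth_factor(10))  # 32.0  -- a 32-fold increase in one decade
print(growth_factor(20))  # 1024.0 -- over a thousand-fold in two
```

Even at this modest-sounding rate, a decade multiplies the world's data volume by 32, which is why laws written for a smaller, slower data landscape fall behind so quickly.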

Most recent proposals for privacy legislation aim at slices of the issues this explosion presents. The Equifax breach produced legislation aimed at data brokers. Responses to the role of Facebook and Twitter in public debate have focused on political ad disclosure, what to do about bots, or limits to online tracking for ads. Most state legislation has targeted specific topics like use of data from ed-tech products, access to social media accounts by employers, and privacy protections from drones and license-plate readers. Facebook’s simplification and expansion of its privacy controls and recent federal privacy bills in reaction to events focus on increasing transparency and consumer choice. So does the newly enacted California Privacy Act.


Measures like these double down on the existing American privacy regime. The trouble is, this system cannot keep pace with the explosion of digital information, and the pervasiveness of this information has undermined key premises of these laws in ways that are increasingly glaring. Our current laws were designed to address collection and storage of structured data by government, business, and other organizations and are busting at the seams in a world where we are all connected and constantly sharing. It is time for a more comprehensive and ambitious approach. We need to think bigger, or we will continue to play a losing game.

Our existing laws developed as a series of responses to specific concerns, a checkerboard of federal and state laws, common law jurisprudence, and public and private enforcement that has built up over more than a century. It began with the famous Harvard Law Review article by (later) Justice Louis Brandeis and his law partner Samuel Warren in 1890 that provided a foundation for case law and state statutes for much of the 20th Century, much of which addressed the impact of mass media on individuals who wanted, as Warren and Brandeis put it, “to be let alone.” The advent of mainframe computers saw the first data privacy laws adopted in 1974 to address the power of information in the hands of big institutions like banks and government: the federal Fair Credit Reporting Act that gives us access to information on credit reports and the Privacy Act that governs federal agencies. Today, our checkerboard of privacy and data security laws covers data that concerns people the most. These include health data, genetic information, student records and information pertaining to children in general, financial information, and electronic communications (with differing rules for telecommunications carriers, cable providers, and emails).

The territory outside of these specific sectors is not a completely lawless zone. With Alabama adopting a law last April, all 50 states now have laws requiring notification of data breaches (with variations in who has to be notified, how quickly, and in what circumstances). By making organizations focus on personal data and how they protect it, reinforced by exposure to public and private enforcement litigation, these laws have had a significant impact on privacy and security practices. In addition, since 2003, the Federal Trade Commission—under both Republican and Democratic majorities—has used its enforcement authority to regulate unfair and deceptive commercial practices and to police unreasonable privacy and information security practices. This enforcement, mirrored by many state attorneys general, has relied primarily on deceptiveness, based on failures to live up to privacy policies and other privacy promises.

These levers of enforcement in specific cases, as well as public exposure, can be powerful tools to protect privacy. But, in a world of technology that operates on a massive scale, moving fast and doing things because it can, reacting to particular abuses after the fact does not provide enough guardrails.

As the data universe keeps expanding, more and more of it falls outside the various specific laws on the books. This includes most of the data we generate through such widespread uses as web searches, social media, e-commerce, and smartphone apps. The changes come faster than legislation or regulatory rules can adapt, and they erase the sectoral boundaries that have defined our privacy laws. Take my smart watch, for one example: data it generates about my heart rate and activity is covered by the Health Insurance Portability and Accountability Act (HIPAA) if it is shared with my doctor, but not when it goes to fitness apps like Strava (where I can compare my performance with my peers). Either way, it is the same data, just as sensitive to me and just as much of a risk in the wrong hands.


It makes little sense that protection of data should depend entirely on who happens to hold it. This arbitrariness will spread as more and more connected devices are embedded in everything from clothing to cars to home appliances to street furniture. Add to that striking changes in patterns of business integration and innovation—traditional telephone providers like Verizon and AT&T are entering entertainment, startups launch into the provinces of financial institutions like currency trading and credit, and all kinds of enterprises compete for space in the autonomous vehicle ecosystem—and the sectoral boundaries that have defined U.S. privacy protection cease to make any sense.

Putting so much data into so many hands also is changing the nature of information that is protected as private. To most people, “personal information” means information like social security numbers, account numbers, and other information that is unique to them. U.S. privacy laws reflect this conception by aiming at “personally identifiable information,” but data scientists have repeatedly demonstrated that this focus can be too narrow. The aggregation and correlation of data from various sources make it increasingly possible to link supposedly anonymous information to specific individuals and to infer characteristics and information about them. The result is that today, a widening range of data has the potential to be personal information, i.e. to identify us uniquely. Few laws or regulations address this new reality.

Nowadays, almost every aspect of our lives is in the hands of some third party somewhere. This challenges judgments about “expectations of privacy” that have been a major premise for defining the scope of privacy protection. These judgments present binary choices: if private information is somehow public or in the hands of a third party, people often are deemed to have no expectation of privacy. This is particularly true when it comes to government access to information—emails, for example, are nominally less protected under our laws once they have been stored 180 days or more, and articles and activities in plain sight are considered categorically available to government authorities. But the concept also gets applied to commercial data in terms and conditions of service and to scraping of information on public websites, for two examples.

As more devices and sensors are deployed in the environments we pass through as we carry on our days, privacy will become impossible if we are deemed to have surrendered it simply by going about the world or by sharing it with any other person. Plenty of people have said privacy is dead, starting most famously with Sun Microsystems’ Scott McNealy back in the 20th century (“you have zero privacy … get over it”) and echoed by a chorus of despairing writers since then. Without normative rules to provide a more constant anchor than shifting expectations, true privacy actually could be dead or dying.

The Supreme Court in its recent Carpenter decision recognized how constant streams of data about us change the ways that privacy should be protected. In holding that law enforcement acquisition of cell phone location records requires a warrant, the Court considered the “detailed, encyclopedic, and effortlessly compiled” information available from cell service location records and “the seismic shifts in digital technology” that made these records available, and concluded that people do not necessarily surrender privacy interests simply because the data they generate is collected or because they engage in behavior that can be observed publicly. While there was disagreement among the Justices as to the sources of privacy norms, two of the dissenters, Justices Alito and Gorsuch, pointed to “expectations of privacy” as vulnerable because they can erode or be defined away.

How this landmark privacy decision affects a wide variety of digital evidence will play out in criminal cases and not in the commercial sector. Nonetheless, the opinions in the case point to a need for a broader set of norms to protect privacy in settings that have been thought to make information public. Privacy can endure, but it needs a more enduring foundation.

Our existing laws also rely heavily on notice and consent—the privacy notices and privacy policies that we encounter online or receive from credit card companies and medical providers, and the boxes we check or forms we sign. These declarations are what provide the basis for the FTC to find deceptive acts and practices when companies fail to do what they said. This system follows the model of informed consent in medical care and human subject research, where consent is often asked for in person, and was imported into internet privacy in the 1990s. The notion of U.S. policy then was to foster growth of the internet by avoiding regulation and promoting a “market resolution” in which individuals would be informed about what data is collected and how it would be processed, and could make choices on this basis.

Maybe informed consent was practical two decades ago, but it is a fantasy today. In a constant stream of online interactions, especially on the small screens that now account for the majority of usage, it is unrealistic to read through privacy policies. And people simply don’t.

It is not simply that any particular privacy policies “suck,” as Senator John Kennedy (R-LA) put it in the Facebook hearings. Zeynep Tufekci is right that these disclosures are obscure and complex. Some forms of notice are necessary and attention to user experience can help, but the problem will persist no matter how well designed disclosures are. I can attest that writing a simple privacy policy is challenging, because these documents are legally enforceable and need to explain a variety of data uses; you can be simple and say too little or you can be complete but too complex. These notices have some useful function as a statement of policy against which regulators, journalists, privacy advocates, and even companies themselves can measure performance, but they are functionally useless for most people, and we rely on them to do too much.

At the end of the day, it is simply too much to read through even the plainest English privacy notice, and being familiar with the terms and conditions or privacy settings for all the services we use is out of the question. The recent flood of emails about privacy policies and consent forms we have gotten with the coming of the EU General Data Protection Regulation has offered new controls over what data is collected or information communicated, but how much has it really added to people’s understanding? Wall Street Journal reporter Joanna Stern attempted to analyze all the ones she received (enough paper printed out to stretch more than the length of a football field), but resorted to scanning for a few specific issues. In today’s world of constant connections, solutions that focus on increasing transparency and consumer choice are an incomplete response to current privacy challenges.

Moreover, individual choice becomes utterly meaningless as increasingly automated data collection leaves no opportunity for any real notice, much less individual consent. We don’t get asked for consent to the terms of surveillance cameras on the streets or “beacons” in stores that pick up cell phone identifiers, and house guests aren’t generally asked if they agree to homeowners’ smart speakers picking up their speech. At best, a sign may be posted somewhere announcing that these devices are in place. As devices and sensors increasingly are deployed throughout the environments we pass through, some after-the-fact access and control can play a role, but old-fashioned notice and choice become impossible.

Ultimately, the familiar approaches ask too much of individual consumers. As the President’s Council of Advisors on Science and Technology found in a 2014 report on big data, “the conceptual problem with notice and choice is that it fundamentally places the burden of privacy protection on the individual,” resulting in an unequal bargain, “a kind of market failure.”

This is an impossible burden that creates an enormous disparity of information between the individual and the companies they deal with. As Frank Pasquale ardently dissects in his “Black Box Society,” we know very little about how the businesses that collect our data operate. There is no practical way even a reasonably sophisticated person can get their arms around the data that they generate and what that data says about them. After all, making sense of the expanding data universe is what data scientists do. Post-docs and Ph.D.s at MIT (where I am a visiting scholar at the Media Lab) as well as tens of thousands of data researchers like them in academia and business are constantly discovering new information that can be learned from data about people and new ways that businesses can—or do—use that information. How can the rest of us who are far from being data scientists hope to keep up?

As a result, the businesses that use the data know far more than we do about what our data consists of and what their algorithms say about us. Add this vast gulf in knowledge and power to the absence of any real give-and-take in our constant exchanges of information, and you have businesses able by and large to set the terms on which they collect and share this data.


This is not a “market resolution” that works. The Pew Research Center has tracked online trust and attitudes toward the internet and companies online. When Pew probed with surveys and focus groups in 2016, it found that “while many Americans are willing to share personal information in exchange for tangible benefits, they are often cautious about disclosing their information and frequently unhappy about what happens to that information once companies have collected it.” Many people are “uncertain, resigned, and annoyed.” There is a growing body of survey research in the same vein. Uncertainty, resignation, and annoyance hardly make a recipe for a healthy and sustainable marketplace, for trusted brands, or for consent of the governed.

Consider the example of the journalist Julia Angwin. She spent a year trying to live without leaving digital traces, which she described in her book “Dragnet Nation.” Among other things, she avoided paying by credit card and established a fake identity to get a card for when she couldn’t avoid using one; searched hard to find encrypted cloud services for most email; adopted burner phones that she turned off when not in use and used very little; and opted for paid subscription services in place of ad-supported ones. More than a practical guide to protecting one’s data privacy, her year of living anonymously was an extended piece of performance art demonstrating how much digital surveillance reveals about our lives and how hard it is to avoid. The average person should not have to go to such obsessive lengths to ensure that their identities or other information they want to keep private stays private. We need a fair game.

Shaping laws capable of keeping up

As policymakers consider how the rules might change, the Consumer Privacy Bill of Rights we developed in the Obama administration has taken on new life as a model. The Los Angeles Times, The Economist, and The New York Times all pointed to this bill of rights in urging Congress to act on comprehensive privacy legislation, and the latter said “there is no need to start from scratch …” Our 2012 proposal needs adapting to changes in technology and politics, but it provides a starting point for today’s policy discussion because of the wide input it got and the widely accepted principles it drew on.

The bill of rights articulated seven basic principles that should be legally enforceable by the Federal Trade Commission: individual control, transparency, respect for the context in which the data was obtained, access and accuracy, focused collection, security, and accountability. These broad principles are rooted in longstanding and globally-accepted “fair information practices principles.” To reflect today’s world of billions of devices interconnected through networks everywhere, though, they are intended to move away from static privacy notices and consent forms to a more dynamic framework, less focused on collection and processing and more on how people are protected in the ways their data is handled. Not a checklist, but a toolbox. This principles-based approach was meant to be interpreted and fleshed out through codes of conduct and case-by-case FTC enforcement—iterative evolution, much the way both common law and information technology developed.


The other comprehensive model that is getting attention is the EU’s newly effective General Data Protection Regulation. For those in the privacy world, this has been the dominant issue ever since it was approved two years ago, but even so, it was striking to hear “the GDPR” tossed around as a running topic of congressional questions for Mark Zuckerberg. The imminence of this law, its application to Facebook and many other American multinational companies, and its contrast with U.S. law made GDPR a hot topic. It has many people wondering why the U.S. does not have a similar law, and some saying the U.S. should follow the EU model.

I dealt with the EU law from the time it was in draft form, while I led U.S. government engagement with the EU on privacy issues alongside developing our own proposal. Its interaction with U.S. law and commerce has been part of my life as an official, a writer and speaker on privacy issues, and a lawyer ever since. There’s a lot of good in it, but it is not the right model for America.


What is good about the EU law? First of all, it is a law—one set of rules that applies to all personal data across the EU. Its focus on individual data rights in theory puts human beings at the center of privacy practices, and the process of complying with its detailed requirements has forced companies to take a close look at what data they are collecting, what they use it for, and how they keep it and share it—which has proved to be no small task. Although the EU regulation is rigid in numerous respects, it can be more subtle than is apparent at first glance. Most notably, its requirement that consent be explicit and freely given is often presented in summary reports as prohibiting collecting any personal data without consent; in fact, the regulation allows other grounds for collecting data and one effect of the strict definition of consent is to put more emphasis on these other grounds. How some of these subtleties play out will depend on how 40 different regulators across the EU apply the law, though. European advocacy groups were already pursuing claims against “les GAFAM” (Google, Amazon, Facebook, Apple, Microsoft) as the regulation went into effect.

The EU law has its origins in the same fair information practice principles as the Consumer Privacy Bill of Rights. But the EU law takes a much more prescriptive and process-oriented approach, spelling out how companies must manage privacy and keep records and including a “right to be forgotten” and other requirements hard to square with our First Amendment. Perhaps more significantly, it may not prove adaptable to artificial intelligence and new technologies like autonomous vehicles that need to aggregate masses of data for machine learning and smart infrastructure. Strict limits on the purposes of data use and retention may inhibit analytical leaps and beneficial new uses of information. A rule requiring human explanation of significant algorithmic decisions will shed light on algorithms and help prevent unfair discrimination but also may curb development of artificial intelligence. These provisions reflect a distrust of technology that is not universal in Europe but is a strong undercurrent of its political culture.

We need an American answer—a more common law approach adaptable to changes in technology—to enable data-driven knowledge and innovation while laying out guardrails to protect privacy. The Consumer Privacy Bill of Rights offers a blueprint for such an approach.

Sure, it needs work, but that’s what the give-and-take of legislating is about. Its language on transparency came out sounding too much like notice-and-consent, for example. Its proposal for fleshing out the application of the bill of rights had a mixed record of consensus results in trial efforts led by the Commerce Department.

It also got some important things right. In particular, the “respect for context” principle is an important conceptual leap. It says that people “have a right to expect that companies will collect, use, and disclose personal data in ways that are consistent with the context in which consumers provide the data.” This breaks from the formalities of privacy notices, consent boxes, and structured data and focuses instead on respect for the individual. Its emphasis on the interactions between an individual and a company and circumstances of the data collection and use derives from the insight of information technology thinker Helen Nissenbaum. To assess privacy interests, “it is crucial to know the context—who is gathering the information, who is analyzing it, who is disseminating and to whom, the nature of the information, the relationships among the various parties, and even larger institutional and social circumstances.”


Context is complicated—our draft legislation listed 11 different non-exclusive factors to assess context. But that is in practice the way we share information and form expectations about how that information will be handled and about our trust in the handler. We bare our souls and our bodies to complete strangers to get medical care, with the understanding that this information will be handled with great care and shared with strangers only to the extent needed to provide care. We share location information with ride-sharing and navigation apps with the understanding that it enables them to function, but Waze ran into resistance when that functionality required a location setting of “always on.” Danny Weitzner, co-architect of the Privacy Bill of Rights, recently discussed how the respect for context principle “would have prohibited [Cambridge Analytica] from unilaterally repurposing research data for political purposes” because it establishes a right “not to be surprised by how one’s personal data is used.” The Supreme Court’s Carpenter decision opens up expectations of privacy in information held by third parties to variations based on the context.

The Consumer Privacy Bill of Rights does not provide any detailed prescription as to how the context principle and other principles should apply in particular circumstances. Instead, the proposal left such application to case-by-case adjudication by the FTC and development of best practices, standards, and codes of conduct by organizations outside of government, with incentives to vet these with the FTC or to use internal review boards similar to those used for human subject research in academic and medical settings. This approach was based on the belief that the pace of technological change and the enormous variety of circumstances involved need more adaptive decisionmaking than current approaches to legislation and government regulations allow. It may be that baseline legislation will need more robust mandates for standards than the Consumer Privacy Bill of Rights contemplated, but any such mandates should be consistent with the deeply embedded preference for voluntary, collaboratively developed, and consensus-based standards that has been a hallmark of U.S. standards development.

In hindsight, the proposal could use a lodestar to guide the application of its principles—a simple golden rule for privacy: that companies should put the interests of the people whom data is about ahead of their own. In some measure, such a general rule would bring privacy protection back to first principles: some of the sources of law that Louis Brandeis and Samuel Warren referred to in their famous law review article were cases in which the receipt of confidential information or trade secrets led to judicial imposition of a trust or duty of confidentiality. Acting as a trustee carries the obligation to act in the interests of the beneficiaries and to avoid self-dealing.

A Golden Rule of Privacy that incorporates a similar obligation for one entrusted with personal information draws on several similar strands of the privacy debate. Privacy policies often express companies’ intention to be “good stewards of data;” the good steward also is supposed to act in the interests of the principal and avoid self-dealing. A more contemporary law review parallel is Yale law professor Jack Balkin’s concept of “information fiduciaries,” which got some attention during the Zuckerberg hearing when Senator Brian Schatz (D-HI) asked Zuckerberg to comment on it. The Golden Rule of Privacy would import the essential duty without importing fiduciary law wholesale. It also resonates with principles of “respect for the individual,” “beneficence,” and “justice” in ethical standards for human subject research that influence emerging ethical frameworks for privacy and data use. Another thread came in Justice Gorsuch’s Carpenter dissent defending property law as a basis for privacy interests: he suggested that entrusting someone with digital information may be a modern equivalent of a “bailment” under classic property law, which imposes duties on the bailee. And it bears some resemblance to the GDPR concept of “legitimate interest,” which permits the processing of personal data based on a legitimate interest of the processor, provided that this interest is not outweighed by the rights and interests of the subject of the data.

The fundamental need for baseline privacy legislation in America is to ensure that individuals can trust that data about them will be used, stored, and shared in ways that are consistent with their interests and the circumstances in which it was collected. This should hold regardless of how the data is collected, who receives it, or the uses it is put to. If it is personal data, it should have enduring protection.


Such trust is an essential building block of a sustainable digital world. It is what enables the sharing of data for socially or economically beneficial uses without putting human beings at risk. By now, it should be clear that trust is betrayed too often, whether by intentional actors like Cambridge Analytica or Russian “Fancy Bears,” or by bros in cubes inculcated with an imperative to “deploy or die.”

Trust needs a stronger foundation that provides people with consistent assurance that data about them will be handled fairly and consistently with their interests. Baseline principles would provide a guide to all businesses and guard against overreach, outliers, and outlaws. They would also tell the world that American companies are bound by a widely-accepted set of privacy principles and build a foundation for privacy and security practices that evolve with technology.

Resigned but discontented consumers are saying to each other, “I think we’re playing a losing game.” If the rules don’t change, they may quit playing.

The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.


  • Banking Law
  • Insolvency Law
  • History of Law
  • Human Rights and Immigration
  • Intellectual Property Law
  • Browse content in International Law
  • Private International Law and Conflict of Laws
  • Public International Law
  • IT and Communications Law
  • Jurisprudence and Philosophy of Law
  • Law and Politics
  • Law and Society
  • Browse content in Legal System and Practice
  • Courts and Procedure
  • Legal Skills and Practice
  • Primary Sources of Law
  • Regulation of Legal Profession
  • Medical and Healthcare Law
  • Browse content in Policing
  • Criminal Investigation and Detection
  • Police and Security Services
  • Police Procedure and Law
  • Police Regional Planning
  • Browse content in Property Law
  • Personal Property Law
  • Study and Revision
  • Terrorism and National Security Law
  • Browse content in Trusts Law
  • Wills and Probate or Succession
  • Browse content in Medicine and Health
  • Browse content in Allied Health Professions
  • Arts Therapies
  • Clinical Science
  • Dietetics and Nutrition
  • Occupational Therapy
  • Operating Department Practice
  • Physiotherapy
  • Radiography
  • Speech and Language Therapy
  • Browse content in Anaesthetics
  • General Anaesthesia
  • Neuroanaesthesia
  • Clinical Neuroscience
  • Browse content in Clinical Medicine
  • Acute Medicine
  • Cardiovascular Medicine
  • Clinical Genetics
  • Clinical Pharmacology and Therapeutics
  • Dermatology
  • Endocrinology and Diabetes
  • Gastroenterology
  • Genito-urinary Medicine
  • Geriatric Medicine
  • Infectious Diseases
  • Medical Toxicology
  • Medical Oncology
  • Pain Medicine
  • Palliative Medicine
  • Rehabilitation Medicine
  • Respiratory Medicine and Pulmonology
  • Rheumatology
  • Sleep Medicine
  • Sports and Exercise Medicine
  • Community Medical Services
  • Critical Care
  • Emergency Medicine
  • Forensic Medicine
  • Haematology
  • History of Medicine
  • Browse content in Medical Skills
  • Clinical Skills
  • Communication Skills
  • Nursing Skills
  • Surgical Skills
  • Browse content in Medical Dentistry
  • Oral and Maxillofacial Surgery
  • Paediatric Dentistry
  • Restorative Dentistry and Orthodontics
  • Surgical Dentistry
  • Medical Ethics
  • Medical Statistics and Methodology
  • Browse content in Neurology
  • Clinical Neurophysiology
  • Neuropathology
  • Nursing Studies
  • Browse content in Obstetrics and Gynaecology
  • Gynaecology
  • Occupational Medicine
  • Ophthalmology
  • Otolaryngology (ENT)
  • Browse content in Paediatrics
  • Neonatology
  • Browse content in Pathology
  • Chemical Pathology
  • Clinical Cytogenetics and Molecular Genetics
  • Histopathology
  • Medical Microbiology and Virology
  • Patient Education and Information
  • Browse content in Pharmacology
  • Psychopharmacology
  • Browse content in Popular Health
  • Caring for Others
  • Complementary and Alternative Medicine
  • Self-help and Personal Development
  • Browse content in Preclinical Medicine
  • Cell Biology
  • Molecular Biology and Genetics
  • Reproduction, Growth and Development
  • Primary Care
  • Professional Development in Medicine
  • Browse content in Psychiatry
  • Addiction Medicine
  • Child and Adolescent Psychiatry
  • Forensic Psychiatry
  • Learning Disabilities
  • Old Age Psychiatry
  • Psychotherapy
  • Browse content in Public Health and Epidemiology
  • Epidemiology
  • Public Health
  • Browse content in Radiology
  • Clinical Radiology
  • Interventional Radiology
  • Nuclear Medicine
  • Radiation Oncology
  • Reproductive Medicine
  • Browse content in Surgery
  • Cardiothoracic Surgery
  • Gastro-intestinal and Colorectal Surgery
  • General Surgery
  • Neurosurgery
  • Paediatric Surgery
  • Peri-operative Care
  • Plastic and Reconstructive Surgery
  • Surgical Oncology
  • Transplant Surgery
  • Trauma and Orthopaedic Surgery
  • Vascular Surgery
  • Browse content in Science and Mathematics
  • Browse content in Biological Sciences
  • Aquatic Biology
  • Biochemistry
  • Bioinformatics and Computational Biology
  • Developmental Biology
  • Ecology and Conservation
  • Evolutionary Biology
  • Genetics and Genomics
  • Microbiology
  • Molecular and Cell Biology
  • Natural History
  • Plant Sciences and Forestry
  • Research Methods in Life Sciences
  • Structural Biology
  • Systems Biology
  • Zoology and Animal Sciences
  • Browse content in Chemistry
  • Analytical Chemistry
  • Computational Chemistry
  • Crystallography
  • Environmental Chemistry
  • Industrial Chemistry
  • Inorganic Chemistry
  • Materials Chemistry
  • Medicinal Chemistry
  • Mineralogy and Gems
  • Organic Chemistry
  • Physical Chemistry
  • Polymer Chemistry
  • Study and Communication Skills in Chemistry
  • Theoretical Chemistry
  • Browse content in Computer Science
  • Artificial Intelligence
  • Computer Architecture and Logic Design
  • Game Studies
  • Human-Computer Interaction
  • Mathematical Theory of Computation
  • Programming Languages
  • Software Engineering
  • Systems Analysis and Design
  • Virtual Reality
  • Browse content in Computing
  • Business Applications
  • Computer Security
  • Computer Games
  • Computer Networking and Communications
  • Digital Lifestyle
  • Graphical and Digital Media Applications
  • Operating Systems
  • Browse content in Earth Sciences and Geography
  • Atmospheric Sciences
  • Environmental Geography
  • Geology and the Lithosphere
  • Maps and Map-making
  • Meteorology and Climatology
  • Oceanography and Hydrology
  • Palaeontology
  • Physical Geography and Topography
  • Regional Geography
  • Soil Science
  • Urban Geography
  • Browse content in Engineering and Technology
  • Agriculture and Farming
  • Biological Engineering
  • Civil Engineering, Surveying, and Building
  • Electronics and Communications Engineering
  • Energy Technology
  • Engineering (General)
  • Environmental Science, Engineering, and Technology
  • History of Engineering and Technology
  • Mechanical Engineering and Materials
  • Technology of Industrial Chemistry
  • Transport Technology and Trades
  • Browse content in Environmental Science
  • Applied Ecology (Environmental Science)
  • Conservation of the Environment (Environmental Science)
  • Environmental Sustainability
  • Environmentalist Thought and Ideology (Environmental Science)
  • Management of Land and Natural Resources (Environmental Science)
  • Natural Disasters (Environmental Science)
  • Nuclear Issues (Environmental Science)
  • Pollution and Threats to the Environment (Environmental Science)
  • Social Impact of Environmental Issues (Environmental Science)
  • History of Science and Technology
  • Browse content in Materials Science
  • Ceramics and Glasses
  • Composite Materials
  • Metals, Alloying, and Corrosion
  • Nanotechnology
  • Browse content in Mathematics
  • Applied Mathematics
  • Biomathematics and Statistics
  • History of Mathematics
  • Mathematical Education
  • Mathematical Finance
  • Mathematical Analysis
  • Numerical and Computational Mathematics
  • Probability and Statistics
  • Pure Mathematics
  • Browse content in Neuroscience
  • Cognition and Behavioural Neuroscience
  • Development of the Nervous System
  • Disorders of the Nervous System
  • History of Neuroscience
  • Invertebrate Neurobiology
  • Molecular and Cellular Systems
  • Neuroendocrinology and Autonomic Nervous System
  • Neuroscientific Techniques
  • Sensory and Motor Systems
  • Browse content in Physics
  • Astronomy and Astrophysics
  • Atomic, Molecular, and Optical Physics
  • Biological and Medical Physics
  • Classical Mechanics
  • Computational Physics
  • Condensed Matter Physics
  • Electromagnetism, Optics, and Acoustics
  • History of Physics
  • Mathematical and Statistical Physics
  • Measurement Science
  • Nuclear Physics
  • Particles and Fields
  • Plasma Physics
  • Quantum Physics
  • Relativity and Gravitation
  • Semiconductor and Mesoscopic Physics
  • Browse content in Psychology
  • Affective Sciences
  • Clinical Psychology
  • Cognitive Psychology
  • Cognitive Neuroscience
  • Criminal and Forensic Psychology
  • Developmental Psychology
  • Educational Psychology
  • Evolutionary Psychology
  • Health Psychology
  • History and Systems in Psychology
  • Music Psychology
  • Neuropsychology
  • Organizational Psychology
  • Psychological Assessment and Testing
  • Psychology of Human-Technology Interaction
  • Psychology Professional Development and Training
  • Research Methods in Psychology
  • Social Psychology
  • Browse content in Social Sciences
  • Browse content in Anthropology
  • Anthropology of Religion
  • Human Evolution
  • Medical Anthropology
  • Physical Anthropology
  • Regional Anthropology
  • Social and Cultural Anthropology
  • Theory and Practice of Anthropology
  • Browse content in Business and Management
  • Business Ethics
  • Business Strategy
  • Business History
  • Business and Technology
  • Business and Government
  • Business and the Environment
  • Comparative Management
  • Corporate Governance
  • Corporate Social Responsibility
  • Entrepreneurship
  • Health Management
  • Human Resource Management
  • Industrial and Employment Relations
  • Industry Studies
  • Information and Communication Technologies
  • International Business
  • Knowledge Management
  • Management and Management Techniques
  • Operations Management
  • Organizational Theory and Behaviour
  • Pensions and Pension Management
  • Public and Nonprofit Management
  • Strategic Management
  • Supply Chain Management
  • Browse content in Criminology and Criminal Justice
  • Criminal Justice
  • Criminology
  • Forms of Crime
  • International and Comparative Criminology
  • Youth Violence and Juvenile Justice
  • Development Studies
  • Browse content in Economics
  • Agricultural, Environmental, and Natural Resource Economics
  • Asian Economics
  • Behavioural Finance
  • Behavioural Economics and Neuroeconomics
  • Econometrics and Mathematical Economics
  • Economic History
  • Economic Systems
  • Economic Methodology
  • Economic Development and Growth
  • Financial Markets
  • Financial Institutions and Services
  • General Economics and Teaching
  • Health, Education, and Welfare
  • History of Economic Thought
  • International Economics
  • Labour and Demographic Economics
  • Law and Economics
  • Macroeconomics and Monetary Economics
  • Microeconomics
  • Public Economics
  • Urban, Rural, and Regional Economics
  • Welfare Economics
  • Browse content in Education
  • Adult Education and Continuous Learning
  • Care and Counselling of Students
  • Early Childhood and Elementary Education
  • Educational Equipment and Technology
  • Educational Strategies and Policy
  • Higher and Further Education
  • Organization and Management of Education
  • Philosophy and Theory of Education
  • Schools Studies
  • Secondary Education
  • Teaching of a Specific Subject
  • Teaching of Specific Groups and Special Educational Needs
  • Teaching Skills and Techniques
  • Browse content in Environment
  • Applied Ecology (Social Science)
  • Climate Change
  • Conservation of the Environment (Social Science)
  • Environmentalist Thought and Ideology (Social Science)
  • Natural Disasters (Environment)
  • Social Impact of Environmental Issues (Social Science)
  • Browse content in Human Geography
  • Cultural Geography
  • Economic Geography
  • Political Geography
  • Browse content in Interdisciplinary Studies
  • Communication Studies
  • Museums, Libraries, and Information Sciences
  • Browse content in Politics
  • African Politics
  • Asian Politics
  • Chinese Politics
  • Comparative Politics
  • Conflict Politics
  • Elections and Electoral Studies
  • Environmental Politics
  • European Union
  • Foreign Policy
  • Gender and Politics
  • Human Rights and Politics
  • Indian Politics
  • International Relations
  • International Organization (Politics)
  • International Political Economy
  • Irish Politics
  • Latin American Politics
  • Middle Eastern Politics
  • Political Behaviour
  • Political Economy
  • Political Institutions
  • Political Methodology
  • Political Communication
  • Political Philosophy
  • Political Sociology
  • Political Theory
  • Politics and Law
  • Public Policy
  • Public Administration
  • Quantitative Political Methodology
  • Regional Political Studies
  • Russian Politics
  • Security Studies
  • State and Local Government
  • UK Politics
  • US Politics
  • Browse content in Regional and Area Studies
  • African Studies
  • Asian Studies
  • East Asian Studies
  • Japanese Studies
  • Latin American Studies
  • Middle Eastern Studies
  • Native American Studies
  • Scottish Studies
  • Browse content in Research and Information
  • Research Methods
  • Browse content in Social Work
  • Addictions and Substance Misuse
  • Adoption and Fostering
  • Care of the Elderly
  • Child and Adolescent Social Work
  • Couple and Family Social Work
  • Developmental and Physical Disabilities Social Work
  • Direct Practice and Clinical Social Work
  • Emergency Services
  • Human Behaviour and the Social Environment
  • International and Global Issues in Social Work
  • Mental and Behavioural Health
  • Social Justice and Human Rights
  • Social Policy and Advocacy
  • Social Work and Crime and Justice
  • Social Work Macro Practice
  • Social Work Practice Settings
  • Social Work Research and Evidence-based Practice
  • Welfare and Benefit Systems
  • Browse content in Sociology
  • Childhood Studies
  • Community Development
  • Comparative and Historical Sociology
  • Economic Sociology
  • Gender and Sexuality
  • Gerontology and Ageing
  • Health, Illness, and Medicine
  • Marriage and the Family
  • Migration Studies
  • Occupations, Professions, and Work
  • Organizations
  • Population and Demography
  • Race and Ethnicity
  • Social Theory
  • Social Movements and Social Change
  • Social Research and Statistics
  • Social Stratification, Inequality, and Mobility
  • Sociology of Religion
  • Sociology of Education
  • Sport and Leisure
  • Urban and Rural Studies
  • Browse content in Warfare and Defence
  • Defence Strategy, Planning, and Research
  • Land Forces and Warfare
  • Military Administration
  • Military Life and Institutions
  • Naval Forces and Warfare
  • Other Warfare and Defence Issues
  • Peace Studies and Conflict Resolution
  • Weapons and Equipment

Oxford Handbook of Digital Ethics

29 Privacy in Social Media

Andrei Marmor, Jacob Gould Schurman Professor of Philosophy and Law, Cornell University

  • Published: 10 November 2021

Most people’s immediate concern about privacy in social media, and about the internet more generally, relates to data protection. People fear that information they post on various platforms is potentially abused by corporate entities, governments, or even criminals, in all sorts of nefarious ways. The main premise of this chapter is that concerns about data protection, legitimate and serious as they may be, are not, mostly, about the right to privacy. Privacy is about control over the presentation of the self, not about protection of property rights. From the perspective of privacy as self-presentation, I argue that social media is, generally, very conducive to privacy—in fact, often too much so. Social media enables a great deal of privacy at the expense of truth and authenticity. But the medium also comes with dangers of exposure that carry serious risks to privacy, potentially undermining people’s ability to control what aspects of themselves they present to others. Privacy in social media is a mixed bag, containing different goods and dangers pulling in opposite directions.

Introduction

Ms Lisa Li, a famous young influencer in China who flaunted a glamorous and lavish lifestyle to over one million followers, became rather notorious overnight. Her landlord, upset by Ms Li’s unpaid bills and failure to clean up her apartment, exposed her absolutely squalid living conditions to the world. A video posted by the landlord showed Ms Li’s sordid apartment, with dog faeces in the living room, and generally so filthy that allegedly even professionals refused to clean the place. Not surprisingly, the reaction on social media was instantaneous: hostile posts, countless expressions of outrage, and tens of thousands of people unfollowing her overnight. Ms Li seems to have since survived the media onslaught and recovered her reputation, but her story encapsulates many of the privacy issues that come up in social media. It exemplifies how a young woman of modest means can turn herself into a social media celebrity, presenting to the world a personal lifestyle far removed from reality. But it also shows how a reputation gained over years of hard work can be shattered in an instant, turning fame and glamour to ridicule and outrage overnight. 1

You may wonder why any of these issues involve moral concerns about the right to privacy. After all, most people’s immediate concern about privacy in social media, and internet platforms more generally, relates to data protection. People fear that information they post on various platforms, explicitly or implicitly, is gathered, compiled, and potentially abused by corporate entities, governments, or even criminals, in all sorts of nefarious ways. 2 On the contrary, I am going to argue in this chapter that concerns about data protection, legitimate and serious as they may be, are not, mostly, about the right to privacy. Privacy is about the presentation of the self, not about protection of proprietary rights. And I am going to argue that social media is, generally, conducive to privacy—in fact, often too much so. The main tension in the domain of social media is between privacy and authenticity: social media enables a great deal of privacy at the expense of authenticity. But it also comes with dangers of exposure that carry risks to privacy. On the whole, then, the state of privacy in social media is a mixed bag. Social media is generally conducive to privacy, often too much so; and it also comes with serious risks to privacy, even if, as is often the case, those risks are self-imposed.

What is the right to privacy and how does it conflict with authenticity?

In previous work, I have argued that the main interest protected by the right to privacy is our interest in having a reasonable measure of control over the ways we present aspects of ourselves to different others ( Marmor 2015 ). Having a reasonable amount of control over the various aspects of ourselves that we present to different others is essential for our well-being; it gives us the necessary means to navigate our place in the social world and to have reasonable control over our social lives. We need the ability to maintain different types of relationships with different people, and that would not be possible without having control over how we present ourselves to different others. Different types of relationships are constituted by different types of expectations about what aspects of ourselves we reveal to each other. Intimate relationships and friendships, for example, are partly constituted by expectations of sharing information and revealing aspects of ourselves that we would not be willing to share with strangers. But we cannot live in a social world that requires constant intimacy either; the possibility of dealing with others at arm’s length, keeping some distance, is as important as the opportunity for intimate relationships. Additionally, we also need some space to engage in various innocuous activities without necessarily inviting social scrutiny. For all these, and similar, reasons, it is essential for our well-being that we have a reasonable level of control over which aspects of ourselves we reveal to different others. This is the main interest that is protected by the right to privacy.

The interest in the protection of personal data that we post or reveal on internet platforms is typically an interest in protecting our property. There is a huge amount of information about ourselves, and our possessions, that we reveal, often without our knowledge, by using the internet, smartphones, and such. But there are two kinds of concerns about this information falling into the wrong hands, as it were. For one, there is the fairly straightforward concern about theft. A great deal of the information that we allow internet platforms to use or to store can be used to steal our financial ‘identity’, empty our bank accounts, charge us for goods and services we had not ordered, and all sorts of similar proprietary misdeeds. These concerns have very little to do with privacy. When someone uses or takes something that belongs to you without your permission, they violate your right to property, not to privacy ( Thomson 1975 ). 3

The second kind of concern relates to so-called ‘big data’ collected by corporations about our consumer profiles, interests, and habits. And this is a tricky matter from a privacy perspective. Most often the kind of information that is gathered, if looked at in isolation, is not the kind of fact about us that we can legitimately expect to keep to ourselves. When you go out to buy a pair of shoes in a store you cannot expect not to be observed by others. Buying those same shoes online should make no difference in this respect. After all, it needs to be charged to your credit card and delivered to your home. In isolation, there is nothing here that should make anyone worry about their privacy. Problems begin to surface with extensive and repeated data collection, that is, when somebody (or some computer algorithm, to be more precise) collects, analyses, and stores data about everything you buy; and perhaps everywhere you happen to go with your smartphone, and every phone number you call up, and so on and so forth. This is when people begin to worry about their privacy, and to some extent, rightly so. But as I will try to show later, this worry is not easy to articulate and it is subject to reasonable disagreement. Before we get there, however, other aspects of privacy in social media will be explored. I’ll get back to the big data question towards the end.

Let us return to the main interest protected by the right to privacy. It is crucial to note that the level of control over which aspects of ourselves we reveal to others needs to be reasonable, not limitless. Having too much control over what aspects of one’s self one can reveal to others compromises authenticity. But this is complicated. On the one hand, there seems to be nothing wrong with withdrawing from the social world, living your life without anyone knowing anything about you. Perhaps your life would not be as rich and rewarding as it could have been, but you commit no wrong by imposing seclusion on yourself. On the other hand, it does seem to be wrong, in some sense, if you manage to get people to believe that you are something quite different from what you really are. An intensely selfish person who manages to get people to believe that she is generous and kind engages in a form of deceit that we may rightly frown upon, even criticize and condemn. Being intensely selfish is bad enough; creating the false impression in others that you are generous makes things even worse. Now, it might be tempting to think that the distinction here pertains to the difference between not revealing things about yourself, which is normally permissible, and actively presenting yourself in ways that are not your authentic self, that is, creating false impressions, which is often wrong. But this action-omission distinction is not going to do all the work here. There are ways of not being truthful or authentic by just keeping quiet. If you are mistakenly introduced at a party as somebody else, then keeping quiet about it might be as much of a lie as knowingly telling a falsehood. But that does not mean that failing to reveal the truth about yourself is always wrong; far from it. Most people normally want to look and seem better than they are, and there is nothing wrong with that. You do not have to post the most authentic selfie on your Instagram page; posting a particularly flattering one is not deceitful.

The story of Ms Lisa Li, however, is a good reminder that authenticity on social media is compromised well beyond flattering pics and self-congratulating presentations. The main danger facing the value of authenticity in the social media context is that the distinction between truth and fiction gets blurred; one often does not know, and many people seem not to care all that much, what is presented as truth and what is clearly just fiction. This is not a threat to privacy. On the contrary, it is often too much privacy at the expense of truth and authenticity. The following section explains both of these claims.

Privacy, authenticity, and fiction

Social media, like Facebook, Twitter, Instagram, and similar platforms, enables people to present aspects of themselves to others in ways that they could not have done without these tools. It enables people to reach a very wide audience at very low cost and almost instantaneously; but more importantly for our concerns here, social media gives people a tremendous amount of choice and control over what aspects of themselves they present to others, including the option of presenting totally fictitious ‘aspects’ of themselves, inventing a public persona that may have very little to do with reality. Even ordinary users of Facebook, who just want to connect with their friends, tend to post aspects of their lives rather selectively, conscious of constructing an image of their lives in the form they wish their audience to perceive. In actuality, the range of self-construction here is very wide, from minor self-flattering images or posts to outright large-scale deceit, with the whole spectrum in between. Since the main interest protected by the right to privacy is precisely the interest in having control over what aspects of yourself you present to different others, it would seem that social media, quite generally, is very conducive to privacy. It enables people to have a great deal of control over their self-presentation, much greater in scope than hitherto possible. 4 Hence, the first question here is not whether social media threatens privacy, but whether it enables it too much: do we get to have too much control over what aspects of ourselves we present to others?

Part of what makes answering this question difficult is the fact that the kind of creative construction of the self enabled by social media is common knowledge. Everybody knows that the persona I present on Facebook or Instagram is somewhat constructed, that it does not necessarily reflect reality. Both users and consumers of the medium realize that fact and fiction are mixed up; it is part of the game, as it were. In other words, people do not necessarily expect full authenticity on social media; they seem to be content to create and to consume the presentations of partly fictitious selves for the sake of other values, knowing, at least at the back of their minds, that authenticity is not assured. 5 But that does not, by itself, settle the question of whether too much authenticity is sacrificed here; even if the sacrifice of truth and authenticity is on the surface and willingly consumed, it might still be a bad state of affairs.

Authenticity might mean different things in different contexts. In one sense, people think of authenticity in terms of a match between one’s deep self, one’s deep character traits, true desires, etc., and the life one lives. An inauthentic person, on this understanding, is one whose desires, plans, and aspirations in life do not quite match what she really is, deep down, as it were. If I tell myself that I love doing philosophy and live my life with that story, while the truth is that deep down I am not all that interested in philosophy, then I am not authentic, in this sense. However, this deep sense of authenticity is not what I am going to refer to here; what I have in mind is a shallower sense, one that refers to the truth or falsehood of one’s self-presentation to others. You fail to be authentic, on this shallow conception, if you present yourself to others in a way that is, as a matter of fact, false about you. The main difference between the deep and the shallow conceptions of authenticity is that the deep form of inauthenticity involves self-deception, while the shallow sense of it does not necessarily involve any self-deception; the deception or inauthenticity in the shallow sense can be entirely self-conscious. In both cases, however, the value of authenticity is very closely tied with the value of truth. In the deep sense, it is the truth to yourself, truth about what you really want, what you really care about, and things like that. In the shallow sense, and the one that is relevant to our concerns here, the truth in question is public. A presentation that is inauthentic is one that attempts to induce others to have false (or grossly inaccurate) beliefs about certain aspects of your self.

To be sure, I am not assuming here that revealing the truth about one’s self is always valuable or that any type of deception is bad. Far from it. As Thomas Nagel (1998) famously argued, it is often the case that telling the truth is the wrong thing to do. In fact, life would be rather unpleasant, almost unbearable, if people told each other everything that comes to their mind. Just imagine telling everyone you encounter what you really think about them; in many cases, they do not need to know, and often would rather not hear. Something similar applies to the presentation of your self; people do not need, and often do not want to know, everything that goes on in your mind (or your body, for that matter). In other words, authenticity (in the shallow sense, as henceforth used) is not always valuable, and one is not ethically or morally required to be authentic at all times.

The previous considerations suggest that privacy and authenticity are often in some inherent conflict or tension. The moral aspects of this conflict play out differently in the domain of personal presentation and the domain of public discourse. Social media, as currently used, spans both private lives and public-political discourse, and these two raise somewhat different moral concerns. In both cases, the underlying concern is the blurring of the distinction between fact and fiction. But the wider moral implications of this blurring of boundaries are quite different. Let me acknowledge, however, a further complication before we proceed. Social media blurs not only the distinction between fact and fiction; it also blurs the distinction between the personal and the public. Nowhere is it more evident than in the proliferation and tremendous impact of influencers. The whole phenomenon of influencers is based on turning the personal into a commercial or social endeavour, sometimes into commercial business, pure and simple. Therefore, the contrast between personal presentations on social media and public-political discourse spans a wide spectrum; most of the social media use is somewhere in the middle, involving elements of both.

One thing we have learned in the past few years is that social media enables a huge amount of staggeringly false and misleading political speech, directly and indirectly. And we have learned that these falsehoods are not idle. Millions of people seem to be influenced by fake news and incredible falsehoods of all kinds, to the extent that they may have tilted the results of elections and other democratic processes. There are very serious concerns here, and they may force us to rethink some of our established views about free speech and democracy. Perhaps, but this is not the topic of the present chapter. I will leave the discussion of social media and politics to others (see Neil Levy’s chapter ‘Fake News: Rebuilding the Epistemic Landscape’).

Let us return to the presentation of the self in social media. What we seem to have here is a domain of endless possibilities of self-construction, ranging from mild manipulation of reality to outright fiction or deception. One curious aspect of the story of Ms Lisa Li, mentioned at the start, is not so much that she lost many of her followers instantaneously, which she did, but the fact that she did not really lose most of them—far from it. Hundreds of thousands of people remained loyal to her, despite the fact that she turned out to be a rather different person from the glamorous social media persona she had depicted. It seems that many of her followers just did not care. That would be surprising only if you thought that consumers seek the truth, as if they wanted to follow the real Lisa Li. But evidently that is not what her followers were after; they were seeking to share a dream, a kind of visual fiction, and so, when they come to learn that parts of that fiction are not real, they are not all that surprised or disappointed; fiction, after all, is not supposed to be real. There is nothing morally problematic about the desire to consume fiction, whether on social media or elsewhere. But what if the distinction between fiction and reality gets rather blurred? What if people lose interest in the distinction itself, not caring all that much about whether something they are told or shown purports to be fact or fiction? There is clearly something disturbing about this, but it is not easy to pin down what it is.

The difficulty stems from the possible argument that if you do not care about whether a story is fact or fiction, in essence you are treating it as if it were fiction. A story that is consumed as potentially fiction is treated by the consumer on a par with fiction. And if this is true, then perhaps there is nothing wrong with social media blurring the distinction between fact and fiction, as long as people are, by and large, aware of it. Furthermore, if you think about it, there is nothing new here. Capitalist consumerism is based on aggressive advertising, selling us dreams and fantasies in order to sell us products and services. Instagram influencers sell a constructed image of themselves in order to sell products. It is essentially the same idea. Or, not quite, perhaps. As much as we might want to criticize rampant consumerism and all the advertising industry that keeps it afloat, the advertising industry is what it purports to be: an industry aiming to sell you stuff that you may not really need or did not think you even wanted. One problem that seems to plague the social media domain is, yet again, a blurring of the boundaries between what is clearly commercial advertising and what is personal, social, or even political.

But I still have not answered the question of what is morally problematic about the blurring of these distinctions on social media. Perhaps it is a good thing that distinctions between fact and fiction, personal and public, entertainment and consumerism, are getting blurred by social media. Challenging established categories and conceptual divisions is how social changes occur over time; it is often what social movements aim to accomplish. Not all social changes are for the better, of course, but many of them are. Furthermore, if the social media world enhances people’s privacy interests, giving them more control over what aspects of themselves they present to different others, all the better, not so? I am far from sure, however, that moral complacency is warranted. Perhaps many of the categories getting blurred on social media provide new opportunities to people, and empower hitherto underprivileged segments of the population. 6 So there are, quite clearly, some good effects here. But the erosion of the value of truth is not so innocuous. If the interest in truth gets eroded, this erosion is not going to remain confined to the use of social media; it is very likely to pervade personal and public life on a much wider scale. The more socially acceptable it becomes to mix fiction with truth without accountability, the less responsibility people are going to feel for truth in general, both in their personal lives and in their civic engagements. This cannot be a good development.

Let me emphasize, however, that the erosion of truth on social media is very intimately linked to its privacy-enhancing aspect; it is not a coincidence that in a world in which there is much more privacy, there is less concern for the truth. As I have mentioned, the essence of the right to privacy is the right not to tell the truth, at least not all of it. Control over what aspects of yourself you reveal to others is needed precisely because we have legitimate interests in not being all that forthcoming with revealing aspects of ourselves or our lives to others. Privacy and authenticity are inherently in some conflict or tension.

Social media as a tool against privacy

The picture I have depicted so far has been one-sided; I have focused on the privacy-enhancing aspects of social media. But social media is also used for opposite purposes; it is sometimes used to deliberately undermine someone’s privacy, exposing them to the world in ways they do not want to be perceived. 7 I will focus on cases called ‘doxing’, whereby social media users, often in a group, target an individual or a group of individuals for the purpose of public shaming or, in some extreme cases, even harassment or intimidation. Two main aspects of the medium enable this practice: the availability of a huge amount of information on people online, and the ability to reach a very wide audience at very low cost. 8 Individuals are not the only targets of doxing—sometimes governments are too. Wikileaks is a case in point. But I will bracket these governmental or even corporate targets, and focus on the practice of doxing targeting individuals.

Let us start with a simple story. Suppose you happen to know that your friend is cheating on his wife. You keep quiet for a while, but at some point the friendship turns sour and you decide to tweet about your friend’s infidelity with details and all; you know that your friend, and his wife, and your mutual friends, are all following you on Twitter. I presume that we would think that you had misbehaved; gravely so, perhaps. Even if you had a reason to tell your friend’s spouse about her husband’s infidelity, that is something you should have told her in private; sharing it with many is a deliberate act of shaming. Shaming is an act of humiliation, and as such, pro tanto wrong. Unless there is a very good reason to bring shame on someone, it is wrong on a par with deliberate humiliation, a demeaning speech act, striving deliberately to put someone down. Now, of course, the problem with social media is that it technically allows for public shaming on a very large scale. Someone can find out something embarrassing about you and post it on some social media platform or other, rendering the information public instantaneously. Furthermore, it is often very difficult for the targeted individual to rebut the shaming information, even if it is, actually, false. Once a rumour or an image is out there, it is almost impossible to make it go away.

Doxing, by its very nature, would seem to be a violation of privacy; it is done with the explicit aim of revealing to others information about their target that the individual in question would rather not expose, at least not to the public at large. But it does not necessarily follow that doxing is always an unjustified violation of the target’s right to privacy. This is for two reasons: it might not be a violation of the right to privacy at all, and even if it is, it might be a case of a justified violation of a right. Let me explain briefly both of these points.

J. Thomson (1975: 307) argued a long time ago, and correctly so in my mind, that nobody can have a right that some truth about them not be known. We cannot have proprietary rights over truths about us, or about anything else, for that matter. 9 The right to privacy, I argued (contra Thomson), is there to protect our interest in having a reasonable measure of control over the ways in which we present ourselves to others. The protection of this interest requires securing a reasonably predictable environment concerning the flow of information and the likely consequences of our conduct in the relevant types of contexts. On my account of the right to privacy, such a right is violated when somebody manipulates, without adequate justification, the relevant environment in ways that significantly diminish your ability to control what aspects of yourself you reveal to others. One typical case is the following: you assume, and have good reason to assume, that by φ-ing you reveal F to A; that is how things normally work. You can choose, on the basis of this assumption, whether to φ or not. Now somebody would clearly violate your right if he were to manipulate the relevant environment, without your knowledge, making it the case that by φ-ing you actually reveal F not only to A but also to B et al., or that you actually reveal not just F but also W to A (and/or to B et al.), which means that you no longer have the right kind of control over what aspects of yourself you reveal to others; your choice is undermined in an obvious way (Marmor 2015: 14).

Given this account, it would seem that a case of doxing is typically a violation of one’s right to privacy. If you are the target of doxing, your environment is manipulated by others so that the information you reveal about yourself, knowingly or unknowingly, spreads well beyond its intended or reasonably predicted audience. That is clearly a case in which you lose control over what aspects of yourself you reveal to whom. The difficult or borderline cases are those in which doxing reveals information about someone that is publicly available anyway. Suppose, for example, that you are perceived by your acquaintances as a person of modest means, and yet you buy a very expensive piece of real estate, a transaction that you would rather keep to yourself. In many jurisdictions, ownership of real estate is a matter of public record. (And let us assume that there are good reasons for that.) Somebody can easily look it up and post the information on social media, perhaps to embarrass you. They post something that is a matter of public record, even if, normally, people do not bother to spend their time looking for this kind of information. I suspect that people would have different intuitions about this case. It may not be the right thing to do, of course, but I am not sure that it amounts to a violation of any right of yours. However, once again, the problem in the social media context is the issue of intensity and scale. Targeted doxing usually involves extensive efforts and considerable investment of time and energy in gathering information that, even if publicly available, is not available without such deliberate and extensive research. 10

The situation here is very similar to the question of privacy in public spaces (Marmor 2015: 20–21). 11 When you walk around on Main Street you cannot have a privacy expectation not to be observed by indefinite others. By walking on the street you obviously make yourself observable and there is nothing problematic about that from a privacy perspective. But suppose somebody is following you with a video camera, recording your movements for a while, and posting that on YouTube. Now it might seem a violation of your right to privacy, even if the recording was done in a public space. Why is that? Presumably, because making yourself observable in a public space is not an invitation, or even consent to, becoming an object of gaze or surveillance. The concern here is about attention and record-keeping. When you take a walk on Main Street, you are perfectly aware of the fact that you have no control over who happens to be there and thus is able to see you; but you also rely on the fact that people’s attention and memory are very limited. You do not expect to have every tiny movement of yours noticed and recorded by others. In other words, consent to public exposure is not unlimited. Voluntarily giving indefinite others the opportunity to see you is not an invitation, or even tacit consent, to gaze at you, and certainly not a consent to record your doings, digitally or otherwise.

But what if expectations actually change, and people come to know that certain public spaces are subject to extensive surveillance? What if we are all well informed, for example, that all the streets in our town are covered with CCTV cameras that record everything everywhere? Would that violate our right to privacy in public spaces? I think that the answer is ‘Yes’, because there is another way in which one’s right to privacy can be violated, that is, by diminishing the space in which we can control what aspects of ourselves we reveal to others to an unacceptably small degree in an important domain of human activity (Marmor 2015: 14). Needless to say, what counts as a violation of privacy in this respect is bound to be controversial and often difficult to determine. Presumably, there are two main factors in play: the relative importance of the type of activity in question (e.g. walking on ordinary streets versus entering a particular building), and the level of diminished control over concealment or exposure. Either way, we should recognize that people’s right to privacy can be violated even in public spaces (Véliz 2018).

Many cases of doxing based on exposure of data collected from public records involve essentially the same moral issue. Living in a world in which there is a great deal of information about us on public records might not be a problem in and of itself. But becoming the focus of targeted attention based on information-gathering from public records amounts to a form of surveillance, quite possibly violating the right to privacy. Such actions diminish, sometimes very considerably, your ability to control what aspects of yourself you reveal to others. And that is so because we can normally expect ordinary others to have limited interest, attention, and resources for digging up information on us that is stored somewhere somehow.

That doxing often, if not quite always, involves a violation of the target’s right to privacy does not necessarily entail that it is never justified, all things considered. Possible moral justifications of rights violations are multifarious and greatly depend on circumstances. 12 Sometimes a rights violation is justified when it is required in order to secure a conflicting right that ought to prevail under the circumstances. Sometimes a right may be justly violated in order to secure a common good of greater moral significance. Suppose, for example, that in order to expose a politician’s staggering hypocrisy you need to violate their right to privacy. Still, the exposure might serve the common good and democratic values to an extent that justifies the violation.

It is a common doctrinal principle of libel law in many jurisdictions that the more of a public persona one enjoys, the less protection from libel one can legally expect. Something similar, at least morally speaking, may well apply to the protection of privacy. The more you deliberately and voluntarily expose yourself to the public, as it were, perhaps the less protection of your right to privacy you can legitimately expect. But this principle, if a principle it is, should come with an important caveat: it would be quite unjustified to expose facts about a public persona by violating their right to privacy if the facts disclosed are not related to what makes the person famous. If a politician thrives on gay-bashing and promoting ‘family values’, exposing the fact that the politician is himself gay might be quite justified. But the same would not be true of, say, a famous scientist; if the scientist’s claim to fame has nothing to do with sexuality or anything remotely relevant to it, exposing their sexual preferences against their wish cannot be justified. If the fact you expose about a scientist has something to do with their scientific integrity, however, that may be justified. Admittedly, the distinction between facts about a public persona that are relevant to their public status and those that are not is sometimes difficult to draw. However, since we are talking about the justified violation of a right here, the justification for violating the right to privacy needs to be fairly robust. This means that, in cases of doubt, when it is not entirely clear that the disclosure in question is relevant to the person’s public status, the doubt should count in favour of respecting the right to privacy.

Perhaps now you wonder about the case of Ms Li: if her landlord violated her right to privacy, which may well have been the case here, was it a justified violation of her right? Are facts about Ms Li’s sordid living conditions relevant to her claim to fame? My own sense is that the answer is probably ‘Yes’, but I can see that this might be contentious. The more you think that influencers such as Ms Li are selling fiction or fantasy, the less relevant her own life is to the persona she creates on social media.

Social media, big data, and privacy

We now seem to live in a world in which almost everything we do, everywhere we go, and everything we buy, is recordable by some computerized system or other. And much of it, if not most, is actually recorded, aggregated, sorted, and often sold by various systems (Zuboff 2019). Our digital footprint is ubiquitous, and easily utilized by interested parties. That governments may have access to all this information is a serious reason for concern. Governments that have no great respect for democracy and human rights have gained powerful tools that they can use for political oppression, and even democratic and decent regimes may occasionally succumb to the temptation to use such information in ways that violate people’s rights. 13 The serious political hazards involved in the recording of our digital footprints threaten many of our rights and freedoms, but not necessarily, or even primarily, our right to privacy. Political oppression violates more serious and urgent rights than the right to privacy. There are countries in which people are detained and thrown into jail for things they post on social media; when you find yourself in jail for something you’d posted on Facebook, the violation of your right to privacy is the least of your concerns. Generally speaking, the dangers of government surveillance go far beyond threats to our privacy; they threaten our basic civil and human rights.

Political oppression, however, is not the topic of this chapter. I will therefore bracket the dangers of big (and small) data collection by governments and focus on the private market. 14 One major development of the digital age is the commodification of our consumer profiles. We leave a huge digital footprint on a daily basis about our consumer behaviour: the things we buy, places we visit, interests we express, movies we stream, even the things we search for on Google, all indicate our tastes and desires, and our willingness to pay for this or that. The ability of computers to store and analyse this information renders our consumer profiles a commodity that can be bought and sold, something that has a market value. Mostly, I presume, it is valuable to corporations for marketing purposes, targeting their marketing efforts in ways that are tailored to our tastes and preferences. All this digital analysis of consumer profiles is not done by people; there is nobody sitting there in front of a computer, thinking, ‘Oh, I see that Professor Marmor likes Borsalino hats. Let’s send him some ads about the latest models.’ Targeted advertising is automated. Our consumer profiles are generated and commodified on a huge scale, and analysed by complex algorithms that handle hundreds of millions of data points. This system makes the concern about privacy rather tricky here.

Let me focus exclusively, however, on the market use of people’s digital footprints for commercial purposes. Is targeted advertising, based on our digital footprints, a threat to consumers’ privacy? For the sake of simplicity, let us assume that all this data on our consumer profiles is collected without our ex ante consent. 15 So here is a simple and fairly standard example: you post on your Facebook page that you are considering a trip to Paris this summer, intending, for some reason or other, to share this information with your friends. Soon enough (very soon, in my experience) you start getting advertisements on your Facebook page about hotels in Paris, flights to Paris, etc. For many people, there is something spooky about this; it feels as if someone is watching your Facebook posts and sending you ads in response. But as I mentioned, that is not the case, and besides, this feeling of spookiness is not shared by all. Many people are perfectly fine with getting these targeted ads; they do not care that some fancy algorithm enables advertisers to do that. Furthermore, and this is a crucial factor that is sometimes forgotten, there is a commercial transaction here in the background: we get to use the social media tools offered by these corporations free of charge, in exchange for subjection to targeted advertising. It is a contract, and the contract, on its face, does not seem to be obviously unfair or exploitative. 16

As in many cases of rapid technological development, it may have taken a while for most of us, users of social media and other internet platforms, to realize that our digital footprints have become merchandise in their own right, bought and sold by companies for commercial purposes. But I think that now we know this to be the case, and I think that most people understand that the commercial value of our consumer behaviour is priced into the services we get, its market value paying for our free use of social media and internet tools. In principle, the situation here is no different from other, more mundane contexts, in which the market value of a captive audience is priced into the products we buy. When you go to the cinema to watch a movie, you are subjected to about twenty minutes of ‘previews’ and other ads; if cinemas had to forgo this practice, presumably our movie tickets would end up costing us more.

But now, you may wonder, where is the threat to privacy in this commodification of our consumer profiles? There are aspects of this new world of commodification of our habits that are certainly troubling. The fact that data collected on one’s consumer habits is bought and sold by corporations for commercial use might raise concerns about the overreach of capitalism, and targeted advertising surely augments the concerns we have about commercial advertising generally, structuring our preferences and desires in questionable ways. But none of this seems to be a threat to privacy. My ability to control the ways in which I present myself to different others is not undermined by these commercial practices. Of course, there might be some threats to privacy on the margins. If you start getting ads for a product you do not want others to know about, then if somebody happens to see your computer screen with those ads displayed, they may get to know something about you that you would have rather kept to yourself. But these are marginal cases, and they may come up in countless other contexts. My guess is that most people are concerned about the potential for the abuse of information that is commercially transacted; they fear that it might fall into the wrong hands. Perhaps the government might get hold of your habits or whereabouts in ways that might put you in a vulnerable position; or perhaps rogue agents may use this information to hack into your assets and steal your possessions. These are serious concerns, for sure, but, as I have tried to argue here all along, they are not concerns about the right to privacy. Strange as it may sound, the commodification of our consumer profiles and, even more generally, big data collection threaten many of our rights and freedoms, but the right to privacy is not the primary concern.

Acknowledgements

I am indebted to Alicia Patterson for research assistance on this chapter, and to Carissa Véliz for helpful comments.

The story about Lisa Li has been widely reported by news outlets, e.g. https://www.bbc.com/news/world-asia-china-49830855, accessed 10 August 2021. There are many other similar cases, such as a vegan influencer caught eating meat, or a middle-aged YouTube celebrity who used an image-modifying camera to make herself appear much younger than she was. These are simpler cases of outright deceit. I am using the example of Lisa Li, however, precisely because it is a little more ambiguous and complex.

For detailed accounts, see, e.g. Nissenbaum (2010: chs 1–3) and Zuboff (2019).

Following a long Lockean tradition, many philosophers assume that we have property rights in ourselves. Others find the idea of self-ownership fraught with difficulties, perhaps even incoherent. However, delving into this philosophical morass would be far beyond the scope of this chapter, and not quite needed. Even those who find the idea of self-ownership appealing would still want to maintain a distinction between the right to privacy and the right to property. A notable exception is Thomson (1975). I responded to Thomson’s argument in Marmor (2015).

See, e.g. Cocking and Van Den Hoven (2018: ch. 3). For a more sceptical take on this view, see Marwick and Boyd (2011), who argue that social media makes it difficult for users to understand and navigate social boundaries. Notice, however, that the right to privacy, on my account, is a control right; that does not mean, of course, that people necessarily exercise their control judiciously or wisely. The fact that many tend to post things on social media that, upon reflection, they should not have revealed, or that they come to regret, does not count against the fact that they exercise their right.

Up to a point, it would seem. Researchers found that there is a correlation between the time people (especially teenagers) spend on Facebook and depression. One speculation is that people do not quite internalize the fact that the rosy picture of others’ lives they see on social media is actually constructed, and thus feel demoralized or depressed by the comparison to their own humble existence; see, e.g. Steers et al. (2014).

The tremendous proliferation of influencers would seem to attest to the fact that countless opportunities arise here, often for people who would otherwise have much more limited options. But the reality is slightly more complex; see Duffy (2017).

The most vulgar and unfortunately prevalent example is the posting of nude photos or videos of women (mostly) without their consent, on dubious porn sites and other internet outlets. These are obvious and blatant violations of privacy that ought to be criminalized and prosecuted. (As with everything else, there are some borderline cases, of course, when there was some qualified consent but the terms of it are allegedly breached or abused; those cases are more complicated.)

For a detailed account of doxing, and its different types and practices, see Douglas (2016).

I am aware of the fact that Thomson’s thesis is controversial, but I defended this particular view in some detail in Marmor (2015: 4–6).

For an excellent account of the considerations involved in such cases, see Rumbold and Wilson (2019). On their account, the question of whether you intend to make some information public and accessible to others is of crucial importance to the question of whether your right to privacy has been violated or not. I am slightly more sceptical about the role of intention here; there might be cases in which even if you did not intend to allow people to have access to some information available on you online, you should have known better than to rely on its concealment. Generally, however, I am largely in agreement with their account.

For a somewhat different account of the right to privacy in public spaces, see Véliz (2018).

Some philosophers use the word ‘violation’ of a right only when it is not justified, calling justified violations ‘infringement’ of a right. There is no uniformity of usage in the literature, however, and I will not adhere to this terminological distinction. The idea itself is clear enough, and as old as the literature on rights generally. With the exception of Kant, perhaps, no one argues that rights have absolute normative force.

See, e.g. Richards (2013).

The separation is somewhat artificial, of course, since one of the dangers of data collected by private corporations is that governments can force them to hand over their data.

There are now some jurisdictions that strive to change that by law; California recently enacted a law (California Consumer Privacy Act, 2019) requiring retailers to seek customers’ explicit consent for selling their consumer profiles to others. How much of an actual change in the commodification of consumer profiles this will bring about remains to be seen.

I am talking about the principle here, not the details or the legal aspects of it. Many lawyers have reservations about the lack of transparency in such contracts and about the fact that most consumers are unaware of their contents. See, e.g. Hoofnagle and Whittington (2014).

Cocking, Dean, and Van Den Hoven, Jeroen (2018), Evil Online (Oxford: Wiley Blackwell).

Douglas, David (2016), ‘Doxing: A Conceptual Analysis’, Ethics & Information Technology 18, 199.

Duffy, Brooke E. (2017), (Not) Getting Paid for What You Love: Gender, Social Media, and Aspirational Work (New Haven, CT: Yale University Press).

Hoofnagle, Chris J., and Whittington, Jan (2014), ‘Accounting for the Costs of the Internet’s Most Popular Price’, UCLA Law Review 61, 606.

Marmor, Andrei (2015), ‘What is the Right to Privacy?’, Philosophy & Public Affairs 43, 1.

Marwick, Alice E., and boyd, danah (2011), ‘I Tweet Honestly, I Tweet Passionately: Twitter Users, Context Collapse, and the Imagined Audience’, New Media & Society 13, 114.

Nagel, Thomas (1998), ‘Concealment and Exposure’, Philosophy & Public Affairs 27, 3.

Nissenbaum, Helen (2010), Privacy in Context (Redwood City, CA: Stanford University Press).

Richards, Neil (2013), ‘The Dangers of Surveillance’, Harvard Law Review 126, 1934.

Rumbold, Benedict, and Wilson, James (2019), ‘Privacy Rights and Public Information’, The Journal of Political Philosophy 27, 3.

Steers, Mai-Ly, Wickham, Robert, and Acitelli, Linda (2014), ‘Seeing Everyone Else’s Highlight Reels: How Facebook Usage Is Linked to Depressive Symptoms’, Journal of Social and Clinical Psychology 33, 701.

Thomson, Judith J. (1975), ‘The Right to Privacy’, Philosophy & Public Affairs 4, 295.

Véliz, Carissa (2018), ‘In the Privacy of Our Streets’, in Bryce Clayton Newell, Tjerk Timan, and Bert-Jaap Koops, eds, Surveillance, Privacy and Public Space (London: Routledge), 16.

Zuboff, Shoshana (2019), The Age of Surveillance Capitalism (London: Profile Books).

Can We Protect Our Privacy on Social Media?

There's a hidden cost to our free accounts on Facebook, Instagram, Snapchat, and other social media platforms: our privacy. In this lesson, students learn about and discuss how corporations make a profit from our data, potential policy solutions, and how young people are making their own decisions about online privacy.

  • social media

To the Teacher

Social media companies are tracking a tremendous amount of information about our activity online, and they are selling this information for profit. These companies have become huge businesses by offering advertisers and other interested parties data about the items we click on, the things we might like or dislike, and the opinions we express. We don’t have to pay to use social media services because we are the product being sold.

Occasionally, the extent to which corporations are profiting from data about users erupts into public scandal. Uproar ensues when a company is hacked and personal information is stolen, or when political groups use detailed data to influence voters on social media platforms. But underneath such headline-grabbing incidents are broader issues about privacy and what we can do to control our personal information online.

Thankfully, while debate continues in the public sphere about regulating social media companies, young people are actively thinking through questions about what information they want to put online and how they can control their digital presence in this age of over-exposure.

This lesson consists of two readings. The first reading looks at how corporations make a profit from our data, and it considers potential policy solutions to this problem. The second reading focuses on how young people are making their own decisions about online privacy. Questions for discussion follow each reading.

Note:  This lesson is Part 3 of a series of lessons on social media.

  • Part 1: Does Social Media Make Us More or Less Connected?
  • Part 2: Social Media and the Future of Democracy
  • Part 3: Can We Protect Our Privacy on Social Media?  

Reading One: What Are They Doing with Your Data?

Although you can create an account on Facebook, Instagram, Snapchat, and other social media platforms without paying any money, there’s a hidden cost: the sacrifice of your privacy. Social media companies are tracking a tremendous amount of information about our activity online, and they are selling this information for profit. These companies have become huge businesses by offering advertisers and other interested parties data about the items we click on, the things we might like or dislike, and the opinions we express. In general, we don’t pay to use social media services because we are the product being sold.

What does it mean that we are the product? To get a better idea, we can look at the data-mining behaviors of a company like Facebook. In an April 2018 article for The New York Times, technology reporter Natasha Singer examined how Facebook uses our data. She reports that Facebook “meticulously scrutinizes” our online lives, and not just to show us targeted advertisements. The details that many of us regularly provide on Facebook, such as our age, employer, relationship status, likes and location, are just one part of the information that Facebook analyzes and uses. For example, she writes:

Facebook tracks both its users and nonusers on other sites and apps. It collects biometric facial data without users’ explicit “opt-in” consent. And the sifting of users can get quite personal. Among many possible target audiences, Facebook offers advertisers 1.5 million people “whose activity on Facebook suggests that they’re more likely to engage with/distribute liberal political content” and nearly seven million Facebook users who “prefer high-value goods in Mexico.” “Facebook can learn almost anything about you by using artificial intelligence to analyze your behavior,” said Peter Eckersley, the chief computer scientist for the Electronic Frontier Foundation, a digital rights nonprofit…. When internet users venture to other sites, Facebook can still monitor what they are doing with software like its ubiquitous “Like” and “Share” buttons, and something called Facebook Pixel — invisible code that’s dropped onto the other websites that allows that site and Facebook to track users’ activity…. “Facebook provides a network where the users, while getting free services most of them consider useful, are subject to a multitude of nontransparent analyses, profiling, and other mostly obscure algorithmical processing,” said Johannes Caspar, the data protection commissioner for Hamburg, Germany. https://www.nytimes.com/2018/04/11/technology/facebook-privacy-hearings.html?register=google

From time to time, this corporate data-mining erupts into scandal, like when a company is hacked and personal information is stolen. But sometimes scandal erupts from a legal use of social media data. This happened in 2018, when it was revealed that the firm Cambridge Analytica used Facebook data to construct detailed personality profiles of U.S. voters and target them with specific advertisements. In a March 2018 article for the Chicago Tribune, business reporter Ally Marotti described the uproar. She wrote:

Facebook CEO Mark Zuckerberg promised in a post Wednesday that the social media company would do more to protect its users’ data. “We have a responsibility to protect your data, and if we can't then we don't deserve to serve you,” he wrote. Zuckerberg’s post came following public outcry in response to a report last weekend from The New York Times and The Observer of London that Cambridge Analytica, a political data firm hired by the Trump campaign, gained access to private information of more than 50 million Facebook users, including their profiles, locations and what they like. The firm claimed its tools could analyze voters’ personalities and influence their behavior with targeted messages. Cambridge Analytica improperly acquired the information, Facebook has said, but it wasn’t stolen. Users allowed the maker of a personality quiz app to take the data. About 270,000 people took the quiz several years ago, the Times reported, and the app-maker was able to scrape data from their Facebook friends. He then provided the data to Cambridge Analytica…. Since the report last weekend, several American and British lawmakers have called for greater privacy protection and asked Zuckerberg to explain what the company knew about the misuse of its data…. The debate over internet privacy legislation in the U.S. has shifted from the federal to state level in recent years, but proponents argue there aren’t enough laws at either level to adequately protect users. [ https://www.chicagotribune.com/business/ct-biz-data-privacy-facebook-cambridge-analytica20180319-story.html ]

If corporations are using our social media data to sell us products, influence our decisions, and affect our votes, all with limited legal oversight, what can be done?

In 2018, the European Union passed the General Data Protection Regulation (GDPR), one of the toughest online privacy laws in the world. In a May 2018 article for The New York Times, London-based technology correspondent Adam Satariano explained the measure:

The new law requires companies to be transparent about how your data is handled, and to get your permission before starting to use it. It raises the legal bar that businesses must clear to target ads based on personal information like your relationship status, job or education, or your use of websites and apps. That means online advertising in Europe could become broader, returning to styles more akin to magazines and television, where marketers have a less detailed sense of the audience. Some of the tools companies develop to comply with the GDPR might be made available to users whether they live in Europe or not. Facebook, for example, announced in April that it would offer the privacy controls required under the new law to all users, not just Europeans…. [Y]ou can ask companies what information they hold about you, and then request that it be deleted. This applies not just to tech companies, but also to banks, retailers, grocery stores or any other organization storing your information. You can even ask your employer. And if you suspect your information is being misused or collected unnecessarily, you can complain to your national data protection regulator, which must investigate. https://www.nytimes.com/2018/05/06/technology/gdpr-european-privacy-law.html?login=google

So far in the U.S., companies can choose whether or not they want to abide by such European-style standards. Public interest advocates argue that this has left Americans exposed to abuses. As more people become aware of the negative effects of corporate data-mining, demands to change public policy domestically may well gain greater traction.

For Discussion  

  • How much of the material in this reading was new to you, and how much was already familiar? Do you have any questions about what you read?  
  • According to the reading, what kinds of information does a corporation like Facebook collect about its users? How does it make money from that data?  
  • Have you seen evidence in your own online experience that you are being tracked and that your data is being used, perhaps in ways you hadn’t anticipated?  
  • Do you think that the use of user data described in the reading is abusive, or simply something that people voluntarily opt into as a condition of using social media platforms? Explain your position.  
  • What is meant by the expression, “if you’re not paying, you are the product”? What do you think of this idea?  
  • According to the reading, what are some of the effects of the GDPR? What do you think of these requirements?  
  • What other changes would you like to see in privacy protections here in the United States?

Reading Two: How Are Young People Protecting Their Privacy?

Online privacy—or lack of it—can have real-world impacts. Using social media puts us at risk of data-mining and surveillance, and it also leaves a permanent record that can be examined by employers, university admissions departments, family members, bullies, and political opponents. Such a prospect might give many users pause, even if they otherwise enjoy their lives online.

Thankfully, while debate continues in the public sphere about regulating social media companies, many young people are actively thinking through questions about what information they want to put online and how they can control their digital presence.

In a 2016 article for Vox, Irina Raicu, Internet Ethics Program Director at the Markkula Center for Applied Ethics at Santa Clara University, examined several studies of the privacy behaviors of young people and young adults. She summarized her findings:  

[P]eople between 13 and 35 do care about keeping some control over their information, and take measures to protect their privacy online, even as they sense that most such measures are imperfect solutions. It may surprise you to find out that 60 percent of the teens surveyed … “say they have created accounts that their parents were unaware of, such as on a social media site or for an app.” That is a privacy-protective measure: When it comes to privacy violations, the people teens are most worried about are their parents. As the report notes, “teens greatly value having some level of privacy from their parents when using the internet.” The older “young people” surveyed by Hargittai and Marwick report that they deploy a wide variety of privacy-protective measures: “Using different sites and apps for different purposes, configuring settings on social media sites, using pseudonyms in certain situations, switching between multiple accounts, turning on incognito options in their browsers, opting out of certain apps or sites, deleting cookies and even using Do-Not-Track browser plugins and password-management apps.”…. [A] Pew Research study notes that “young adults generally are more focused than their elders when it comes to online privacy.” That study asked about some privacy-protective strategies, as well: Among the 18-to-29-year-olds surveyed, 74 percent said they had cleared cookies and browser histories, 71 percent had deleted or edited something they had posted, 49 percent had configured their browsers to reject cookies, 42 percent had decided not to use certain sites that demanded their real names, and 41 percent had used temporary user names or email addresses. In each of those categories, the younger users surpassed their elders. https://www.vox.com/2016/11/2/13390458/young-millennials-oversharing-security-digital-online-privacy

Some young people are taking even more drastic approaches, either aggressively self-censoring what they post or leaving social media behind entirely. Faced with bullying, mental health concerns, and the relentless pace of maintaining a social media presence, some are choosing to move relationships offline.

In a March 2019 article for Fast Company, Sonia Bokhari, an 8th grader who leads her middle school’s Gay-Straight Alliance and is a member of the school’s Environmental Club, gave an account of why she decided to dramatically curtail her social media use after feeling that her privacy had been violated. She wrote:

My parents had long ago made the rule that my siblings and I weren’t allowed to use social media until we turned 13, which was late, compared to many of my friends who started using Instagram, Wattpad, and Tumblr when we were 10 years old…. [S]everal months ago, when I turned 13, my mom gave me the green light and I joined Twitter and Facebook. The first place I went, of course, was my mom’s profiles. That’s when I realized that while this might have been the first time I was allowed on social media, it was far from the first time my photos and stories had appeared online. When I saw the pictures that she had been posting on Facebook for years, I felt utterly embarrassed, and deeply betrayed…. [My mom and my sister] were surprised when they heard how I felt, genuinely surprised. They didn’t know I would get so upset over it, because their intentions weren’t to embarrass me, but to keep a log and document what their little sister/youngest daughter was doing in her early childhood and young teenage years…. In the months since I discovered my unauthorized social media presence, I became more active on Facebook and Twitter. But it wasn’t until I’d been on social media for around nine months that I thought seriously about my digital footprint. Every October my school gave a series of presentations about our digital footprints and online safety. The presenters from an organization called OK2SAY, which educates and helps teenagers about being safe online, emphasized that we shouldn’t ever post anything negative about anyone or post unapproved inappropriate pictures, because it could very deeply affect our school lives and our future job opportunities…. 
While I hadn’t posted anything negative on my accounts, these conversations, along with what I had discovered posted about me online, motivated me to think more seriously about how my behavior online now could affect my future… I realized that being 13 and using social media wasn’t a fantastic idea, even though I wasn’t obsessed with it and was using it appropriately. My accounts now remain dormant and deactivated…. My friends are active social media users, but I think they are more cautious than they were before. They don’t share their locations or post their full names online, and they keep their accounts private. I think in general my generation has to be more mature and more responsible than our parents, or even teens and young adults in high school and college…. https://www.fastcompany.com/90315706/kids-parents-social-media-sharing

Just as young people have set the trends for which social media platforms rise and fall, they can also change the conversation about how we engage with these corporations and maintain control of our digital lives.

For Discussion

  • How much of the material in this reading was new to you, and how much was already familiar? Do you have any questions about what you read?
  • According to the reading, what are some decisions that young people are making to protect their privacy online?
  • Have you tried any of the strategies discussed in this reading? If so, how did they go for you?
  • Sonia Bokhari reports that she felt betrayed when she went online and saw information that her mother had posted about her when she was a kid. Have you ever experienced something like this? What sort of conversations do you think families and friends should have with one another to make sure they are respecting each other’s privacy?
  • Do you think more young people will choose to quit or substantially reduce their use of social media in the future? Or do you think social media usage will continue to grow? What factors might affect the future role of social media in our lives?

Research assistance provided by John Bergen.

Share this Page

Social Media and Privacy

  • Open Access
  • First Online: 09 February 2022

  • Xinru Page
  • Sara Berrios
  • Daricia Wilkinson
  • Pamela J. Wisniewski
With the popularity of social media, researchers and designers must consider a wide variety of privacy concerns while optimizing for meaningful social interactions and connection. While much of the privacy literature has focused on information disclosures, the interpersonal dynamics associated with being on social media make it important for us to look beyond informational privacy concerns to view privacy as a form of interpersonal boundary regulation. In other words, attaining the right level of privacy on social media is a process of negotiating how much, how little, or when we desire to interact with others, as well as the types of information we choose to share with them or allow them to share about us. We propose a framework for how researchers and practitioners can think about privacy as a form of interpersonal boundary regulation on social media by introducing five boundary types (i.e., relational, network, territorial, disclosure, and interactional) social media users manage. We conclude by providing tools for assessing privacy concerns in social media, as well as noting several challenges that must be overcome to help people to engage more fully and stay on social media.


1 Introduction

The way people communicate with one another in the twenty-first century has evolved rapidly. In the 1990s, if someone wanted to share a “how-to” video tutorial within their social networks, the dissemination options would be limited (e.g., email, floppy disk, or possibly a writeable compact disc). Now, social media platforms, such as TikTok, provide professional grade video editing and sharing capabilities that give users the potential to both create and disseminate such content to thousands of viewers within a matter of minutes. As such, social media has steadily become an integral component for how people capture aspects of their physical lives and share them with others. Social media platforms have gradually altered the way many people live [ 1 ], learn [ 2 , 3 ], and maintain relationships with others [ 4 ].

Carr and Hayes define social media as “Internet-based channels that allow users to opportunistically interact and selectively self-present, either in real time or asynchronously, with both broad and narrow audiences who derive value from user-generated content and the perception of interaction with others” [ 5 ]. Social media platforms offer new avenues for expressing oneself, experiences, and emotions with broader online communities via posts, tweets, shares, likes, and reviews. People use these platforms to talk about major milestones that bring happiness (e.g., graduation, marriage, pregnancy announcements), but they also use social media as an outlet to express grief and challenges, and to cope with crises [ 6 , 7 , 8 ]. Many scholars have highlighted the host of positive outcomes from interpersonal interactions on social media including social capital, self-esteem, and personal well-being [ 9 , 10 , 11 , 12 ]. Likewise, researchers have also shed light on the increased concerns over unethical data collection and privacy abuses [ 13 , 14 ].

This chapter highlights the privacy issues that must be addressed in the context of social media and provides guidance on how to study and design for social media privacy. We first provide an overview of the history of social media and its usage. Next, we highlight common social media privacy concerns that have arisen over the years. We also point out how scholars have identified and sought to predict privacy behavior, but many efforts have failed to adequately account for individual differences. By reconceptualizing privacy in social media as a boundary regulation, we can explain these gaps from previous one-size-fits-all approaches and provide tools for measuring and studying privacy violations. Finally, we conclude with a word of caution about the consequences of ignoring privacy concerns on social media.

2 A Brief History of Social Media

Section Highlights

Social media use has quickly increased over the past decade and plays a key role in social, professional, and even civic realms. The rise of social media has led to “networked individualism.”

This enables people to access a wider variety of specialized relationships, making it more likely they can meet a variety of needs. It also allows people to project their voice to a wider audience.

However, people have more frequent turnover in their social networks, and it takes much more effort to maintain social relations and discern (mis)information and intention behind communication.

The initial popularity of social media harkened back to the historical rise of social network sites (SNSs). The canonical definition of SNSs is attributed to Boyd and Ellison [ 15 ] who differentiate SNSs from other forms of computer-mediated communication. According to Boyd and Ellison, SNS consists of (1) profiles representing users and (2) explicit connections between these profiles that can be traversed and interacted with. A social networking profile is a self-constructed digital representation of oneself and one’s social relationships. The content of these profiles varies by platform from profile pictures to personal information such as interests, demographics, and contact information. Visibility also varies by platform and often users have some control over who can see their profile (e.g., everyone or “friends”). Most SNSs also provide a way to leave messages on another’s profile, such as posting to someone’s timeline on Facebook or sending a mention or direct message to someone on Twitter.

Public interest and research initially focused on a small subset of SNSs (e.g., Friendster [ 16 ] and MySpace [ 17 , 18 , 19 ]), but the past decade has seen the proliferation of a much broader range of social networking technologies, as well as an evolution of SNSs into what Kane et al. term social media networks [ 20 ]. This extended definition emphasizes the reach of social media content beyond a single platform. It acknowledges how the boundedness of SNSs has become blurred as platform functionality that was once contained in a single platform, such as “likes,” are now integrated across other websites, third parties, and mobile apps.

Over the past decade, SNSs and social media networks have quickly become embedded in many facets of personal, professional, and social life. In that time, these platforms became more commonly known as “social media.” In the USA, only 5% of adults used social media in 2005. By 2011, half of the US adult population was using social media, and 72% were social users by 2019 [ 21 ]. MySpace and Facebook dominated SNS research about a decade ago, but now other social media platforms, such as YouTube, Instagram, Snapchat, Twitter, Kik, TikTok, and others, are popular choices among social media users. The intensity of use also has drastically increased. For example, half of Facebook users log on several times a day, and three-quarters of Facebook users are active on the platform at least daily [ 21 ]. Worldwide, Facebook alone has 1.59 billion users who use it on a daily basis and 2.41 billion using it at least monthly [ 22 ]. About half of the users of other popular platforms such as Snapchat, Instagram, Twitter, and YouTube also report visiting those sites daily. Around the world, there are 4.2 billion users who spend a cumulative 10 billion hours a day on social networking sites [ 23 ]. However, different social networking sites are dominant in different cultures. For example, the most popular social media in China, WeChat (inc. Wēixìn 微信), has 1.213 billion monthly users [ 23 ].

While SNS profiles started as a user-crafted representation of an individual user, these profiles now also often consist of information that is passively collected, aggregated, and filtered in ways that are ambiguous to the user. This passively collected information can include data accessed through other avenues (e.g., search engines, third-party apps) beyond the platform itself [ 24 ]. Many people fail to realize that their information is being stored and used elsewhere. Compared to tracking on the web, social media platforms have access to a plethora of rich data and fine-grained personally identifiable information (PII) which could be used to make inferences about users’ behavior, socioeconomic status, and even their political leanings [ 25 ]. While online tracking might be valuable for social media companies to better understand how to target their consumers and personalize social media features to users’ preferences, the lack of transparency regarding what and how data is collected has in more recent years led to heightened privacy concerns and skepticism around how social media platforms are using personal data [ 26 , 27 , 28 ]. This has, in turn, contributed to a loss of trust and changes in how people interact (or not) on social media, leading some users to abandon certain platforms altogether [ 26 , 29 ] or to seek alternative social media platforms that are more privacy focused.

For example, WhatsApp, a popular messaging app, updated its privacy policy to allow its parent company, Facebook, and its subsidiaries to collect WhatsApp data [ 30 ]. Users were given the option to accept the terms or lose access to the app. Shortly after, WhatsApp rival Signal reported 7.5 million installs globally over 4 days. Recent and multiple social media data breaches have heightened people’s awareness around potential inferences that could be made about them and the danger in sensitive privacy breaches. Considering the invasive nature of such practices, both consumers and companies are increasingly acknowledging the importance of privacy, control, and transparency in social media [ 31 ]. Similarly, as researchers and practitioners, we must acknowledge the importance of privacy on social media and design for the complex challenges associated with networked privacy. These types of intrusions and data privacy issues are akin to the informational privacy issues that have been investigated in the context of e-commerce, websites, and online tracking (see Chap. 9 ).

While early research into social media and privacy largely focused on these types of concerns, researchers have uncovered how the social dynamics surrounding social media have led to a broader array of social privacy issues that shape people’s adoption of platforms and their usage behaviors. Rainie and Wellman explain how the rise of social technologies, combined with ubiquitous Internet and mobile access, has led to the rise of “networked individualism” [ 32 ]. People have access to a wider variety of relationships than they previously did offline in a geographically and time-bound world. These new opportunities make it more likely that people can foster relationships that meet their individual needs for havens (support and belonging), bandages (coping), safety nets (protect from crisis), and social capital (ability to survive and thrive through situation changes). Additionally, social media users can project their voice to an extended audience, including many weak ties (e.g., acquaintances and strangers). This enables individuals to meet their social, emotional, and economic needs by drawing on a myriad of specialized relationships (different individuals each particularly knowledgeable in a specific domain such as economics, politics, sports, caretaking). In this way, individuals are increasingly networked or embedded within multiple communities that serve their interests and needs.

Conversely, networked individualism has also made people less likely to have a single “home” community, and they deal with more frequent turnover and change in their social networks. Rainie and Wellman describe how people’s social routines differ from those of previous generations, which were more geographically bound – today, only 10% of people’s significant ties are their neighbors [ 32 ]. As such, researchers have questioned and studied the extent to which people can meaningfully maintain interpersonal relationships on social media. The upper limit for doing so has been estimated at 150 connections or “friends” [ 33 ], but social media connections often well exceed this number. With such large networks, it also takes users much more effort to distinguish misinformation, to determine when communication is intended for them, and to discern the intent behind that communication. The technical affordances of social media can also help or hinder users’ ability to capture the nuances of the various relationships in their social networks. On many social media platforms, relationships are flattened into friends and followers, making them homogenous and lacking differentiation between, for instance, a casual acquaintance and a trusted confidant [ 16 , 34 ]. These characteristics of social media lead to a host of social privacy issues which are crucial to address. In the next section, we summarize some of the key privacy challenges that arise due to the unique characteristics of social media.

3 Privacy Challenges in Social Media

Information disclosure privacy issues have been a dominant focus in online technologies and the primary focus for social media. This line of work centers on access to data, on defining public vs. private disclosures, and on user control over who sees what.

With so many people from different social circles able to access a user’s social media content, the issue of context collapse arises. Users may post to an imagined audience without realizing that people from multiple social contexts are privy to the same information.

The issue of self-presentation jumps to the foreground in social media. Being able to manage impressions is part of privacy management.

The social nature of social media also introduces the issue of controlling access to oneself, both in terms of availability and physical access.

Despite all of these privacy concerns, there is a noted privacy paradox between what people say they are concerned about and how they actually behave online.

Early social media privacy research focused on helping individuals meet their privacy needs in light of four key challenges: (1) information disclosure, (2) context collapse, (3) reputation management, and (4) access to oneself. This section gives an overview of these privacy challenges and how research has sought to address them. The remainder of this chapter shows how research has moved beyond focusing on the individual when it comes to social media and privacy; rather, social media privacy has been reconceptualized as a dynamic process of interpersonal boundary regulation between individuals and groups.

3.1 Information Disclosure/Control over Who Sees What

A commonality among early social media privacy research is its focus on information privacy and self-disclosure [ 35 ]. Self-disclosure is the information a person chooses to share with other people or websites, such as posting a status update on social media. Information privacy breaches occur when a website and/or person leaks private information about a user, sometimes unintentionally. Many studies have focused on informational privacy and on sharing information with, or withholding it from, the appropriate people [ 36 , 37 , 38 ] on social media. Privacy settings related to self-disclosure have also been studied in detail [ 39 , 40 , 41 ]. Generally, social media platforms help users control self-disclosure along two dimensions. The first is the level of granularity, or type of information, that one can share with others. Facebook is the most complex, allowing users to disclose and control granular information for profile categories such as bio, website, email addresses, and at least eight other categories at the time of writing this chapter. Other platforms have fewer information groupings, which makes user profiles coarser and thus self-disclosure boundaries less granular. The second dimension is one’s access-level permissions, or with whom one can share personal information. The most popular social media platforms err on the side of sharing more information with more people by allowing users to give access to categories such as “Everyone,” “All Users,” or “Public.” Similarly, many social media platforms give the option of access for “friends” or “followers” only.
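The two control dimensions described above – per-field granularity and access-level permissions – can be illustrated with a small sketch. The field names and audience levels below are hypothetical simplifications for illustration, not any platform’s actual settings or API:

```python
# A simplified model of per-field disclosure settings: each profile
# field is mapped to the widest audience level allowed to see it.
# Audience levels are ordered from most private to most public.
AUDIENCE_LEVELS = ["only_me", "friends", "followers", "everyone"]

def can_view(profile_settings, field, viewer_relationship):
    """Return True if a viewer in the given audience bucket
    ('only_me', 'friends', 'followers', or 'everyone') may see the field."""
    allowed = profile_settings.get(field, "only_me")  # default to most private
    return AUDIENCE_LEVELS.index(viewer_relationship) <= AUDIENCE_LEVELS.index(allowed)

# Granularity: each field gets its own disclosure boundary.
settings = {"bio": "everyone", "email": "friends", "birthday": "only_me"}

print(can_view(settings, "bio", "everyone"))    # True: a stranger can see the bio
print(can_view(settings, "email", "everyone"))  # False: but not the email
print(can_view(settings, "email", "friends"))   # True: a friend can
```

Note how the coarseness of the model depends on both axes: fewer fields mean chunkier disclosure boundaries, and fewer audience levels mean less control over who sees what.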

Many researchers have highlighted how disclosures can be shared more widely than intended. Tufekci examined disclosure mechanisms used by college students on MySpace and Facebook to manage the boundary between private and public. Findings suggest that students are more likely to adjust profile visibility than to limit their disclosures [ 42 ]. Other research points out that users may not want their posts to remain online indefinitely, but most social media platforms default to keeping past posts visible unless the user specifies otherwise [ 43 ]. Even when the platform offers ways to limit post sharing, there are often intentional and unintentional ways this content is shared that negate the users’ wishes. For example, Twitter is a popular social media platform where users can choose to make their tweets available only to their followers. However, millions of private tweets have been retweeted, exposing private information to the public [ 44 ]. Even platforms like Snapchat, which make posts ephemeral by default, are susceptible to people taking screenshots of a snap and distributing them through other channels. Thus, as social media companies continue to develop social media platforms, they should consider how to protect users from unintended information disclosure and teach people to practice privacy-protective habits.

Although some users adjust their privacy settings to limit information disclosures, they may be unaware of third-party sites that can still access their information. Scholars have emphasized the importance of educating users about the secondary use of their data, such as when third-party software takes information from their profiles [ 45 ]. Data surveillance continues to expand, and the business model of social media corporations tends to favor collecting more information about users, which makes it difficult for users who want to control their disclosures [ 46 ]. Third-party apps can also access information about social media users’ connections without the consent of the person whose information is being stored [ 47 ].

3.2 Unique Considerations for Managing Disclosures Within Social Media

As mentioned earlier, social media can expand a person’s network, but as that network expands and diversifies, users have less control over how their personal information is shared with others. Two unique privacy considerations for social media that arise from this tension are context collapse and imagined audiences, which we describe in more detail in the subsections below. For example, as Facebook has become a social gathering place for adults, one’s “friends” may include family members, coworkers, colleagues, and acquaintances all in one virtual social sphere. Social media users may want to share information with these groups but are concerned about which audiences are appropriate for sharing what types of information. This is because these various social spheres that intersect on Facebook may not intersect as readily in the physical world (e.g., college buddies versus coworkers) [ 48 ]. These distinct social circles are brought together into one space due to social media. This concept is referred to as “context collapse” since a user’s audience is no longer limited to one context (e.g., home, work, school) [ 15 , 49 , 50 ]. We highlight research on the phenomenon of the privacy paradox and explain how context collapse and imagined audiences may help explain the apparent disconnect between users’ stated privacy concerns and their actual privacy behavior.

Context Collapse

Nuanced differences between one’s relationships are not fully represented on social media. While real-life relationships are notorious for being complex, one of the biggest criticisms of social media platforms is that they often simplify relationships to a “binary” [ 51 ] or “monolithic” [ 52 ] dimension of either friend or not friend. Many platforms just have one type of relationship such as a “friend,” and all relationships are treated the same. Once a “friend” has been added to one’s network, maintaining appropriate levels of social interactions in light of one’s relationship context with this individual (and the many others within one’s network) becomes even more problematic [ 53 ]. Since each friend may have different and, at times, mutually exclusive expectations, acting accordingly within a single space has become a challenge. As Boyd points out, for instance, teenagers cannot be simultaneously cool to their friends and to their parents [ 53 ]. Due to this collapsed context of relationships within social media, acquaintances, family, friends, coworkers, and significant others all have the same level of access to a social media user once added to one’s network – unless appropriately managed.

Research reveals that the ways people manage context collapse vary. Working professionals might deal with context collapse by limiting posts containing personal information, creating different accounts, and avoiding friending those they work with [ 54 ]. As another example, many adolescents manage context collapse by keeping their family members separate from their personal accounts [ 55 ]. Other mechanisms for managing context collapse include restricting who can send friend requests, denying friend requests, and unfriending. While there is limited support for manually assigning different privileges to each friend, the default is to treat all friends the same, and many users never change those defaults.

Privacy incidents resulting from mixing work and social media show why context collapse must be addressed. Context collapse has been shown to negatively affect those seeking employment [ 56 ], as well as to endanger those who are employed. For example, a teacher in Massachusetts lost her job because she did not realize her Facebook posts were public to those who were not her friends; she was fired over her complaints about parents of students getting her sick [ 57 ]. Many others have shared anecdotes about being fired after controversial Facebook and Twitter posts [ 58 , 59 ]. Even celebrities who live in the public eye can suffer from context collapse [ 60 , 61 ]. Kim Kardashian, for example, received intense criticism from Internet fans when she posted a photo on social media of her daughter using a cellphone and wearing makeup while Kim was getting ready for hair and wardrobe [ 62 ]. Many online users criticized her parenting style for not limiting screen time, and Kim subsequently shared a photo of a stack of books that her kids have access to while she works.

Nevertheless, context collapse can also increase bridging social capital, which is the potential social benefit that can come from having ties to a wider audience. Context collapse enables this by allowing people to increase their connections to weak ties and by creating serendipitous situations through sharing beyond those with whom one would normally share [ 60 ]. For example, job hunters may increase their chances of finding a job by using social media to network and connect with those they would not normally be associated with on a daily basis. Getting out a message or spreading the word can also be accomplished more easily. For instance, finding people to contribute to natural disaster funds can be effective on social media because multiple contexts can be easily reached from one account [ 63 ]. In addition to managing context collapse, social media users also have to anticipate whether they are sharing disclosures with their intended audiences.

Imagined Audiences

The disconnect between the real audience and the imagined audience on social media poses privacy risks. Understanding who can see what content, how, when, and where is key to deciding what content to share and under what circumstances. Yet, research has consistently demonstrated that users do not accurately anticipate who can potentially see their posts. This manifests as wrongly anticipating that a certain person can see content (when they cannot), as well as not realizing when another person can access posted content. Users have an “imagined audience” [ 64 , 65 ] to whom they are posting their content, but it often does not match the actual audience viewing the user’s content. Social media users typically imagine that the audience for their social media posts is like-minded people, such as family or close friends [ 65 ]. Sometimes, online users think of specific people or groups when creating content, such as a daughter, coworkers, people who need cleaning tips, or even one’s deceased father [ 65 ]. Despite these imagined audiences, privacy settings may be set so that many more people can see these posts (acquaintances, strangers, etc.). While users do tend to limit who sees their profile to a defined audience [ 44 , 66 , 67 ], they still tend to believe their posts are more private than they actually are [ 49 , 68 ].
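The gap between imagined and actual audiences can be made concrete with a toy sketch. The names, the network, and the simple “friends vs. public” setting below are illustrative assumptions, not drawn from any real platform:

```python
def actual_audience(network, post_setting, public_users):
    """Resolve who can really see a post: the user's entire friend
    network for a 'friends' setting, or every platform user for 'public'."""
    return set(network) if post_setting == "friends" else set(public_users)

# The user pictures only close friends reading the post...
imagined = {"alice", "bob"}
# ...but their friend list has quietly accumulated many weak ties.
network = {"alice", "bob", "coworker", "boss", "old_classmate"}

# Even with a "friends only" setting, the actual audience exceeds
# the imagined one; these are the viewers the poster did not anticipate.
unanticipated = actual_audience(network, "friends", public_users=set()) - imagined
print(sorted(unanticipated))  # ['boss', 'coworker', 'old_classmate']
```

The set difference is the crux: a restrictive privacy setting bounds the audience by the network, not by the poster’s mental model of it, so the mismatch grows as the network grows.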

Some users adopt privacy management strategies to counter potential mismatch in audience. Vitak identified several privacy management tactics users employ to disclose information to a limited audience [ 69 ]:

Network-based . Social media users decide who to friend or follow, therefore filtering their network of people. Some Facebook users avoid friending people they do not know. Others set friends’ profiles to “hidden,” so that they do not have to see their posts, but avoid the negative connotations associated with “unfriending.”

Platform-based . Some users choose to use the social media sites’ privacy settings to control who sees their posts. A common approach on Facebook is to change the setting to be “friends only,” so that only a user’s friends may see their posts.

Content-based . These users control their privacy by being careful about the information they post. For example, users who know an employer can see their posts may avoid posting while they are at work.

Profile-based . A less commonly used approach is to create multiple accounts (on a single platform or across platforms). For example, one might maintain separate professional, personal, and fun accounts.

As another example, teenagers often navigate public platforms by posting messages whose true meaning parents or others will not understand. For instance, by posting a song lyric or quote that only specific individuals recognize as a reference to a particular movie scene or an ironic message, they creatively limit their audience [ 49 , 70 ]. Others manage their audience by using more self-limiting privacy tactics like self-censorship [ 70 ], choosing simply not to post something they were considering in the first place. These various tactics allow users to control who can see what on social media in different ways.

3.3 Reputation Management Through Self-Presentation

Technology-mediated interactions have led to new ways of managing how we present ourselves to different groups of friends (e.g., using different profiles on the same platform based on the audience) [ 71 ]. Being able to control the way we come across to others can be a challenging privacy problem that social media users must learn to navigate. Features to limit audience can also help with managing self-presentation. Nonetheless, reputation or impression management is not just about avoiding posts or limiting access to content. Posting more content, such as selfies, is another approach used to control the way others perceive a user [ 72 ]. In this case, it is important to present the content that helps convey a certain image of oneself. Research has revealed that those who engage more in impression management tend to have more online friends and disclose more personal information [ 73 ]. Those who feel online disclosures could leave them vulnerable to negativity, such as individuals who identify as LGBTQ+, have also been found to put an emphasis on impression management in order to navigate their online presence [ 74 ]. However, studies still show that users have anxieties around not having control over how they are presented [ 75 ]. Social media users worry not only about what they post, but are concerned about how others’ postings will reflect on them [ 42 ].

Another dimension that affects impression management attitudes is how social media platforms vary in their policies on whether user profiles must be consistent with users’ offline identities. Facebook’s real name policy, for instance, requires that people use their real name and represent themselves as one person, corresponding to their offline identities. Research confirms that online profiles actually do reflect users’ authentic personalities [ 76 ]. However, some platforms more easily facilitate identity exploration and have evolved norms encouraging it. For example, Finsta accounts popped up on Instagram a few years after the company started. These “Fake Instagram” accounts often share content that the user does not want to associate with their more public identity, allowing for more identity exploration. This may have arisen from the social norm that has evolved whereby Instagram users often feel they need to present an ideal self. Scholars have observed such pressure on Instagram more than on other platforms like Snapchat [ 77 ]. While the ability to craft an online image separate from one’s offline identity may be more prevalent on platforms like Instagram, certain types of social media, such as location-sharing social networks, are deeply tied to one’s offline self, sharing the actual physical location of their users. Users of Foursquare, a popular location-sharing app, have leveraged this tight coupling for impression management. Scholars have observed that users try to impress their friends or family members with the places where they spend their time, while skipping “check-ins” at places like McDonald’s or work for fear of appearing boring or unimpressive [ 78 ].

Regardless of how tightly one’s online presence corresponds with their offline identity, concerns about self-presentation can arise. For example, users may lie about their location on location-sharing platforms as an impression management tactic and have concerns about harming their relationships with others [ 79 ]. On the other hand, Finstas are meant to help with self-presentation by hiding one’s true identity. Ironically, the content posted may be even more representative of the user’s attitudes and activities than the idealized images on one’s public-facing account. These contrasting examples illustrate how self-presentation concerns are complicated.

What further complicates reputation management is that social media content is shared and consumed by groups of people, not just individuals or dyads. Thus, self-presentation is not only controlled by the individual, but also by others who might post pictures of and/or tag that individual. Even when friends/followers do not directly post about the user, their actions can reflect on the user just by virtue of being connected with them. The issues of co-owned data and how to negotiate disclosure rules are a growing area of privacy research. We refer you to Chap. 6 , which goes in-depth on this topic.

3.4 Access to Oneself

A final privacy challenge many social media users encounter is controlling the access others have to them. Some social media platforms automatically display when someone is online, which may invite interaction whether the user wants to be accessible or not. Controlling access to oneself is not as straightforward as limiting or blocking certain people’s access. For instance, studies have shown that social pressures influence individuals to accept friend requests from “weak ties” as well as true friends [ 53 , 80 ]. As a result, the social dynamics on social media are becoming more complex, creating social anxiety and drama for many social media users [ 52 , 53 , 80 ]. Although users may want to control who can interact with them, they may worry that using privacy features such as “blocking” other accounts will send the wrong signal to others and hurt their relationships [ 81 ]. In fact, an online social norm of “hyperfriending” [ 82 ] has developed, with one study finding that only 25% of a user’s online connections represent true friendship [ 83 ]. This may undermine the privacy individuals wish they had over who interacts with them on their various accounts. Due to social norms or etiquette, users may feel compelled to interact with others online [ 84 ]. Even when users do not feel obligated to interact, they can become annoyed or overwhelmed by seeing too much information from others [ 85 ], feeling that their attention is being captured by an overload of information.

Many social media sites now include location-sharing features that let users tell people where they are by checking in to various locations, tagging photos or posts, or even sharing their location in real time. Privacy issues may therefore also arise when sharing one’s location on social media attracts undesirable attention. Studies point out user concerns about how others may use knowledge of that location to reach out and ask to meet up, or even to physically go find the person [ 86 ]. In fact, research has found that people may not be as concerned about the private nature of disclosing location as they are about disturbing others or being disturbed themselves as a result of location sharing [ 87 ]. This makes sense given that analysis of mobile phone conversations reveals that describing one’s location plays a big role in signaling availability and creating social awareness [ 87 , 88 ].

Other scholars focus on the potential harm that may come from sharing one’s location. Tsai et al. surveyed people about perceived risks and found that fear of potential stalkers is one of the biggest barriers to adopting location-sharing services [ 89 ]. Foursquare users have similarly expressed fears that strangers could use the application to stalk them [ 78 ]. Nevertheless, studies have also found that many individuals believe the benefits of location sharing outweigh the hypothetical costs. These concerns may explain why users share their location more often with close relationships [ 37 ].

Geotagging is another area of privacy concern for online users. Geotagging occurs when media (photos, websites, QR codes) contain metadata with geographical information. Most often this information consists of latitude and longitude coordinates, and sometimes time stamps are attached to photos people post as well. This poses a threat to individuals who post online without realizing that their photos can reveal sensitive information. For example, one study assessed Craigslist postings and demonstrated how the researchers could extract a person’s location, and the hours they were likely to be home, from the photos in a listing [ 90 ]. The study even pinpointed the exact home address of a celebrity TV host from their posted Twitter photos. Researchers point out that many users are unaware their physical safety is at risk when they post photos of themselves or indicate they are on vacation [ 22 , 90 , 91 ]. Doing so may make them easy targets by letting robbers or stalkers know when and where to find them.
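To make the geotagging risk concrete: photo EXIF metadata commonly encodes latitude and longitude as degree/minute/second values plus a hemisphere reference, which anyone who downloads the image can convert to map coordinates. A minimal sketch of that conversion (independent of any particular EXIF-parsing library; the sample coordinates are arbitrary illustrative values):

```python
def dms_to_decimal(degrees, minutes, seconds, hemisphere):
    """Convert EXIF-style degrees/minutes/seconds plus a hemisphere
    reference ('N', 'S', 'E', or 'W') into signed decimal degrees."""
    decimal = degrees + minutes / 60 + seconds / 3600
    # Southern and western hemispheres are negative by convention.
    return -decimal if hemisphere in ("S", "W") else decimal

# Example tag values as they might appear in a photo's GPS metadata:
# 40 deg 26' 46.2" N, 79 deg 58' 56.4" W
lat = dms_to_decimal(40, 26, 46.2, "N")
lon = dms_to_decimal(79, 58, 56.4, "W")
print(round(lat, 5), round(lon, 5))  # 40.44617 -79.98233
```

Pasting the resulting pair into any mapping service reveals where the photo was taken, which is exactly the inference the studies above exploited.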

3.5 Privacy Paradox

While researchers have investigated these various privacy attitudes, perceptions, and behaviors, the privacy paradox (where behavior does not match stated privacy concerns) has been especially salient on social media [ 92 , 93 , 94 , 95 , 96 , 97 ]. As a result, much research focuses on understanding the decision-making process behind self-disclosure [ 98 ]. Scholars who view disclosure as the result of weighing the costs and benefits of disclosing information use the term “privacy calculus” to characterize this process [ 99 ]. Other research draws on the theory of bounded rationality to explain how people’s actions are not fully rational [ 100 ]. People are often guided by heuristic cues which do not necessarily lead them to make the best privacy decisions [ 101 ]. Indeed, a large body of literature has tried to dispel or explain the privacy paradox [ 94 , 102 , 103 ].
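The privacy calculus view of disclosure as a cost–benefit weighing can be sketched as a simple decision rule. The factors and weights below are purely illustrative assumptions, not an empirically validated model:

```python
def privacy_calculus(benefits, costs):
    """Disclose only if perceived benefits outweigh perceived costs.
    Both arguments map factor names to subjective weights."""
    return sum(benefits.values()) > sum(costs.values())

# Hypothetical weighing for posting vacation photos publicly.
benefits = {"social_validation": 0.6, "staying_connected": 0.5}
costs = {"burglary_risk": 0.4, "employer_judgment": 0.3}

print(privacy_calculus(benefits, costs))  # True: this user would disclose
```

Bounded rationality enters precisely where this sketch breaks down: real users do not enumerate factors or weigh them consistently, and heuristic cues can inflate perceived benefits or mask costs, which is one proposed explanation for the paradox.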

4 Reconceptualizing Social Media Privacy as Boundary Regulation

By reconceptualizing privacy in social media as boundary regulation , we can see that the seeming paradox in privacy is actually a balance between being too open (disclosing too much) and being too inaccessible (disclosing too little). The latter can result in social isolation, which is privacy regulation gone wrong.

In the context of social media, there are five different types of privacy boundaries that should be considered.

People use various methods of coping with privacy violations , many of which are not tied to disclosing less information.

Drawing from Altman’s theories of privacy in the offline world (see Chap. 2 ), Palen and Dourish describe how, just like in the real world, social media privacy is a boundary regulation process along various dimensions besides just disclosure [ 104 ]. Privacy can also involve regulating interactional boundaries with friends or followers online and the level of accessibility one desires to those people. For example, if a Facebook user wants to limit the people that can post on their wall, they can exclude certain people. Research has identified other threats to interpersonal boundary regulation that arise out of the unique nature of social media [ 42 ]. First, as mentioned previously, the threat to spatial boundaries occurs because our audiences are obscured so that we no longer have a good sense of whom we may be interacting with. Second, temporal boundaries are blurred because any interaction may now occur asynchronously at some time in the future due to the virtual persistence of data. Third, multiple interpersonal spaces are merging and overlapping in a way that has caused a “steady erosion of clearly situated action” [ 5 ]. Since each space may have different and, at times, mutually exclusive behavioral requirements, acting accordingly within those spaces has become more of a challenge to manage context collapses [ 42 ]. Along with these problems, a major interpersonal boundary regulation challenge is that social media environments often take control of boundary regulation away from the end users. For instance, Facebook’s popular “Timeline” automatically (based on an obscure algorithm) broadcasts an individual’s content and interactions to all of his or her friends [ 41 ]. Thus, Facebook users struggle to keep up to date on how to manage interactions within these spaces as Facebook, not the end user, controls what is shared with whom.

4.1 Boundary Regulation on Social Media

One conceptualization of privacy that has become popular in the recent literature is viewing privacy on social media as a form of interpersonal boundary regulation. These scholars have characterized privacy as finding the optimal or appropriate level of privacy rather than the act of withholding self-disclosures. That is, it is just as important to avoid over-disclosing as it is to avoid under-disclosing. Therefore, disclosure is considered a boundary that must be regulated so that it is neither too much nor too little. Petronio’s communication privacy management (CPM) theory (see Chap. 2 ) emphasizes how disclosing information is vital for building relationships and creating closeness and intimacy [ 105 ]. Thus, social isolation and loneliness resulting from under-disclosure can be outcomes of privacy regulation gone wrong just as much as social crowding can be. Similarly, the framework of contextual integrity explains that context-relative informational norms define privacy expectations and appropriate information flows, so a disclosure may be perfectly appropriate in one context (such as your doctor asking you for your personal medical details) but not in another (such as your employer asking you for the same details) [ 106 ]. Here it is not just about an information disclosure boundary but about a relationship boundary, where the appropriate disclosure depends on the relationship between the discloser and the recipient.

Drawing on Altman’s theory of boundary regulation, Wisniewski et al. created a useful taxonomy detailing the various types of privacy boundaries that are relevant for managing one’s privacy on social media [ 107 ]. They identified five distinct privacy boundaries relevant to social media:

Relationship . This involves regulating who is in one’s social network as well as appropriate interactions for each relationship type.

Network . This consists of regulating access to one’s social connections as well as interactions between those connections.

Territorial . This has to do with regulating what content comes in for personal consumption and what is available in interactional spaces.

Disclosure . The literature commonly focuses on this aspect which consists of regulating what personal and co-owned information is disclosed to one’s social network.

Interactional . This applies to regulating potential interaction with those within and outside of one’s social network.

Of these boundary types, Wisniewski et al. emphasize that the most important is maintaining relationship boundaries between people. Similarly, Child and Petronio note that “one of the most obvious issues emerging from the impact of social network site use is the challenge of drawing boundary lines that denote where relationships begin and end” [ 108 ]. Making sure that social media facilitates behavior appropriate to each of the user’s relationships is a major challenge.

Each of these interpersonal boundaries can be further classified into regulation of more fine-grained dimensions. In Table 7.1 , we summarize the different ways that each of these five interpersonal boundaries can be regulated on social media.

Next, we describe each of these interpersonal boundaries in more detail.

Self- and Confidant Disclosures

The information disclosure concerns described in the previous “Privacy Challenges” section are the focus of privacy around disclosure boundaries. Posting norms on social media platforms often encourage the disclosure of one’s personal information (e.g., age, sexual orientation, location, personal images) [ 109 , 110 ]. Disclosing such information can leave one open to financial, personal, and professional risks such as identity theft [ 46 , 111 ]. However, there are motivations for disclosing personal information. For example, research suggests that posting behaviors on social media platforms have a significant relationship with a desire for positive self-presentation [ 112 , 113 ]. Privacy management is necessary for balancing the benefits of disclosure and its associated risks. This involves regulating both self-disclosure for information about one’s self and confidant-disclosure boundaries for information that is “co-owned” with others [ 105 ] (e.g., a photograph that includes other people, or information about oneself that is shared with another in confidence).

There are a variety of disclosure boundary regulation mechanisms on social media interfaces. Many platforms offer users the freedom to selectively share various types of information, create personal biographies, share links to their websites, or post their birthday. Self-disclosure can also be managed through privacy settings, such as granular control over who has access to specific posts. Many social media platforms encourage multiparty participation with features such as tagging, subtweeting, or replying to others’ posts. This level of engagement promotes the celebration of shared moments or co-owned information/content. At the same time, it increases the possibility of breaching confidentiality and can create unwanted situations, such as posting congratulations on a pregnancy that has not yet been announced to most family members or friends. Some ways that people manage violations of disclosure boundaries are to reactively confront the violator in private or to stop using the platform after the unexpected disclosure [ 114 ].

Relationship Connection and Context

Relationship boundaries concern whom the user accepts into his or her “friend group” and consequently shape the nature of online interactions within a person’s social network. Social media platforms embed the idea of “friend-based privacy,” where informational and interactional access depends primarily on one’s connections. The structure of one’s network can affect the level of engagement and the types of disclosures made on a platform. Individuals with more open relationship boundaries may have more weak ties than those who employ stricter rules for admitting people into their inner circles. For example, studies have found that people who engage in “hyper-adding,” that is, adding a significant number of persons to their network, can end up with a higher proportion of “weak ties” [ 53 , 82 ].

After users accept friends and make connections, they must manage overlapping contexts such as work, family, or acquaintances. This leads to the types of privacy issues discussed under “Context Collapse” in the previous “Privacy Challenges” section. Research shows that boundary violations are rarely remedied by blocking or unfriending except in extreme cases [ 115 ]. Furthermore, users rarely organize their friends into groups (and some social media platforms do not offer that functionality) [ 114 ]: people are either unaware of the feature, think it takes too much time, or worry that the wrong person would still see their information. As a result, users often feel they must sacrifice being authentic online to control their privacy.

Network Discovery and Interaction

An individual’s social media network is often public knowledge, and there are advantages and disadvantages to friends being aware of one’s social connections (i.e., friends list or followers). Network boundary mechanisms enable people to identify groups of people and manage interactions between the various groups. We highlight two types of network boundaries, namely, network discovery and network intersection boundaries. First, network discovery boundaries center on regulating the access others have to one’s network connections. An open approach to network discovery can create problems; for instance, competitors within the same industry could poach clients by carefully selecting from a publicly visible friend list. Another issue arises when a person’s friend has a poor reputation and that connection is negatively received by others within the social group. Sometimes the result is positive, for example, when friends or family find they have mutual connections, thus building social capital. Some social media platforms offer the ability to hide one’s friend list from everyone.

Network intersection boundaries involve the regulation of the interactions among different friend groups within one’s social network. Social media users have expressed the benefits of engaging in discourse online with people who they may not personally know offline [ 116 ]. In contrast, clashes within one’s friend list due to opposing political views or personal stances could create tensions that would make moderating a post difficult. These boundaries could be harder to control and sometimes lead to conflict if one is forced to choose which friends can participate in discussions.

Inward- and Outward-Facing Territories

Territorial boundaries include “places and objects in the environment” that indicate “ownership, possession, and occasional active defense” [ 117 ]. Within social media, features can be characterized as either inward-facing or outward-facing territories. Inward-facing territories are spaces where users can find updates on their friends and see the content their connections are posting (such as the “news feed” on Facebook or “updates” on LinkedIn). To control their inward-facing territories, individuals can hide posts from specific people, adjust their privacy settings, and use filters to find specific information.

These territories are constantly updated with photos, videos, and news articles that are personalized and not public facing, which contributes to an overall low priority for territorial management [ 114 ]. Most users choose to ignore content that is irrelevant to them rather than employ privacy features. In addition, once privacy features are used to hide content from particular friends, users rarely revisit that decision to reconsider including content from that person within that territory.

It is important to note that the key characteristic of outward-facing territory management is the regulation of potentially unsatisfactory interactions rather than a fear of information exposure. One example of an outward-facing territory is Facebook’s wall/timeline, where a person’s friends may contribute to his or her social media presence. Outward-facing territories fall between a public and a private place, which creates more risk of unintended boundary violations. Altman argues that “because of their semipublic quality [outward-facing territories] often have unclear rules regarding their use and are susceptible to encroachment by a variety of users, sometimes inappropriately and sometimes predisposing to social conflict” [ 117 ]. Similar to the confidant disclosure described above, connections may post (unwanted) content on a user’s wall that could lead to turbulence if that content is later deleted.

Interactional Disabling and Blocking

Interactional boundaries limit the need for the other boundary regulations discussed because a person reduces access to oneself by disabling features [ 114 ]. For example, a user may deactivate Facebook Messenger to avoid receiving messages but reactivate the app when they deem that interaction welcome. Similarly, disabling semipublic features of the interface (such as the wall on Facebook) can give users a greater sense of control. This form of interaction withdrawal is typically not directed at reducing interaction with a specific person; rather, it may be motivated by a strong desire to control one’s online spaces. As such, disabling features is associated with perceptions of mistrust within one’s network and a desire to limit interruptions [ 115 ]. On the more extreme end, blocking can also be employed to regulate interactional boundaries. Unlike other withdrawal mechanisms such as disabling one’s wall, picture tagging, or chat, blocking is inherently targeted. The act represents the rejection and revocation of access to oneself from a particular party. Some social media platforms allow users to block other people or pages, meaning that the blocked person may not contact or interact with the user in any form. Generally, blocking a person results from a negative experience such as stalking or being bombarded with unwanted content [ 118 ].

4.2 Coping with Social Media Privacy Violations

Over time, many social media platforms have implemented new privacy features that attempt to address evolving privacy risks and users’ need for more granular control online. While this effort is commendable, Ellison et al. argue that “privacy behaviors on social networking sites are not limited to privacy settings” [ 41 ]. Thus, social media users still venture outside the realm of privacy settings to achieve appropriate levels of social interaction. Coping mechanisms can be viewed as behaviors utilized to maintain or regain interpersonal boundaries [ 107 ]. Although these coping approaches may often be suboptimal, Wisniewski et al.’s framework of coping strategies for maintaining one’s privacy provides insight into the struggles many social media users face in maintaining these boundaries.

Filtering is often defined as the “reduction of intensity of inputs” [ 117 ]. It includes selecting whom one will accept into one’s online social circle and is often used in the management of relational boundaries. Filtering techniques may include relying on social cues (e.g., viewing the profile picture or examining mutual friends) before confirming the addition of a new connection. Other methods leverage non-privacy-related features that are repurposed to manage interactions based on relational context, for example, creating multiple accounts on the same platform to separate professional connections from personal friends.

The vast amount of information on social media can easily become overwhelming and difficult to consume. Therefore, social media users may opt to ignore posts or skim through information to decide which ones should receive priority for engagement. Ignoring is most common for inward-facing territories such as one’s “Feed” page. Overreliance on this approach might increase the chances of missing critical moments that connections share.

Blocking is a more extreme approach to interactional boundary management compared to filtering and ignoring, which contributes to lower levels of reported usage [ 119 ]. As an alternative, users have developed other technology-supported mechanisms that would allow them to avoid unwanted interactions. As an example, Wisniewski et al. describe using pseudonyms on Facebook to make it more difficult to find a user on the platform [ 107 ]. Another method for blocking unwanted interactions is to use the account of a close friend or loved one to enjoy the benefits of the content on the platform without the hassle of expected interactions. Page et al. highlight this type of secondary use for those who avoid social media because of social anxieties, harassment, and other social barriers [ 120 ].

When some users feel they are losing control, they withdraw from social media by doing one of the following: deleting their account, censoring their posts, or avoiding confrontation. As a result, a common technique is limiting or adjusting the information shared (even avoiding posts that may be received negatively) [ 121 ]. Das and Kramer found that “people with more boundaries to regulate censor more; people who exercise more control over their audience censor more content; and, users with more politically and age diverse friends censor less, in general” [ 122 ]. Withdrawal suggests that some users think the risks outweigh the benefits of social media.

Unlike defensive coping mechanisms such as filtering, blocking, or withdrawal, social media users resort to more offensive mechanisms when the intention is to create interactions that may be confrontational. Aggressive behavior is displayed when the goal is to seek revenge or garner attention from specific people or groups. Some users may choose to employ subliminal references in their posts to indirectly address or offend specific persons (e.g., an ex-partner, coworker, family member).

Compliance is giving in to pressures (external or internal) and adjusting one’s interpersonal boundary preferences for others. Altman describes this as “repeated failures to achieve a balance between achieved and desired levels of privacy” [ 117 ]. Relinquishing one’s interactional privacy needs to accommodate pressures of disclosure, nondisclosure, or friending preferences could result in a perceived loss of control over social interactions.

A healthy strategy for managing social media boundary violations is communicating with the other person involved and finding a resolution. Prior work indicates that most users who compromise do so offline [ 107 ]. These compromises are mostly with closer friends whom the user can contact through email, phone, or messaging. These more private scenarios avoid other people becoming involved online. Also, many compromises are about tagging someone in photos or sharing personal information about another user (i.e., confidant disclosure).

In addition to this coping framework for social media privacy, Stutzman examined the creation of multiple profiles on social media websites, primarily Facebook, as an information regulation mechanism. Through grounded theory, he identified three types of information boundary regulation within this context (pseudonymity, practical obscurity, and transparent separations) and four overarching motives for these mechanisms (privacy, identity, utility, and propriety) [ 71 ]. Lampinen et al. created a framework of strategies for managing private versus public disclosures. It defined three dimensions by which strategies differed: behavioral vs. mental, individual vs. collaborative, and preventative vs. corrective [ 71 , 123 ]. The various coping frameworks conceptualize privacy as a process of interpersonal boundary regulation. However, they do not solve the problem of managing privacy on these platforms. They do attempt to model the complexity of privacy management in a way that better reflects the complex nature of interpersonal relationships rather than as a matter of withholding versus disclosing private information.

5 Addressing Privacy Challenges

Rather than just measuring privacy concerns, researchers and designers should focus on understanding attitudes towards boundary regulation. Validated tools for measuring boundary preservation concern and boundary enhancement expectations are provided in this chapter.

Privacy features need to be designed to account for individual differences in how they are perceived and used. While some feel features like untag, unfriend, and delete are useful, others are worried about how using such features will impact their relationships.

Unaddressed privacy concerns can serve as a barrier to using social media. It is crucial to design for not only functional privacy concerns (e.g., being overloaded by information, guarding from inappropriate data access) but social privacy concerns as well (e.g., unwelcome interactions, pressures surrounding appropriate self-presentation).

This section describes how to better identify privacy concerns by measuring them from a boundary regulation perspective. We also emphasize the importance of individual differences when designing privacy features. Finally, we elaborate on a crucial set of social privacy issues that we feel are a priority to address. While many social media users may feel these types of social pressures to some degree, these problems have pushed some of society’s most vulnerable to complete abandonment of social media despite their desire for social connection. We call on social media designers and researchers to focus on these problems which are a side effect of the technologies we have created.

5.1 Understanding People and Their Privacy Concerns

Understanding social media privacy as boundary regulation allows us to better conceptualize people’s attitudes and behaviors. It helps us anticipate their concerns and balance between too little and too much privacy. However, many existing tools for measuring privacy come from the information privacy perspective [ 124 , 125 , 126 ] and focus on data collection by organizations, errors, secondary use, or technical control of data. In detailing the various types of privacy boundaries that are relevant for managing one’s privacy on social media, Wisniewski et al. [ 114 ] emphasized that maintaining relationship boundaries between people is the most important.

Page et al. [ 86 , 127 ] similarly found that concerns about damaging relationship boundaries are actually at the root of low-level privacy concerns such as worrying about who sees what, being too accessible, or being bothered or bothering others by sharing too much information. For instance, a typically cited privacy concern such as worry about a stranger knowing one’s current location turns out to be a privacy concern only if an individual expects that a stranger might violate typical relationship expectations. Their research revealed that many people were unconcerned about strangers knowing their location, explaining that no one would care enough to use that information to come find them. They did not expect anyone to violate relationship boundaries and so were unconcerned about privacy. On the other hand, those who felt there was a likelihood of someone using their location for nefarious purposes were concerned. What drives privacy concerns is social media enabling a negative change in relationship boundaries and in the types of interactions that are now possible (such as strangers being able to locate a user).

In fact, while scholars have used many lower-level privacy concerns, such as worry about sharing information, to predict social media usage and adoption, they have met with mixed success, leading to the commonly observed privacy paradox. However, research shows that preserving one’s relationship boundaries is at the root of these low-level online privacy concerns (e.g., informational, psychological, interactional, and physical privacy concerns) and is a significant predictor of social media usage [ 86 , 127 ]. In other words, concerns about social media damaging one’s relationships (i.e., relationship boundary regulation) are what drive privacy concerns.

5.2 Measuring Privacy Concerns

Boundary regulation plays a key role in maintaining the right level of privacy on social media, but how do we evaluate whether a platform adequately supports it? A popular scale for testing users’ awareness of secondary access is the Internet Users’ Information Privacy Concerns (IUIPC) scale, which measures their perceptions of collection, control, and awareness of user data [ 125 ]. An important finding is that users “want to know and have control over their information stored in marketers’ databases.” This indicates that social media should be designed such that people know where their data goes. However, as is evident throughout this chapter, research on social media privacy has found social privacy concerns to be more salient. Relationship boundaries in particular are a key privacy boundary to consider and measure when evaluating privacy concerns. Thus, a scale that measures relationship boundary regulation would allow researchers and designers to better evaluate social media privacy.

Here we present validated relationship boundary regulation survey items developed by Page et al. which predict adoption and usage for various social media including Facebook, Twitter, LinkedIn, Instagram, and location-sharing social media [ 127 , 128 ]. These survey items can be used to evaluate privacy concerns for use of existing social media platforms, as well as capturing attitudes about new features or platforms. The survey items capture attitudes about one’s ability to regulate relationship boundaries when using a social media platform and are administered with a 7-point Likert scale (−3 = Disagree Completely, −2 = Disagree Mostly, −1 = Disagree Slightly, 0 = Neither agree nor disagree, 1 = Agree Slightly, 2 = Agree Mostly, 3 = Agree Completely). These items measure both concerns and positive expectations.

When evaluating a new or existing social media platform, the relationship boundary preservation concern (BPC) items can be used to gauge users’ concerns about harming their relationships. A higher score indicates that more support for privacy management is needed on a given platform. The relationship boundary enhancement expectation (BEE) items can be used to evaluate whether users expect that using the platform will improve their relationships. A high score is important for driving adoption and usage; having low concerns alone is not enough. Along similar lines, even if users have high concerns, these may be counteracted by a perceived high level of benefits, so that they remain frequent users of a platform. For instance, Facebook, one of the most widely used platforms, was shown to invoke both high levels of concern and high levels of enhancement expectation [ 127 ]. However, note that high frequency of use does not necessarily mean high levels of engagement (e.g., posting, commenting) or that users do not employ suboptimal workarounds (e.g., being vague in their posts) [ 81 ]. On the other hand, Twitter has a higher level of concern relative to perceived enhancement and, accordingly, lower levels of usage [ 127 ].

In the validation studies, the set of survey items representing BPC was treated as a scale, and factor analysis was used to compute a single score. Similarly, the items representing BEE were used to generate a single factor score representing that construct. These scores can be used to evaluate new features or platforms in the lab or after deployment. For instance, after performing tasks on a new feature or platform, users can answer these questions, and the designer can compare the responses between different designs in A/B testing, or use them to predict usage frequency and adoption intentions (e.g., see [ 127 , 129 ] for detailed examples). Moreover, by correlating BPC or BEE with demographics or other customer segmentations (e.g., age, whether they are new customers, purpose for using the platform), product designers may be able to identify attitudes associated with certain segments of their customer base and address them directly.
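As a concrete illustration, the Likert coding and group comparison described above can be sketched in a few lines of Python. The item responses and participant data below are invented, and a plain mean over items stands in for the factor scores used in the validation studies; none of this is part of the published instrument.

```python
from statistics import mean

# Map the 7-point Likert labels to the -3..+3 coding used for the BPC/BEE items.
LIKERT = {
    "Disagree Completely": -3, "Disagree Mostly": -2, "Disagree Slightly": -1,
    "Neither agree nor disagree": 0,
    "Agree Slightly": 1, "Agree Mostly": 2, "Agree Completely": 3,
}

def scale_score(responses):
    """Average one participant's item responses into a single scale score.

    The validation studies derived a single factor score per construct;
    a plain mean over items is a common, simpler stand-in.
    """
    return mean(LIKERT[r] for r in responses)

def group_mean(participants):
    """Mean scale score across all participants in one condition."""
    return mean(scale_score(p) for p in participants)

# Hypothetical A/B data: each inner list is one participant's answers
# to the boundary preservation concern (BPC) items for one design.
design_a = [
    ["Agree Mostly", "Agree Slightly", "Agree Completely"],
    ["Agree Slightly", "Agree Slightly", "Agree Mostly"],
]
design_b = [
    ["Disagree Slightly", "Neither agree nor disagree", "Disagree Mostly"],
    ["Disagree Mostly", "Disagree Slightly", "Neither agree nor disagree"],
]

bpc_a, bpc_b = group_mean(design_a), group_mean(design_b)
print(f"BPC design A: {bpc_a:+.2f}, design B: {bpc_b:+.2f}")
```

In an A/B setting, each list of participants corresponds to one interface variant; the design with the lower mean BPC score places less boundary-preservation burden on users.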

5.3 Designing Privacy Features

When designing for privacy features, a crucial aspect to consider is individual differences. Privacy is not one-size-fits-all: there are many variations in how people feel, what they expect, and how they behave. Because social media connects individuals with diverse needs and expectations, and from a myriad of contexts, a necessity in addressing social media privacy is understanding individual differences in privacy attitudes and behaviors. Many individual differences have been identified that shape privacy needs and preferences [ 15 ] and behaviors [ 6 , 24 , 99 ].

Scholars have established that privacy as a construct is not limited to informational privacy (i.e., understanding the flow of data) but also includes social privacy concerns that may be more interactional (e.g., accessibility) or psychological in nature (e.g., self-presentation) [ 111 , 130 ]. Thus, a host of attitudes and experiences could shape an individual’s view on what it means to have privacy online. For example, people’s preferences for privacy tools could be heavily influenced by the type of data being shared or the recipient of that data [ 36 , 131 , 132 ]. Likewise, prior experiences (negative or positive) could shape how people interact online which could affect disclosure [ 133 ]. Context and relevance have also been found to significantly influence privacy behavior online. Drawing from the contextual integrity framework, many researchers argue that when people perceive data collection to be reasonable or appropriate, they are more likely to share information [ 134 ]. On the other hand, research has shown that when faced with uncomfortable scenarios, people employ privacy protective behaviors such as nondisclosure or falsifying information [ 135 ]. Research has also pointed to personal characteristics that could shape digital privacy behavior such as personality, culture, gender, age, and social norms [ 64 , 106 , 136 , 137 , 138 , 139 , 140 ].

While identifying concerns about damaging one’s relationships is important to measure, understanding the individual differences that can lead someone to be concerned can provide insight into addressing these concerns. For instance, through a series of investigations, Page et al. uncovered a communication style that predicts concerns about preserving relationship boundaries on many different social media platforms [ 127 , 128 , 129 ]. This communication style is characterized by wanting to put information out there so that the individual does not need to proactively inform others. Those who prefer an FYI (For Your Information) communication style are less concerned about relationship boundary preservation and, as a result, exhibit higher levels of engagement, interactions, and use of social media than low FYI communicators. For example, the survey items that capture an FYI communication style preference for location-sharing social media are: “I want the people I know to be aware of my location, without having to bother to tell them,” “I would prefer to make my location available to the people I know, so that they can see it whenever they need it,” and “The people I know should be able to get my location whenever they feel they need it.” Each item is administered with a 7-point Likert scale (Disagree strongly, Disagree moderately, Disagree slightly, Neutral, Agree slightly, Agree moderately, Agree strongly). For other social media platforms, the information type is adjusted (i.e., “what I’m up to” instead of “my location”).
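A minimal sketch of how the three FYI items above might be scored, assuming a 1-to-7 coding of the response labels and a midpoint cutoff for flagging someone as a high-FYI communicator; both the coding and the cutoff are illustrative choices, not part of the published scale.

```python
# Response labels for the FYI items, in order from strongest disagreement
# to strongest agreement, coded 1..7 below.
FYI_LABELS = [
    "Disagree strongly", "Disagree moderately", "Disagree slightly",
    "Neutral", "Agree slightly", "Agree moderately", "Agree strongly",
]

def fyi_score(responses):
    """Mean of a participant's FYI item responses, coded 1 (disagree) .. 7 (agree)."""
    codes = {label: i + 1 for i, label in enumerate(FYI_LABELS)}
    return sum(codes[r] for r in responses) / len(responses)

def is_high_fyi(responses, cutoff=4.0):
    """Above the scale midpoint: treat as an FYI communicator (illustrative cutoff)."""
    return fyi_score(responses) > cutoff

# Codes 6, 7, 5 average to 6.0, above the midpoint.
print(is_high_fyi(["Agree moderately", "Agree strongly", "Agree slightly"]))
```

Scores like these could then be correlated with BPC or usage measures to see whether high-FYI respondents show the lower boundary-preservation concerns reported in the cited studies.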

Consequently, this raises concern over implications for non-FYI communicators since the design of major social media platforms is catered to FYI communicators [ 127 , 128 ]. Drawing on this insight, Page demonstrated how considering the user’s communication style when designing location-sharing social media interfaces can alleviate boundary preservation concerns [ 129 ]. Certain design choices such as choosing a request-based location-sharing interaction can lower concerns for non-FYI communicators, while continuous location-sharing and check-in type interactions that are typical in social media may be fine for FYI communicators.

This demonstrates that researchers should account for individual differences that affect privacy attitudes in the design of social media. Another individual difference in attitudes towards privacy features is a user’s apprehension that using common features such as untag, delete, or unfriend/unfollow can hinder their relationships with others. Page et al. identified that while many use privacy features and perceive them as a useful tool for protecting their privacy, many others are concerned about how using privacy features could hurt their relationships (e.g., worrying about offending others by untagging or unfriending) [ 81 ]. Instead, those individuals use alternative privacy management tactics such as vaguebooking (not sharing specific details and using vague posts). Designers need to be aware that privacy features must also be catered to individual variations in attitudes, or else they may be ineffective and unused by certain segments of the user population.

5.4 Privacy Concerns and Social Disenfranchisement

A significant amount of research within the domain of social media nonuse has been focused on functional barriers that hinder adoption. In many cases, nonuse is traced to a lack of access (e.g., limited access to technology, financial resources, or the Internet). However, the push against adoption and subsequent usage can be voluntary [ 141 ] due to functional privacy concerns such as concerns about data breaches, information overload, or annoying posts [ 120 ]. Several social media companies have also implemented features such as time limits to help users counter overuse [ 142 ].

Likewise, it is equally important to consider social barriers that prevent social media engagement for people who could really use the social connection. Sharing about distressing experiences can be beneficial and reduce stigma, improve connection and interpersonal relationships with one’s network, and enhance well-being [ 6 , 7 , 143 , 144 ]. However, Page et al. identified a class of barriers that highlight social privacy concerns rooted in social anxiety or concerns about being overly influenced by others on social media. This is in contrast to the prior school of thought that focused primarily on functional motivations as barriers that influence nonuse (see Fig. 7.1 ) [ 120 ]. They point out that many who are already vulnerable avoid social media due to social barriers such as online harassment or paralysis over making decisions about online social interactions. Yet, they are also the ones who could benefit greatly from social connection and who end up losing touch with friends and social support by being off social media. They term this lose-lose situation, in which negative social consequences arise both from using social media and from not using it, social disenfranchisement. They call on designers to address such social barriers and to realize that in designing the user experience to connect users so well, they are implicitly designing the nonuser experience of being left out. Given that social media usage may not always be a viable option, designers should design to alleviate the negative consequences of nonuse.

Fig. 7.1 Extension of Wyatt’s frame that divided nonusers along the dimensions of whether someone has used the technology in the past and the motivation for adoption (extrinsic, e.g., organizationally imposed, versus intrinsic, e.g., desire to communicate through technology). Page et al. differentiate between functional motivations/barriers to use (which have been the focus of much research) and social motivations/barriers to use. Other frameworks consider additional temporal states of adoption (whether someone is currently using the technology and whether they will in the future). See [ 120 ] for more detailed descriptions

5.5 Guidelines for Designing Privacy-Sensitive Social Media

Now that you have learned about various privacy problems related to social media use, how do you apply that to designing or studying social media? Here are some practical guidelines.

Identifying Privacy Attitudes

Measuring privacy attitudes is a tricky task. When measured with existing informational privacy scales, users often report concerns that do not match their actual behavior. Approaching measurement from a boundary regulation perspective makes it easier to identify the proper balance between sharing too much and sharing too little. The survey items described in this chapter offer a way to measure concerns about boundary regulation as well as positive expectations. Considering both is key to more accurately predicting user behavior.

Understanding Your Target Population

Some key characteristics are described in this chapter. Identifying these in your target population can help you be aware of individual differences that might affect privacy preferences on social media. Matching the preferences of your audience makes it more likely that they will have a good user experience. Pay particular attention to traits that have been identified as related to usage and adoption of social media platforms, such as the FYI communication style, which can be measured using the survey items provided in this chapter.

Evaluating Privacy Features

Focus on understanding whether users perceive your privacy features as useful or perhaps as posing a relational hindrance. The survey items provided in this chapter can help you do so. When anticipating privacy needs of your social media users, make sure you identify features that may impact boundary regulation both positively and negatively. You can compare attitudes between the existing feature and the newer version of the feature that will/has been deployed. You can also correlate attitudes towards privacy features with individual characteristics – some subpopulation of users may see privacy features as useful, while others may consider them a relational hindrance.

6 Chapter Summary

Social media has been widely adopted and quickly become an integral part of social, personal, economic, political, professional, and instrumental welfare. Understanding how mediated social interactions change the assumptions around audience management, disclosure, and self-presentation is key to working towards reconciling offline privacy assumptions with new realities. Moreover, given the rapidly changing landscape of widely available social media platforms, researchers and designers need to continually re-evaluate the privacy implications of new services, features, and interaction modalities.

With the rise of networked individualism, an especially strong emphasis must be placed on understanding individual characteristics and traits that can shape a user’s privacy expectations and needs. Given the inherently social nature of social media, understanding social norms and the influence of larger cultural and structural factors is also important for interpreting expectations of privacy and the significance around various social media behaviors.

Privacy does not have a one-size-fits-all solution. It is a normative construct that is context dependent and can change over time, from culture to culture, and from person to person. It needs to be weighed across different individuals and against other important goals and values of the larger group or society. Because people and their social interactions are complex, designing for social media privacy is rarely straightforward. However, the consequences of not addressing privacy issues can range from irritating to devastating. Using this chapter as a guide to think through the privacy needs and expectations of your social media users is an integral part of designing for social media.



Author information

Authors and Affiliations

Brigham Young University, Provo, UT, USA

Xinru Page & Sara Berrios

Department of Computer Science, Clemson University, Clemson, SC, USA

Daricia Wilkinson

Department of Computer Science, University of Central Florida, Orlando, FL, USA

Pamela J. Wisniewski


Corresponding author

Correspondence to Xinru Page .

Editor information

Editors and Affiliations

Clemson University, Clemson, SC, USA

Bart P. Knijnenburg

University of Central Florida, Orlando, FL, USA

Pamela Wisniewski

University of North Carolina at Charlotte, Charlotte, NC, USA

Heather Richter Lipford

School of Social and Behavioral Sciences, Arizona State University, Tempe, AZ, USA

Nicholas Proferes

Bridgewater Associates, Westport, CT, USA

Jennifer Romano

Rights and permissions

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.


Copyright information

© 2022 The Author(s)

About this chapter

Page, X., Berrios, S., Wilkinson, D., Wisniewski, P.J. (2022). Social Media and Privacy. In: Knijnenburg, B.P., Page, X., Wisniewski, P., Lipford, H.R., Proferes, N., Romano, J. (eds) Modern Socio-Technical Perspectives on Privacy. Springer, Cham. https://doi.org/10.1007/978-3-030-82786-1_7


DOI: https://doi.org/10.1007/978-3-030-82786-1_7

Published: 09 February 2022

Publisher Name: Springer, Cham

Print ISBN: 978-3-030-82785-4

Online ISBN: 978-3-030-82786-1

eBook Packages : Computer Science Computer Science (R0)

Share this chapter

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Publish with us

Policies and ethics

  • Find a journal
  • Track your research
  • Listening Tests
  • Academic Tests
  • General Tests
  • IELTS Writing Checker
  • IELTS Writing Samples
  • Speaking Club
  • IELTS AI Speaking Test Simulator
  • Latest Topics
  • Vocabularying
  • 2024 © IELTS 69


The Battle for Digital Privacy Is Reshaping the Internet

As Apple and Google enact privacy changes, businesses are grappling with the fallout, Madison Avenue is fighting back and Facebook has cried foul.


By Brian X. Chen


SAN FRANCISCO — Apple introduced a pop-up window for iPhones in April that asks people for their permission to be tracked by different apps.

Google recently outlined plans to disable a tracking technology in its Chrome web browser.

And Facebook said last month that hundreds of its engineers were working on a new method of showing ads without relying on people’s personal data.

The developments may seem like technical tinkering, but they were connected to something bigger: an intensifying battle over the future of the internet. The struggle has entangled tech titans, upended Madison Avenue and disrupted small businesses. And it heralds a profound shift in how people’s personal information may be used online, with sweeping implications for the ways that businesses make money digitally.

At the center of the tussle is what has been the internet’s lifeblood: advertising.

More than 20 years ago, the internet drove an upheaval in the advertising industry. It eviscerated newspapers and magazines that had relied on selling classified and print ads, and threatened to dethrone television advertising as the prime way for marketers to reach large audiences.

Instead, brands splashed their ads across websites, with their promotions often tailored to people’s specific interests. Those digital ads powered the growth of Facebook, Google and Twitter, which offered their search and social networking services to people without charge. But in exchange, people were tracked from site to site by technologies such as “cookies,” and their personal data was used to target them with relevant marketing.

Now that system, which ballooned into a $350 billion digital ad industry, is being dismantled. Driven by online privacy fears, Apple and Google have started revamping the rules around online data collection. Apple, citing the mantra of privacy, has rolled out tools that block marketers from tracking people. Google, which depends on digital ads, is trying to have it both ways by reinventing the system so it can continue aiming ads at people without exploiting access to their personal data.

essay on does social media violate our privacy

If personal information is no longer the currency that people give for online content and services, something else must take its place. Media publishers, app makers and e-commerce shops are now exploring different paths to surviving a privacy-conscious internet, in some cases overturning their business models. Many are choosing to make people pay for what they get online by levying subscription fees and other charges instead of using their personal data.

Jeff Green, the chief executive of the Trade Desk, an ad-technology company in Ventura, Calif., that works with major ad agencies, said the behind-the-scenes fight was fundamental to the nature of the web.

“The internet is answering a question that it’s been wrestling with for decades, which is: How is the internet going to pay for itself?” he said.

The fallout may hurt brands that relied on targeted ads to get people to buy their goods. It may also initially hurt tech giants like Facebook — but not for long. Instead, businesses that can no longer track people but still need to advertise are likely to spend more with the largest tech platforms, which still have the most data on consumers.

David Cohen, chief executive of the Interactive Advertising Bureau, a trade group, said the changes would continue to “drive money and attention to Google, Facebook, Twitter.”

The shifts are complicated by Google’s and Apple’s opposing views on how much ad tracking should be dialed back. Apple wants its customers, who pay a premium for its iPhones, to have the right to block tracking entirely. But Google executives have suggested that Apple has turned privacy into a privilege for those who can afford its products.

For many people, that means the internet may start looking different depending on the products they use. On Apple gadgets, ads may be only somewhat relevant to a person’s interests, compared with highly targeted promotions inside Google’s web. Website creators may eventually choose sides, so some sites that work well in Google’s browser might not even load in Apple’s browser, said Brendan Eich, a founder of Brave, the private web browser.

“It will be a tale of two internets,” he said.

Businesses that do not keep up with the changes risk getting run over. Increasingly, media publishers and even apps that show the weather are charging subscription fees, in the same way that Netflix levies a monthly fee for video streaming. Some e-commerce sites are considering raising product prices to keep their revenues up.

Consider Seven Sisters Scones, a mail-order pastry shop in Johns Creek, Ga., which relies on Facebook ads to promote its items. Nate Martin, who leads the bakery’s digital marketing, said that after Apple blocked some ad tracking, its digital marketing campaigns on Facebook became less effective. Because Facebook could no longer get as much data on which customers like baked goods, it was harder for the store to find interested buyers online.

“Everything came to a screeching halt,” Mr. Martin said. In June, the bakery’s revenue dropped to $16,000 from $40,000 in May.

Sales have since remained flat, he said. To offset the declines, Seven Sisters Scones has discussed increasing prices on sampler boxes to $36 from $29.

Apple declined to comment, but its executives have said advertisers will adapt. Google said it was working on an approach that would protect people’s data but also let advertisers continue targeting users with ads.

Since the 1990s, much of the web has been rooted in digital advertising. In that decade, a piece of code planted in web browsers — the “cookie” — began tracking people’s browsing activities from site to site. Marketers used the information to aim ads at individuals, so someone interested in makeup or bicycles saw ads about those topics and products.
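The cross-site tracking mechanism described above can be illustrated with a deliberately tiny Python simulation. The cookie id, site topics, and function names are all hypothetical; real cookies are HTTP headers set by an embedded ad server, not Python dicts.

```python
from collections import defaultdict

# Toy model: one ad network's view of visitors, keyed by a cookie id that
# the same embedded ad tag reads on every site where the network runs.
ad_network_profiles = defaultdict(set)  # cookie id -> topics observed

def serve_page(cookie_id, site_topic):
    """The ad tag embedded on a page records its topic against the cookie."""
    ad_network_profiles[cookie_id].add(site_topic)

def pick_ad(cookie_id):
    """Target an ad using everything the network has seen for this cookie."""
    profile = ad_network_profiles[cookie_id]
    return f"ad about {sorted(profile)[0]}" if profile else "generic ad"

# One cookie follows the same visitor across unrelated sites.
serve_page("cookie-42", "bicycles")  # a cycling blog
serve_page("cookie-42", "makeup")    # a beauty site
print(pick_ad("cookie-42"))          # → ad about bicycles
```

The point of the sketch is that neither site shares data with the other; the ad network embedded on both is what links the visits into one profile.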

After the iPhone and Android app stores were introduced in 2008, advertisers also collected data about what people did inside apps by planting invisible trackers. That information was linked with cookie data and shared with data brokers for even more specific ad targeting.

The result was a vast advertising ecosystem that underpinned free websites and online services. Sites and apps like BuzzFeed and TikTok flourished using this model. Even e-commerce sites rely partly on advertising to expand their businesses.

But distrust of these practices began building. In 2018, Facebook became embroiled in the Cambridge Analytica scandal, where people’s Facebook data was improperly harvested without their consent. That same year, European regulators enacted the General Data Protection Regulation, laws to safeguard people’s information. In 2019, Google and Facebook agreed to pay record fines to the Federal Trade Commission to settle allegations of privacy violations.

In Silicon Valley, Apple reconsidered its advertising approach. In 2017, Craig Federighi, Apple’s head of software engineering, announced that the Safari web browser would block cookies from following people from site to site.

“It kind of feels like you’re being tracked, and that’s because you are,” Mr. Federighi said. “No longer.”

Last year, Apple announced the pop-up window in iPhone apps that asks people if they want to be followed for marketing purposes. If the user says no, the app must stop monitoring and sharing data with third parties.
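The opt-in rule described above amounts to a consent gate on data sharing. Here is a toy sketch of that logic; the class and method names are hypothetical and are not Apple's actual API.

```python
# Hypothetical consent gate: events leave the device only after opt-in.
class App:
    def __init__(self):
        self.consented = False
        self.shared_events = []

    def ask_tracking_permission(self, user_says_yes: bool):
        """Record the user's answer to the tracking pop-up."""
        self.consented = user_says_yes

    def log_event(self, event):
        """Share an event with third parties only if the user opted in."""
        if self.consented:
            self.shared_events.append(event)

app = App()
app.ask_tracking_permission(False)  # user taps "Ask App Not to Track"
app.log_event("viewed_product")
print(len(app.shared_events))       # → 0: nothing was shared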

That prompted an outcry from Facebook, which was one of the apps affected. In December, the social network took out full-page newspaper ads declaring that it was “standing up to Apple” on behalf of small businesses that would get hurt once their ads could no longer find specific audiences.

“The situation is going to be challenging for them to navigate,” Mark Zuckerberg, Facebook’s chief executive, said.

Facebook is now developing ways to target people with ads using insights gathered on their devices, without allowing personal data to be shared with third parties. If people who click on ads for deodorant also buy sneakers, Facebook can share that pattern with advertisers so they can show sneaker ads to that group. That would be less intrusive than sharing personal information like email addresses with advertisers.

“We support giving people more control over how their data is used, but Apple’s far-reaching changes occurred without input from the industry and those who are most impacted,” a Facebook spokesman said.

Since Apple released the pop-up window, more than 80 percent of iPhone users have opted out of tracking worldwide, according to ad tech firms. Last month, Peter Farago, an executive at Flurry, a mobile analytics firm owned by Verizon Media, published a post on LinkedIn calling the “time of death” for ad tracking on iPhones.

At Google, Sundar Pichai, the chief executive, and his lieutenants began discussing in 2019 how to provide more privacy without killing the company’s $135 billion online ad business. In studies, Google researchers found that the cookie eroded people’s trust. Google said its Chrome and ad teams concluded that the Chrome web browser should stop supporting cookies.

But Google also said it would not disable cookies until it had a different way for marketers to keep serving people targeted ads. In March, the company tried a method that uses its data troves to place people into groups based on their interests, so marketers can aim ads at those cohorts rather than at individuals. The approach is known as Federated Learning of Cohorts, or FLoC.
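The cohort idea can be sketched in a few lines. The hashing scheme, interest sets, and campaign names below are illustrative assumptions, not Google's actual FLoC algorithm (which derived cohort ids from browsing history on-device); the point is only that the advertiser sees a coarse cohort id, never an individual profile.

```python
import hashlib

def cohort_id(interests, num_cohorts=8):
    """Deterministically bucket an interest set into one of a few coarse cohorts."""
    digest = hashlib.sha256(",".join(sorted(interests)).encode()).digest()
    return digest[0] % num_cohorts

alice = {"cycling", "baking"}
bob = {"baking", "cycling"}  # same interests, different person

# Campaigns are keyed by cohort, not by person: everyone in the cohort
# is indistinguishable to the advertiser.
campaigns = {cohort_id(alice): "bike-gear ad"}

def ad_for(interests):
    return campaigns.get(cohort_id(interests), "generic ad")

print(ad_for(bob))  # → bike-gear ad (bob falls in the same cohort as alice)
```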

Plans remain in flux. Google will not block trackers in Chrome until 2023.

Even so, advertisers said they were alarmed.

In an article this year, Sheri Bachstein, the head of IBM Watson Advertising, warned that the privacy shifts meant that relying solely on advertising for revenue was at risk. Businesses must adapt, she said, including by charging subscription fees and using artificial intelligence to help serve ads.

“The big tech companies have put a clock on us,” she said in an interview.

Kate Conger contributed reporting.

Brian X. Chen is the lead consumer technology writer for The Times. He reviews products and writes Tech Fix, a column about the social implications of the tech we use. Before joining The Times in 2011, he reported on Apple and the wireless industry for Wired.


Social Media Ethics and Patient Privacy Breaches Essay


Today, social media is a crucial part of daily life for most people. It enables us to communicate with friends and family, share important moments of our lives, and exchange information with others. However, the use of social media by people of certain professions creates concerns related to privacy and confidentiality. The National Council of State Boards of Nursing (NCSBN, 2018) explains how social media may lead to breaches of patient privacy and offers guidelines on how to avoid this dangerous consequence and use social media responsibly. The present paper will seek to reflect on the guidelines as well as on other sources on the subject while also analyzing a recent patient privacy breach that occurred in Texas.

The guidelines provide essential insights into the topic of social media and its relation to the nursing profession. First, the document stresses the legal repercussions of using social media irresponsibly. Indeed, the Health Insurance Portability and Accountability Act of 1996 (HIPAA) established rules for protecting patient information, and failing to do so can result in legal action against nurses or their organization (HIPAA Journal, 2018a). The information protected under HIPAA includes any information disclosed by patients in their communication with healthcare providers, as well as images and videos of patients and their name, address, age, and other identifying information (NCSBN, 2018).

Secondly, the guidelines go beyond legal issues to explain the problem in ethical terms. It is true that patients trust their care providers and expect any information they share to remain confidential (NCSBN, 2012). Part of nurses’ ethical duty is to maintain patient privacy and confidentiality, and thus by posting protected patient information on social media, nurses act unethically and may lose patients’ trust (NCSBN, 2018). Because ethics is an important subject in nursing, stressing the ethical side of the problem is essential to show its scope and consequences.

Finally, the guidelines provide information on how to avoid patient privacy breaches while using social media. NCSBN (2018) states that nurses must refrain from publishing images and videos of patients, as well as any other patient information while using social media. Nurses must also avoid referring to patients in a disparaging manner, even when no identifying patient information is given (NCSBN, 2018). The video published by NCSBN (2012) also highlights that these rules apply to both public and private social media communication. Practicing in compliance with these guidelines can assist nurses in preventing professional and ethical consequences, thus supporting trustful relationships with patients.

One case in which the guidelines on responsible social media use were not followed occurred recently in Texas. According to HIPAA Journal (2018b), a pediatric ICU/IR nurse was caring for a patient suspected of having measles. This disease is preventable by vaccination, and the nurse decided to share her experience in a closed anti-vaxxer group on Facebook (HIPAA Journal, 2018b). The nurse did not share patient images, videos, names, or any other identifying information. However, her job and organization were visible on her Facebook profile, which could allow the patient to be identified (HIPAA Journal, 2018b). The nurse was suspended and later fired for an evident HIPAA Privacy Rule violation (HIPAA Journal, 2018b).

In the post and comments, the nurse shared her experience and feelings in relation to treating a patient with measles. The nurse was an anti-vaxxer herself, and it is therefore understandable that the case was difficult for her. As noted by NCSBN (2018), sharing feelings and seeking support are among the key purposes of using social media. Nevertheless, judged against the NCSBN (2018) guidelines, it is evident that the nurse violated patient privacy and confidentiality. While she did not disclose any identifying information, she discussed the diagnosis, treatment, and the patient’s condition. These items constitute protected health information under HIPAA, and thus disclosing them is considered a breach of privacy and confidentiality. The consequences faced by the nurse did not include legal action, but the patient’s parents could have pursued privacy claims against both her and the hospital.

The case illustrates the importance of maintaining patient privacy and confidentiality while using social media. The nursing profession is stressful and full of cases that cause strong emotions, but nurses should always recognize that their ethical duty to their patients comes first. The analysis also shows that nurses have to be extremely careful on social media because protected information extends beyond details such as the patient’s name, address, or age. The guidelines composed by NCSBN (2018) offer an excellent framework for nurses to use social media responsibly because they prohibit the sharing of all patient information online, including the diagnosis, treatment, and other non-identifying details.

Overall, social media is a highly complicated environment for nurses. On the one hand, nurses might feel the need to seek support and share their experiences online when faced with particularly tough cases. On the other hand, sharing protected patient information online constitutes a breach of patient privacy and confidentiality and may lead to legal and professional consequences. The analysis of NCSBN (2018) guidelines on the responsible use of social media shows that nurses should remember their ethical duty to maintain patient privacy and confidentiality and avoid sharing any information about patients online. Ensuring compliance with these guidelines will assist in establishing and maintaining trust between patients and care providers, thus leading to improved quality of care.

HIPAA Journal. (2018a). HIPAA social media rules. Web.

HIPAA Journal. (2018b). Texas nurse fired for social media HIPAA violation. Web.

National Council of State Boards of Nursing (NCSBN). (2018). A nurse’s guide to the use of social media. Web.

National Council of State Boards of Nursing (NCSBN). (2012). Social media guidelines for nurses [Video file]. Web.



Front Psychol

Research on the influence mechanism of privacy invasion experiences with privacy protection intentions in social media contexts: Regulatory focus as the moderator

1 School of Journalism and Communication, Xiamen University, Xiamen, China

2 Research Center for Intelligent Society and Social Governance, Interdisciplinary Research Institute, Zhejiang Lab, Hangzhou, China

Associated Data

The original contributions presented in the study are included in the article/Supplementary material; further inquiries can be directed to the corresponding author.

Introduction

In recent years, there have been numerous online privacy violation incidents caused by the leakage of personal information of social media users, yet there seems to be a tendency for users to burn out when it comes to privacy protection, which leads to more privacy invasions and forms a vicious circle. Few studies have examined the impact of social media users' privacy invasion experiences on their privacy protection intention. Protection motivation theory has often been applied to privacy protection research. However, it has been suggested that the theory could be improved by introducing individual emotional factors, and empirical research in this area is lacking.

To fill these gaps, the current study constructs a moderated chain mediation model based on protection motivation theory and regulatory focus theory, and introduces privacy fatigue as an emotional variable.

Results and discussion

An analysis of a sample of 4,800 respondents from China finds that: (1) Social media users' previous privacy invasion experiences increase their privacy protection intentions; this process is mediated by response costs and privacy fatigue. (2) Privacy fatigue plays a masking effect: increased privacy invasion experiences and response costs raise individuals' privacy fatigue, and privacy fatigue in turn significantly reduces individuals' willingness to protect their privacy. (3) Promotion-focused individuals are less likely to experience privacy fatigue than prevention-focused individuals. In summary, the trend of social media users “lying flat” on privacy protection is driven by the key factor of privacy fatigue, and the psychological trait of regulatory focus can be used to interfere with the development of privacy fatigue. This study extends the scope of research on privacy protection and regulatory focus theory, refines protection motivation theory, and expands the empirical study of privacy fatigue; the findings also inform the practical governance of social network privacy.

1. Introduction

Nowadays, people communicate and share information through social networking services (SNS), which have become an integral part of the daily lives of network users worldwide (Hsu et al., 2013 ). SNS make people's lives highly convenient; however, they also pose increasingly serious privacy issues. For instance, British media reported that the profiles of 87 million Facebook users were illegally leaked to a political consulting firm, Cambridge Analytica (Revell, 2019 ). In addition, Equifax, one of the three major US credit bureaus, reported a large-scale data leak in 2017 involving 146 million pieces of personal information (Zhou and Schaub, 2018 ). These incidents provoked a wave of discussion on personal privacy and information security issues.

Individuals' proactive behavior in protecting online privacy information is an effective method for reducing the occurrence of privacy violations; therefore, scholars have explored how to enhance individuals' willingness to protect privacy. In terms of applied theoretical models, the Health Belief Model (HBM) (Kisekka and Giboney, 2018 ), the Technology Threat Avoidance Theory (TTAT) (McLeod and Dolezel, 2022 ), the Technology Acceptance Model (TAM) (Baby and Kannammal, 2020 ), and the Theory of Planned Behavior (TPB) (Xu et al., 2013 ) have been applied to explore online privacy protection behavior. By contrast, Protection Motivation Theory (PMT) is more applicable to studying privacy protection behavior in SNS because it focuses on threat assessment and coping mechanisms for privacy issues. However, a limitation of prior applications of PMT is that they ignore the influence of individual emotions on protective behavior (Mousavi et al., 2020 ). Therefore, this study introduced privacy fatigue as a variable to extend PMT in the context of social media privacy protection research. Moreover, in terms of the antecedents of privacy protection, existing research suggests that factors such as perceived benefits, perceived risks (Price et al., 2005 ), privacy concerns (Youn and Kim, 2019 ), self-efficacy (Baruh et al., 2017 ), and trust (Wang et al., 2017 ) can affect individuals' privacy-protective behaviors.

Along with the increased frequency of data breaches on the Internet, people find that they have less control over their data and feel overwhelmed by having to protect their privacy alone. Moreover, the complexity of the measures required to protect personal information aggravates users' sense of futility, leading to exhaustion among online users. This phenomenon, defined as “privacy fatigue,” is regarded as a factor leading to the avoidance of privacy issues. Privacy fatigue has recently become prevalent among network users, yet empirical studies of this phenomenon are still insufficient (Choi et al., 2018 ). Therefore, this study explored the role privacy fatigue plays in users' privacy protection behaviors. Previous studies discovered that the impact of varying degrees of privacy invasion on privacy protection differs across individuals and can be moderated by psychological differences (Lai and Hui, 2006 ). Clarifying the role of psychological traits is beneficial to the hierarchical governance of privacy protection. Regulatory focus is a psychological trait based on different regulatory orientations that can effectively affect social media users' behavioral preferences and decisions on privacy protection (Cho et al., 2019 ); however, to date, the relationships among regulatory focus, privacy fatigue, and privacy protection intentions have not been sufficiently examined. For this reason, it is necessary to explore this question empirically.

Based on the PMT framework, this study built a moderated mediation model to examine the mechanism by which privacy invasion experiences influence privacy protection intentions, introducing three factors: response costs, privacy fatigue, and regulatory focus. Data from an online survey of 4,800 network users demonstrated that, first, social media users' experiences of privacy invasion increase their willingness to protect privacy. Second, privacy fatigue has a masking effect: the more privacy invasion experiences and response costs there are, the greater the privacy fatigue, which in turn reduces users' privacy protection intentions. Third, promotion-focused individuals are less likely to experience fatigue from protecting personal information alone. The significance of this study lies in the fact that it fills the gap in research on the effects of privacy violation experiences on individuals' willingness to protect their privacy.
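The masking (suppression) effect described above can be made concrete with a small simulated example: a positive direct path from invasion experiences to protection intentions coexists with a negative indirect path through fatigue, so the total effect is smaller than the direct effect. The coefficients and sample below are invented for illustration and are not the study's data or estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)                      # privacy invasion experiences
m = 0.6 * x + rng.normal(size=n)            # privacy fatigue rises with x
y = 0.5 * x - 0.4 * m + rng.normal(size=n)  # fatigue suppresses intention

def ols(features, target):
    """Least-squares slopes (with intercept) for target ~ features."""
    design = np.column_stack([np.ones(len(target)), *features])
    coefs, *_ = np.linalg.lstsq(design, target, rcond=None)
    return coefs[1:]

(total,) = ols([x], y)                 # total effect of x on y
direct, fatigue_path = ols([x, m], y)  # direct effect, and fatigue's effect

print(round(total, 2), round(direct, 2), round(fatigue_path, 2))
```

Under these assumed coefficients the total effect (about 0.26) is visibly smaller than the direct effect (about 0.5), because the indirect path through fatigue is negative — the masking pattern the study reports.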

Meanwhile, this study verified the practicality of combining PMT with emotion-related variables. Additionally, it complemented the study of privacy fatigue and expanded the scope of regulatory focus theory in privacy research. From a practical perspective, this study offers a reference for the hierarchical governance of privacy in social networks. Finally, it reveals a vicious cycle (negative experiences, privacy fatigue, low willingness to protect, and new negative experiences) and offers a theoretical reference for breaking this cycle.

2. Theoretical framework

2.1. Privacy invasion experiences, response costs, and privacy protection intentions

Protection motivation theory (PMT) is commonly used in online privacy studies (Chen et al., 2015). According to Rogers (1975), individuals cognitively evaluate a risk before adopting behaviors, develop protection motivation, and eventually modify their behaviors to avoid the risk. People's response assessments draw on two sources: environmental and interpersonal sources of information, and prior experience. After combing through the past literature, we found that many scholars have verified the influence of environmental (Wu et al., 2019) and interpersonal (Hsu et al., 2013) factors on individual privacy protection; however, only a few scholars have explored the effect of privacy violation experiences on privacy protection intentions. Some studies proved that individuals' prior privacy violation experiences are an antecedent to their information privacy concerns, including in the mobile context and in the online marketplace (Pavlou and Gefen, 2005; Belanger and Crossler, 2019). Prior studies, in turn, widely demonstrated that privacy concerns are a significant antecedent to privacy protection intentions and protective behaviors. In addition, a meta-analysis found that users who worried about privacy were less likely to use internet services and more likely to adopt privacy-protective actions (Baruh et al., 2017).

People make sense of the world based on their prior experiences (Floyd et al., 2000), and network users who have had privacy-invasive experiences tend to believe that privacy risks are closely related to themselves (Li, 2008). They tend to be more aware of the seriousness and vulnerability of privacy issues (Mohamed and Ahmad, 2012). The effect of previous negative experiences on perceived vulnerability can also be explained by the availability heuristic, which assumes that the easier it is to retrieve experienced cases from memory, the higher the perceived frequency of the event; conversely, when fewer cases are retrieved, people may estimate that the event is less likely to occur than it objectively is. Therefore, people's accumulated experiences of negative events can influence their perception of future vulnerability to risk (Tversky and Kahneman, 1974). Moreover, in accordance with PMT, seriousness and vulnerability affect protective behavior in the context of social media privacy issues. We can therefore assume that the more memories of privacy violations people have, the more likely they are to believe that their privacy will be violated by privacy exposure, thereby increasing their motivation to protect privacy, that is, their willingness to protect it. Accordingly, this study proposed the following hypothesis:

  • H1: Privacy invasion experiences positively affect privacy protection intentions.

PMT suggests that cognitive evaluation includes assessing response costs (Rogers, 1975), where response costs refer to any costs, such as money, time, and effort (Floyd et al., 2000). According to findings from a health psychology study, when faced with the threat of skin cancer, people prefer to use sunscreen rather than avoid the sun (Jones and Leary, 1994; Wichstrom, 1994), possibly because of the lower response costs of using sunscreen. These findings suggest that individuals calculate the response cost before they take protective actions. Privacy protection studies also indicate that prior experiences with personal information violations may significantly increase consumers' concerns about both offline and online privacy and that privacy concerns are related to perceived risks (Okazaki et al., 2009; Bansal et al., 2010). It has also been shown that individuals who have experienced privacy invasion perceive a greater severity of risk (Petronio, 2002). Individuals' perceptions of risk affect their assessment of costs, which is part of the trade-off between risks and benefits; in other words, a stronger risk perception implies that higher response costs must be paid. Thus, this study assumed that people with more privacy violation experiences might perceive higher response costs and tend to take protective actions to avoid paying more. Consequently, this study made the following hypotheses:

  • H2a: A higher level of privacy-invasive experiences results in a higher perception of response costs.
  • H2b: A higher level of perception of response costs will result in higher privacy protection intentions.
  • H2c: Response cost mediates the effect of privacy-invasive experiences on privacy protection intentions.

2.2. Privacy invasion experiences, privacy fatigue, and privacy protection intentions

The medical community first introduced the concept of fatigue, referring to it as a subjective, unpleasant feeling of tiredness (Piper et al., 1987). The concept has since been used in many research fields, such as clinical medicine (Mao et al., 2018) and psychology (Ong et al., 2006). In recent years, scholars have also applied the concept of “fatigue” to the study of social media, regarding it as an important antecedent of individual behaviors (Ravindran et al., 2014). Choi et al. (2018) defined “privacy fatigue” as a psychological state of fatigue caused by privacy issues. Specifically, privacy fatigue manifests itself as an unwillingness to actively manage and protect one's personal information and privacy (Hargittai and Marwick, 2016).

With the increasing severity of social network and personal information issues, research on privacy fatigue, especially on its antecedents and effects, has developed rapidly. Regarding antecedents, scholars found that privacy concerns, self-disclosure, learning about privacy statements and information security, and the complexity of privacy protection practices can influence individuals' levels of privacy fatigue (Dhir et al., 2019; Oh et al., 2019). In terms of effects, privacy fatigue can not only cause people to reduce their frequency of social media use or even withdraw from the Internet (Ravindran et al., 2014), but also motivate individuals to resist disclosing personal information (Keith et al., 2014). However, only a few studies have examined privacy invasion experiences, privacy fatigue, and privacy protection intentions under one theoretical framework.

Furnell and Thomson (2009) pointed out that privacy fatigue is triggered by an individual's experience with privacy problems. Additionally, privacy fatigue has a boundary: when it is crossed, social network users become bored with privacy management, leading them to abandon social network services. It has also been suggested that privacy data breaches can cause individuals to feel “disappointed.” In a study of medical data protection, breaches of patients' medical data had a cumulative effect on patients' behavioral decisions by causing them to perceive that their requests for privacy protection were being ignored (Juhee and Eric, 2018). The relationship between privacy invasion experiences and privacy fatigue has been widely demonstrated: social media characteristics such as internet privacy threat experience and privacy invasion can lead to users' emotional exhaustion and privacy cynicism, which are further associated with social media privacy fatigue (Xiao and Mou, 2019; Sheng et al., 2022). In terms of outcomes, studies focusing on the privacy paradox found that emotional exhaustion and powerlessness (a concept equivalent to exhaustion) weaken the positive relationship between privacy concerns and the willingness to protect personal information (Tian et al., 2022). Based on the above review, it is reasonable to infer that an individual's privacy invasion experience in the context of social media use can exacerbate the individual's perception of privacy fatigue and that, in the social media privacy context, privacy fatigue may lead network users to abandon privacy protection behaviors and thus create opportunities for further privacy invasion. Based on the above discussion, we proposed the following hypotheses:

  • H3a: Privacy invasion experiences positively affect privacy fatigue.
  • H3b: Privacy fatigue negatively affects privacy protection intentions.
  • H3c: Privacy fatigue has a masking (a form of mediating effect) role in the effects of individual social media privacy invasion experiences on privacy protection intentions.

As discussed above, we hypothesized that both response costs and privacy fatigue mediate the effect of social media users' privacy invasion experiences on their privacy protection intentions. If so, what is the association between response costs and privacy fatigue? It has been argued that a common shortcoming of current research applying PMT is that it ignores the role emotions play in this mechanism (Mousavi et al., 2020). This view is supported by Li's research, which argues that most research on privacy topics is conducted from a risk assessment perspective and tends to ignore the impact of emotions on privacy protection behaviors (Li et al., 2016). Emotions are believed to change an individual's attention and beliefs (Friestad and Thorson, 1985), both of which are related to behavioral intentions.

It has also been suggested that emotions play a mediating role in behavioral decision-making (Tanner et al., 1991); however, few studies have explored this mechanism to date. Zhang et al. (2022) found a positive influence of response costs on privacy fatigue. Their research, based on the Stressor-Strain-Outcome (S-S-O) framework, explored which factors (stressors) cause privacy fatigue intentions (strain) and related behaviors (outcome) and found that time cost and several other stressors significantly and positively affect social media fatigue intention. As quoted from Floyd et al. (2000), response costs refer to any costs, time costs included. Although the above results provide an important reference for this study, time cost is just one component of response costs; the present research therefore focuses on general response costs to better understand this mechanism. Based on this, we proposed the following hypotheses:

  • H4a: Privacy response costs are positively associated with privacy fatigue.
  • H4b: Response costs and privacy fatigue play chain mediating roles in the effect of privacy invasion experiences on privacy protection intentions.

2.3. Regulatory focus as the moderator

Differences in individual psychological traits can lead to significant differences in cognition and behavior (Benbasat and Dexter, 1982), and personal psychological traits have been shown to influence individuals' perceptions of fatigue (Dhir et al., 2019). A recent study also found that neuroticism has positive effects on privacy fatigue, whereas traits like agreeableness and extraversion have negative effects (Tang et al., 2021). However, previous research on social media privacy fatigue remains relatively limited. Given the critical role of privacy fatigue in research models, it is necessary to explore differences in perceived fatigue among individuals with different psychological traits. This study introduced individuals' regulatory focus as a moderator of the effect of privacy invasion experiences on privacy fatigue. Regulatory focus, as a psychological trait, has been applied to explain social media users' privacy management and privacy protection problems (Wirtz and Lwin, 2009; Li et al., 2019).

Regulatory Focus Theory (RFT) classifies individuals into two types based on psychological traits: promotion focus, which attends more to benefits and ignores potential risks, and prevention focus, which tends to avoid risks and ignore benefits when making decisions (Higgins, 1997). Research demonstrated that perceptions of benefits can reduce fatigue, while perceptions of risk can exacerbate it (Boksem and Tops, 2008). By the same logic, promotion-focused individuals are more inclined to notice the benefits of using social media (Jin, 2012) and thus may experience less fatigue and lower response costs when experiencing privacy violations; in contrast, prevention-focused individuals are more aware of the risks associated with privacy invasion and have more concerns about privacy issues, which can lead to more fatigue and higher perceived response costs. Combined with H4, we can reason that the path from social media privacy invasion experiences to privacy protection intentions may be affected by an individual's regulatory focus: the effect of privacy invasion experiences on privacy fatigue and response costs should be stronger for prevention-focused individuals than for promotion-focused ones, and the mediating effects of privacy fatigue and response costs should therefore be stronger as well. In summary, this study proposed the following hypotheses:

  • H5a: Compared to promotion-focused users, the effect of privacy invasion experiences on privacy fatigue is greater for prevention-focused users.
  • H5b: Compared to promotion-focused users, the effect of privacy invasion experiences on response costs is greater for prevention-focused users.

2.4. Current study

In summary, the current study proposes that, in the social media context, users' experiences of privacy invasion increase their perception of response costs and result in privacy fatigue, and that privacy fatigue decreases individuals' privacy protection intentions. However, this process differs for individuals with different regulatory focuses: individuals with a promotion focus are less likely to experience privacy fatigue than individuals with a prevention focus. Based on the above logic, the conceptual model constructed in this study is shown in Figure 1.


Figure 1. Conceptual model.

3. Materials and methods

3.1. Participants and procedures

This survey was conducted in December 2021, with data collected by Zhejiang Lab. The questionnaire was pretested with a small group of participants to ensure the questions were clearly phrased. Participants were informed of their right to withdraw and were assured of confidentiality and anonymity before participating. Computers, tablets, and mobile phones could all be used to complete the cross-sectional survey. After giving their consent, participants completed the scales described below. After screening, 4,800 valid questionnaires were retained. Invalid questionnaires were excluded mainly for failing the screening questions rather than for careless responding (e.g., identical answers across several consecutive variables, or >70% repeated options).

To guarantee data quality and reduce possible interference from gender and geographical factors, this survey used quota sampling, as shown in Table 1, with a sample gender ratio of 1:1 and 300 valid samples from each of 16 cities in China. Because participants' previous privacy invasion experience is meaningful to this study and may relate to years of Internet use, it is worth noting that 34.5% of the final sample had used the Internet for 5-10 years and 57.3% for more than 10 years, which met the requirements of the study. In terms of education level, college and bachelor's degrees accounted for the largest proportion (62.0%), followed by high school/junior high school and vocational high school (27.3%). In terms of age, the ratio of those younger than 46 years old to those 46 and older was 59.7:40.3, with a balanced distribution among age groups. The basic demographic variables are tabulated in Table 1.

Statistical table of basic information on effective samples.

3.2. Measurements

Based on the model and hypotheses of this study, the instruments included measures of privacy invasion experiences, response costs, privacy fatigue, privacy protection intentions, and regulatory focus (promotion focus and prevention focus). The questionnaire was built on pre-validated scales. All scales were adapted to social media contexts, and all responses were rated on a Likert scale ranging from 0 (strongly disagree) to 6 (strongly agree), with higher scores indicating stronger agreement with the construct. Sub-items within each scale were averaged into composite scores.

The privacy invasion experiences scale was referenced from Su's study (Su et al., 2018). It is a 3-item self-reported scale (e.g., “My personal information, such as my phone number, shopping history, and more, has been shared by intelligent media with third-party platforms.”). The response cost scale was developed from the scale in Yoon et al. (2012), which includes three measurement questions (e.g., “When personal information security is at risk on social media, I consider that taking practical action will take too much time and effort.”). The privacy fatigue scale was derived from Choi et al. (2018), and the current study applied this 4-item scale to measure privacy fatigue on social media (e.g., “Dealing with personal information protection issues on social media makes me tired.”). The privacy protection intention scale was based on the scale developed by Liang and Xue (2010), which contains three measurement items (e.g., “When my personal information security is threatened on social media, I am willing to make efforts to protect it.”). The regulatory focus scale was derived from the original scale developed by Higgins (2002) and later adapted by Chinese scholars for use with Chinese samples (Cui et al., 2014). The scale contains six items measuring promotion focus (e.g., “For what I want to do, I can do it all well”) and four items measuring prevention focus (e.g., “While growing up, I often did things that my parents didn't agree were right”). Regulatory focus was measured by subtracting the average prevention score from the average promotion score, with higher differences indicating a greater tendency toward promotion focus and lower differences indicating a greater tendency toward prevention focus (Cui et al., 2014).
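As a concrete illustration, the composite and regulatory focus scoring just described can be sketched in a few lines of Python; the item scores below are hypothetical, not taken from the study's data:

```python
# Sketch of the Section 3.2 scoring procedure (hypothetical respondent data).
import numpy as np

def composite(scores):
    """Average a scale's sub-items into a composite score (0-6 Likert)."""
    return float(np.mean(scores))

# One hypothetical respondent (0 = strongly disagree ... 6 = strongly agree)
promotion_items = [5, 4, 5, 4, 5, 4]   # 6 promotion-focus items
prevention_items = [2, 3, 2, 3]        # 4 prevention-focus items

# Regulatory focus = mean(promotion) - mean(prevention);
# higher values indicate a stronger promotion-focus tendency.
regulatory_focus = composite(promotion_items) - composite(prevention_items)
print(regulatory_focus)  # 4.5 - 2.5 = 2.0
```

The same `composite` averaging applies to the other scales (privacy invasion experiences, response costs, privacy fatigue, and privacy protection intentions).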

3.3. Data analysis

The validity and reliability of the questionnaire were tested using Mplus 8. The PROCESS macro for SPSS was used to evaluate the moderated chain mediation model with the bootstrapping method (95% CI, 5,000 samples). Gender (1 = men, 0 = women), age, highest degree obtained, and years of Internet use were included as covariates in the model.
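The percentile-bootstrap test of the chain indirect effect (X → M1 → M2 → Y, the logic behind PROCESS Model 6) can be sketched as follows on simulated data; the variable names, effect sizes, and sample size are illustrative assumptions, not the study's estimates:

```python
# Minimal sketch of a percentile bootstrap for a chain indirect effect,
# on simulated data mimicking the hypothesized chain PIE -> PC -> PF -> PPI.
import numpy as np

rng = np.random.default_rng(0)

def ols_coefs(y, X):
    """Coefficients of y ~ 1 + X (intercept first) via least squares."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

n = 500
x = rng.normal(size=n)                        # privacy invasion experiences
m1 = 0.5 * x + rng.normal(size=n)             # response costs
m2 = 0.3 * x + 0.4 * m1 + rng.normal(size=n)  # privacy fatigue
y = 0.1 * x + 0.1 * m1 - 0.15 * m2 + rng.normal(size=n)

def chain_indirect(x, m1, m2, y):
    a1 = ols_coefs(m1, x)[1]                            # X -> M1
    d = ols_coefs(m2, np.column_stack([x, m1]))[2]      # M1 -> M2 | X
    b2 = ols_coefs(y, np.column_stack([x, m1, m2]))[3]  # M2 -> Y | X, M1
    return a1 * d * b2

# Percentile bootstrap (5,000 resamples, 95% CI), as in the analysis.
boot = np.empty(5000)
for i in range(5000):
    idx = rng.choice(n, size=n, replace=True)
    boot[i] = chain_indirect(x[idx], m1[idx], m2[idx], y[idx])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"chain indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")
# A CI that excludes zero indicates a significant chain indirect effect.
```

PROCESS additionally reports conditional indirect effects at moderator levels (Model 84); the bootstrap logic is the same, with the moderator entered into the mediator equation.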

4. Results

4.1. Measurement of the model

As shown in Table 2, the Cronbach's α and composite reliability of the scales for privacy invasion experiences, response costs, privacy fatigue, and privacy protection intentions were all higher than the acceptable value (0.70). Although the Cronbach's α values for promotion and prevention focus were slightly below 0.70, they exceeded 0.60 and were close to 0.70, which is considered permissible given the large sample size of this study; the reliability of the measurement model was therefore acceptable (Hair et al., 2019).

Results of the validity and reliability.

PIE, privacy invasion experiences; PC, response costs; PF, privacy fatigue; PPI, privacy protection intentions. Bold value is the square root of AVE.

Since the measurement instruments in this study were derived from validated scales, the average variance extracted (AVE) should ideally be higher than 0.5, but values above 0.4 can be accepted: according to Fornell and Larcker (1981), if the AVE is below 0.5 but the composite reliability is higher than 0.6, the construct's convergent validity is still acceptable. Lam (2012) also explained and confirmed this view. Discriminant validity was tested by comparing the square root of the AVE with the correlations among the research variables; the square root of the AVE was higher than the correlations, indicating good discriminant validity.
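The convergent and discriminant validity checks above follow simple formulas: AVE is the mean of the squared standardized loadings, and composite reliability (CR) is computed from the loadings and their error variances. A minimal sketch, using hypothetical loadings rather than the study's values:

```python
# Sketch of the Fornell-Larcker validity checks (hypothetical loadings).
import math

def ave(loadings):
    """Average variance extracted: mean of squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings)
    err = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + err)

loadings = [0.72, 0.68, 0.75]    # hypothetical 3-item scale
r_other = 0.41                   # hypothetical correlation with another construct

ave_val = ave(loadings)
cr_val = composite_reliability(loadings)

# Convergent validity: AVE > 0.5 ideally; AVE > 0.4 acceptable when CR > 0.6.
assert ave_val > 0.4 and cr_val > 0.6
# Discriminant validity: sqrt(AVE) should exceed inter-construct correlations.
assert math.sqrt(ave_val) > r_other
print(round(ave_val, 3), round(cr_val, 3))  # prints 0.514 0.76
```

With these hypothetical loadings, sqrt(AVE) ≈ 0.72, comfortably above the 0.41 correlation, mirroring the pattern reported for the study's scales.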

Then, we tested the goodness-of-fit indices. Confirmatory factor analysis (CFA) of the questionnaire produced acceptable fit values for the factor structure (RMSEA = 0.048 < 0.05, SRMR = 0.042 < 0.05, GFI = 0.955 > 0.9, CFI = 0.947 > 0.9, NFI = 0.943 > 0.9, and TLI = 0.945 > 0.9) after introducing the error covariances into the model. In summary, the current study passed the reliability and validity tests.

4.2. Descriptive statistics

Table 3 shows the descriptive statistics and correlation results. Response costs, privacy fatigue, and privacy protection intentions were all positively correlated with privacy invasion experiences. Privacy fatigue and privacy protection intentions were both positively correlated with response costs. Privacy fatigue was negatively related to privacy protection intentions.

Means, standard deviations, and correlations among research variables.

PIE, privacy invasion experiences; PC, response costs; PF, privacy fatigue; PPI, privacy protection intentions; RF, regulatory focus; ** p < 0.01.

4.3. Relationship between privacy invasion experience and privacy protection intentions

Table 4 shows the results of the multiple regression analysis. Privacy invasion experiences significantly influenced response costs (β = 0.466, SE = 0.023, t = 11.936, p < 0.001), privacy fatigue (β = 0.297, SE = 0.022, t = 13.722, p < 0.001), and privacy protection intentions (β = 0.133, SE = 0.011, t = 12.382, p < 0.001) after controlling for gender, highest degree obtained, age, and years of Internet use. Response costs positively predicted privacy fatigue (β = 0.382, SE = 0.013, t = 29.793, p < 0.001) and privacy protection intentions (β = 0.098, SE = 0.010, t = 9.495, p < 0.001). Privacy fatigue, in contrast, negatively predicted privacy protection intentions (β = −0.130, SE = 0.011, t = −12.303, p < 0.001). In conclusion, H1, H2a, H2b, H3a, H3b, and H4a were supported.

Multiple regression results of the moderated mediation model.

PIE, privacy invasion experiences; PC, response costs; PF, privacy fatigue; PPI, privacy protection intentions; RF, regulatory focus; * p < 0.05; ** p < 0.01; *** p < 0.001; β, unstandardized regression weight; SE, standard error for the unstandardized regression weight; t, t-test statistic; F, F-test statistic.

Then, we used Model 6 of PROCESS to test the mediating effects in our model. As shown in Table 5, H2c, H3c, and H4b were supported.

Results of mediating effect test.

PIE, privacy invasion experiences; PC, response costs; PF, privacy fatigue; PPI, privacy protection intentions.

Model 84 of the SPSS PROCESS macro was applied to carry out the bootstrapping test of the moderating effect of regulatory focus. Privacy invasion experiences, response costs, privacy fatigue, and regulatory focus were mean-centered before constructing the interaction term. The results showed that regulatory focus significantly moderated the effect of privacy invasion experiences on privacy fatigue [95% Boot CI = (0.002, 0.006)], supporting H5a. In addition, the mediating effect was significant at low [−1 SD; effect = −0.038; 95% Boot CI = (−0.046, −0.030)], medium [effect = −0.032; 95% Boot CI = (−0.039, −0.026)], and high [+1 SD; effect = −0.026; 95% Boot CI = (−0.032, −0.020)] levels of regulatory focus. Specifically, the mediating effect of privacy fatigue decreased as individuals increasingly tended toward promotion focus. However, regulatory focus did not significantly moderate the effect of privacy invasion experiences on response costs [95% Boot CI = (−0.001, 0.003)], so H5b was rejected.

Meanwhile, the privacy invasion experiences × regulatory focus interaction significantly predicted privacy fatigue (β = −0.046, SE = 0.008, t = −3.694, p < 0.001; see Figure 2). The influence of privacy invasion experiences on privacy fatigue was significant at high (β = 0.385, SE = 0.016, t = 23.981, p < 0.001), medium (β = 0.430, SE = 0.015, t = 29.415, p < 0.001), and low (β = 0.475, SE = 0.022, t = 22.061, p < 0.001) levels of regulatory focus. Specifically, the more individuals tended toward promotion focus (high regulatory focus scores), the lower the level of fatigue caused by privacy invasion; the more they tended toward prevention focus (low regulatory focus scores), the higher the level of fatigue.
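The simple-slope pattern just reported follows directly from the regression arithmetic: the conditional slope of privacy invasion experiences equals the main-effect coefficient plus the interaction coefficient times the moderator value. A small sketch, assuming an SD of 1.0 for the centered regulatory focus score (a hypothetical value that approximately reproduces the reported slopes):

```python
# Simple-slope probe for the PIE x RF interaction on privacy fatigue:
# conditional slope of PIE = b_pie + b_int * RF at RF = -1 SD, mean, +1 SD.
b_pie = 0.430   # slope of PIE on PF at the mean of RF (reported medium slope)
b_int = -0.046  # PIE x RF interaction coefficient (reported)
sd_rf = 1.0     # hypothetical SD of the centered RF score

slopes = {}
for label, rf in [("low (-1 SD)", -sd_rf), ("medium (mean)", 0.0), ("high (+1 SD)", sd_rf)]:
    slopes[label] = b_pie + b_int * rf
    print(f"{label}: simple slope = {slopes[label]:.3f}")
# low -> 0.476, medium -> 0.430, high -> 0.384
# (cf. the reported 0.475 / 0.430 / 0.385: larger slopes at lower RF,
#  i.e., stronger fatigue effects for prevention-focused users)
```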


Figure 2. Simple slope test of the interaction between PIE and RF on PF.

5. Discussion

The purpose of the present study was to explore the relationship among privacy invasion experiences, response costs, privacy fatigue, privacy protection intentions, and regulatory focus. This study showed that response costs and privacy fatigue play mediating roles, whereas regulatory focus plays a moderating role in this process (as shown in Figure 3 ). These findings help clarify how and under which circumstances social media users' privacy invasion experiences affect their privacy protection intentions, thereby providing a means to improve people's privacy situation on social media platforms.


Figure 3. The moderated chain mediation model. Dashed lines represent nonsignificant relations. ***p < 0.001.

5.1. A chain mediation of response costs and privacy fatigue

The current study found that social media users' privacy invasion experiences have a significant positive effect on response costs, and the increase in response costs in turn increases individuals' privacy protection intentions. This finding is consistent with previous literature in health psychology, which found that individuals calculate the response costs of different actions before making decisions: the higher the perceived response costs, the greater the possibility that individuals will strengthen their protective intention (Jones and Leary, 1994; Wichstrom, 1994). Compared with users who have experienced less privacy invasion on social media, those who have experienced more violations perceive a higher level of response costs, which further increases their protective intention to avoid dealing with the negative outcomes that follow privacy invasion.

The study also found that social media users' privacy invasion experiences had a significant positive effect on privacy fatigue, which is consistent with prior research on social media use (Xiao and Mou, 2019; Sheng et al., 2022). Response costs also positively affected privacy fatigue, an influence mechanism indicated by past research on social media fatigue behaviors (Zhang et al., 2022); in addition, this study found that response costs partially mediated the effect of privacy invasion experiences on privacy fatigue. Although both increased privacy invasion experiences and increased response costs improve social media users' privacy protection intentions, privacy fatigue can mask this process, i.e., increased privacy fatigue reduces individuals' privacy protection intentions.

Moreover, this study revealed that response costs and privacy fatigue play chain-mediating roles in the effect of social media privacy invasion experiences on privacy protection intentions, further explaining the mechanism. The masking effect of privacy fatigue also explains why privacy invasion experiences do not have a strong overall effect on privacy protection intentions; in other words, privacy fatigue is an important reason why people currently “lie flat” (adopt passive protection) in the face of privacy-invasive issues online.

5.2. Regulatory focus as moderator

The relationship between social media privacy invasion experiences and privacy fatigue was moderated by regulatory focus. Specifically, the more promotion-focused individuals were, the less privacy fatigue they felt; the more prevention-focused they were, the more privacy fatigue they felt. In other words, promotion focus acts as a buffer in this process. To some extent, this result verifies that individuals with different regulatory orientations sense different levels of fatigue because they pursue benefits or avoid risks when making decisions (Boksem and Tops, 2008; Jin, 2012). On the other hand, regulatory focus did not moderate the relationship between privacy invasion experiences and response costs. One possible explanation is that, compared with privacy fatigue, response costs to privacy violations are based on concrete experiences in users' memories: individuals who have had more privacy invasions have more experience dealing with their negative consequences. Thus, regardless of psychological traits, the effect of privacy-invasive experiences on response costs is neither strengthened nor weakened.

Meanwhile, this study supported a moderated mediation model in which regulatory focus moderates the mediation path “privacy invasion experiences → privacy fatigue → privacy protection intentions.” The results indicated that privacy invasion experiences affect individuals' privacy protection intentions through the mediating role of privacy fatigue; specifically, the more individuals tend toward prevention focus, the stronger their privacy fatigue and the weaker their privacy protection intentions. Therefore, interventions for privacy fatigue (e.g., improving media literacy, creating a better online environment, and more) can be used to enhance social media users' privacy protection intentions (Bucher et al., 2013; Agozie and Kaya, 2021). In particular, it is crucial to focus on social media users who tend to be prevention focused.

5.3. Implication

From a theoretical perspective, our study identified a mechanism influencing privacy-protective behavior based on an extension of protection motivation theory. PMT is a fear-based theory, and we treated experiences of social media privacy invasion as a source of fear. On this basis, we found that these experiences were associated with individuals' privacy protection intentions and explained the mechanism through the mediating variable of response costs, which is consistent with previous findings (Chen et al., 2016).

More importantly, in response to previous researchers' argument that traditional protection motivation theory ignores emotional factors (Mousavi et al., 2020), our study extended the theory to include privacy fatigue and verified that fatigue significantly reduces social media users' privacy protection intentions. The introduction of “privacy fatigue” better explains why occasional privacy invasion experiences do not trigger privacy-protective behaviors, offering another possible explanation for the privacy paradox in addition to traditional privacy calculus theory. It also encourages researchers to pay attention to individual emotions in privacy research. This study further compared differences in privacy protection intentions among social media users of different regulatory focus types, which are mainly caused by fatigue rather than response costs. By combining privacy fatigue and regulatory focus, it was found that not all users felt the same level of privacy fatigue after experiencing privacy invasion. This study thus expanded the application of both privacy fatigue and regulatory focus theories and built a bridge between online privacy research and regulatory focus theory.

In addition to these implications for research and theory, the findings have useful practical implications. First, they call for measures to reduce privacy invasion on social media. (a) Reducing privacy violations at their root requires improving the current online privacy environment on social media platforms. We call on the government to strengthen the regulation of online privacy and on social media platforms to reinforce the protection of users' privacy, ensuring that users' personal information is not misused. (b) From the social media agent perspective, related studies have noted that the content relevance perceived by online users can mitigate the negative relationship between privacy invasion and continuous use intention (Zhu and Chang, 2016). Social media agents should therefore use qualified personal information more efficiently, giving users a smoother experience on online platforms.

Second, the results show that privacy fatigue affects users' privacy protection intentions. (c) According to Choi et al. (2018), users have a tolerance threshold for privacy fatigue, so policies should set privacy-protection requirements at a level users find acceptable. Other scholars have suggested that online service providers should avoid excessive or unnecessary collection of personal information and strictly forbid sharing or selling users' personal information to any third party without their permission (Tang et al., 2021). (d) Another effective approach is to lower response costs, that is, the costs of protecting one's privacy; for example, social media platforms can optimize privacy interfaces and management tools or provide more effective feedback mechanisms for users. (e) In addition, improving users' privacy literacy (especially for prevention-focused individuals) can also reduce privacy fatigue (Bucher et al., 2013).

Finally, different measures should be applied to users with different regulatory focuses. (f) Social media managers could classify users into groups based on their psychological characteristics and manage them according to the level of privacy protection they require, giving social media users a wider range of choices. Specifically, because prevention-focused individuals tend to feel more privacy fatigue after privacy invasion experiences, additional privacy protection features should be offered to them. For example, social media platforms could provide specific explanations of privacy protection technologies to increase prevention-focused individuals' trust in those technologies.

5.4. Limitations and future directions

There are some limitations to this article. First, this study selected only response costs as individuals' cognitive process, whereas protection motivation theory also includes threat appraisal in the cognitive process, which concerns the potential outcomes of risky behaviors, including perceived vulnerability, perceived severity of the risk, and rewards associated with risky behavior (Prentice-Dunn et al., 2009). Future studies could systematically consider the associations between these factors and privacy protection intentions. Second, users' perceptions of privacy invasion differ across social media platforms (e.g., Instagram and Facebook), and this study applies only to a generalized social media context. Future research could pay more attention to differences among users on different social media platforms (with different functions). Finally, this study did not focus on specific privacy invasion experiences, yet studies have pointed out that different types of privacy invasion affect people differently. Moreover, people with different demographic backgrounds, such as cultural background and gender, react differently when faced with the same situation (Klein and Helweg-Larsen, 2002). Future research can investigate this in more depth through experiments.

6. Conclusion

In conclusion, our findings suggest that social media privacy invasion experiences increase individuals' privacy protection intentions by increasing their response costs, but the accompanying increase in privacy fatigue masks this effect. Privacy fatigue is a barrier to increasing social media users' willingness to protect their privacy, which explains why users do not seem to show stronger willingness to protect their privacy even as privacy invasion becomes a growing problem on social networks. Our study also revealed that individuals with different regulatory focuses exhibit different levels of fatigue when faced with the same level of privacy invasion experience; in particular, prevention-focused social media users are more likely to become fatigued. Social media agents should therefore pay special attention to these individuals, who may be particularly vulnerable to privacy violations. Furthermore, research on privacy fatigue has yet to be expanded, and future researchers can add to it.

Our theoretical analysis and empirical results further emphasize the distinction between individuals, a differentiation that allows researchers to align their analyses with theoretical hypotheses more tightly. This applies not only to research on the effects of privacy invasion experiences on privacy behavior but also to exploring other privacy topics. Therefore, we recommend that future privacy research be more human-oriented, which will also benefit the current “hierarchical governance” of the Internet privacy issue.

Data availability statement

Ethics statement

This study was approved by the Academic Committee of the School of Journalism and Communication at Xiamen University, and we carefully verified that we complied strictly with the ethical guidelines.

Author contributions

CG is responsible for the overall research design, thesis writing, collation of the questionnaire, and data analysis. SC and ML are responsible for the guidance. JW is responsible for the proofreading and article touch-up. All authors contributed to the article and approved the submitted version.

Acknowledgments

The authors thank all the participants of this study. The participants were all informed about the purpose and content of the study and voluntarily agreed to participate. The participants were able to stop participating at any time without penalty.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyg.2022.1031592/full#supplementary-material

  • Agozie D. Q., Kaya T. (2021). Discerning the effect of privacy information transparency on privacy fatigue in e-government. Govern. Inf. Q. 38, 101601. 10.1016/j.giq.2021.101601
  • Baby A., Kannammal A. (2020). Network Path Analysis for developing an enhanced TAM model: a user-centric e-learning perspective. Comput. Hum. Behav. 107, 24. 10.1016/j.chb.2019.07.024
  • Bansal G., Zahedi F. M., Gefen D. (2010). The impact of personal dispositions on information sensitivity, privacy concern and trust in disclosing health information online. Decision Support Syst. 49, 138–150. 10.1016/j.dss.2010.01.010
  • Baruh L., Secinti E., Cemalcilar Z. (2017). Online privacy concerns and privacy management: a meta-analytical review. J. Commun. 67, 26–53. 10.1111/jcom.12276
  • Belanger F., Crossler R. E. (2019). Dealing with digital traces: understanding protective behaviors on mobile devices. J. Strat. Inf. Syst. 28, 34–49. 10.1016/j.jsis.2018.11.002
  • Benbasat I., Dexter A. S. (1982). Individual differences in the use of decision support aids. J. Account. Res. 20, 1–11. 10.2307/2490759
  • Boksem M. A. S., Tops M. (2008). Mental fatigue: costs and benefits. Brain Res. Rev. 59, 125–139. 10.1016/j.brainresrev.2008.07.001
  • Bucher E., Fieseler C., Suphan A. (2013). The stress potential of social media in the workplace. Inf. Commun. Soc. 16, 1639–1667. 10.1080/1369118X.2012.710245
  • Chen H., Beaudoin C. E., Hong T. (2015). Teen online information disclosure: empirical testing of a protection motivation and social capital model. J. Assoc. Inf. Sci. Technol. 67, 2871–2881. 10.1002/asi.23567
  • Chen H., Beaudoin C. E., Hong T. (2016). Protecting oneself online: the effects of negative privacy experiences on privacy protective behaviors. J. Mass Commun. Q. 93, 409–429. 10.1177/1077699016640224
  • Cho H., Roh S., Park B. (2019). Of promoting networking and protecting privacy: effects of defaults and regulatory focus on social media users' preference settings. Comput. Hum. Behav. 101, 1–13. 10.1016/j.chb.2019.07.001
  • Choi H., Park J., Jung Y. (2018). The role of privacy fatigue in online privacy behavior. Comput. Hum. Behav. 81, 42–51. 10.1016/j.chb.2017.12.001
  • Cui Q., Yin C. Y., Lu H. L. (2014). The reaction of consumers to others' assessments under different social distance. Chin. J. Manage. 11, 1396–1402.
  • Dhir A., Kaur P., Chen S., Pallesen S. (2019). Antecedents and consequences of social media fatigue. Int. J. Inf. Manage. 8, 193–202. 10.1016/j.ijinfomgt.2019.05.021
  • Floyd D. L., Prentice-Dunn S., Rogers R. W. (2000). A meta-analysis of research on protection motivation theory. J. Appl. Soc. Psychol. 30, 407–429. 10.1111/j.1559-1816.2000.tb02323.x
  • Fornell C., Larcker D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. J. Market. Res. 18, 39–50. 10.1177/002224378101800104
  • Friestad M., Thorson E. (1985). The Role of Emotion in Memory for Television Commercials. Washington, DC: Educational Resources Information Center.
  • Furnell S., Thomson K. L. (2009). Recognizing and addressing "security fatigue". Comput. Fraud Secur. 11, 7–11. 10.1016/S1361-3723(09)70139-3
  • Hair J. F., Ringle C. M., Gudergan S. P. (2019). Partial least squares structural equation modeling-based discrete choice modeling: an illustration in modeling retailer choice. Bus. Res. 12, 115–142. 10.1007/s40685-018-0072-4
  • Hargittai E., Marwick A. (2016). "What can I really do?" Explaining the privacy paradox with online apathy. Int. J. Commun. 10, 21. 1932–8036/20160005.
  • Higgins E. T. (1997). Beyond pleasure and pain. Am. Psychol. 52, 1280–1300. 10.1037/0003-066X.52.12.1280
  • Higgins E. T. (2002). How self-regulation creates distinct values: the case of promotion and prevention decision making. J. Consum. Psychol. 12, 177–191. 10.1207/S15327663JCP1203_01
  • Hsu C. L., Park S. J., Park H. W. (2013). Political discourse among key Twitter users: the case of Sejong city in South Korea. J. Contemp. Eastern Asia 12, 65–79. 10.17477/jcea.2013.12.1.065
  • Jin S. A. A. (2012). To disclose or not to disclose, that is the question: a structural equation modeling approach to communication privacy management in e-health. Comput. Hum. Behav. 28, 69–77. 10.1016/j.chb.2011.08.012
  • Jones J. L., Leary M. R. (1994). Effects of appearance-based admonitions against sun exposure on tanning intentions in young-adults. Health Psychol. 13, 86–90. 10.1037/0278-6133.13.1.86
  • Juhee K., Eric J. (2018). The Market Effect of Healthcare Security: Do Patients Care About Data Breaches? Available online at: https://www.econinfosec.org/archive/weis2015/papers/WEIS_2015_kwon.pdf (accessed October 30, 2018).
  • Keith M. J., Maynes C., Lowry P. B., Babb J. (2014). "Privacy fatigue: the effect of privacy control complexity on consumer electronic information disclosure," in International Conference on Information Systems (ICIS 2014), Auckland, 14–17.
  • Kisekka V., Giboney J. S. (2018). The effectiveness of health care information technologies: evaluation of trust, security beliefs, and privacy as determinants of health care outcomes. J. Med. Int. Res. 20, 9014. 10.2196/jmir.9014
  • Klein C. T., Helweg-Larsen M. (2002). Perceived control and the optimistic bias: a meta-analytic review. Psychol. Health 17, 437–446. 10.1080/0887044022000004920
  • Lai Y. L., Hui K. L. (2006). "Internet opt-in and opt-out: investigating the roles of frames, defaults and privacy concerns," in Proceedings of the 2006 ACM SIGMIS CPR Conference on Computer Personnel Research. New York, NY: ACM, 253–263.
  • Lam L. W. (2012). Impact of competitiveness on salespeople's commitment and performance. J. Bus. Res. 65, 1328–1334. 10.1016/j.jbusres.2011.10.026
  • Li H., Wu J., Gao Y., Shi Y. (2016). Examining individuals' adoption of healthcare wearable devices: an empirical study from privacy calculus perspective. Int. J. Med. Inf. 88, 8–17. 10.1016/j.ijmedinf.2015.12.010
  • Li P., Cho H., Goh Z. H. (2019). Unpacking the process of privacy management and self-disclosure from the perspectives of regulatory focus and privacy calculus. Telematic. Inf. 41, 114–125. 10.1016/j.tele.2019.04.006
  • Li X. (2008). Third-person effect, optimistic bias, and sufficiency resource in Internet use. J. Commun. 58, 568–587. 10.1111/j.1460-2466.2008.00400.x
  • Liang H., Xue Y. L. (2010). Understanding security behaviors in personal computer usage: a threat avoidance perspective. J. Assoc. Inf. Syst. 11, 394–413. 10.17705/1jais.00232
  • Mao H., Bao T., Shen X., Li Q., Seluzicki C., Im E. O., et al. (2018). Prevalence and risk factors for fatigue among breast cancer survivors on aromatase inhibitors. Eur. J. Cancer 101, 47–54. 10.1016/j.ejca.2018.06.009
  • McLeod A., Dolezel D. (2022). Information security policy non-compliance: can capitulation theory explain user behaviors? Comput. Secur. 112, 102526. 10.1016/j.cose.2021.102526
  • Mohamed N., Ahmad I. H. (2012). Information privacy concerns, antecedents and privacy measure use in social networking sites: evidence from Malaysia. Comput. Hum. Behav. 28, 2366–2375. 10.1016/j.chb.2012.07.008
  • Mousavi R., Chen R., Kim D. J., Chen K. (2020). Effectiveness of privacy assurance mechanisms in users' privacy protection on social networking sites from the perspective of protection motivation theory. Decision Supp. Syst. 135, 113323. 10.1016/j.dss.2020.113323
  • Oh J., Lee U., Lee K. (2019). Privacy fatigue in the internet of things (IoT) environment. INPRA 6, 21–34.
  • Okazaki S., Li H., Hirose M. (2009). Consumer privacy concerns and preference for degree of regulatory control. J. Adv. 38, 63–77. 10.2753/JOA0091-3367380405
  • Ong A. D., Bergeman C. S., Bisconti T. L., Wallace K. A. (2006). Psychological resilience, positive emotions, and successful adaptation to stress in later life. J. Pers. Soc. Psychol. 91, 730. 10.1037/0022-3514.91.4.730
  • Pavlou P. A., Gefen D. (2005). Psychological contract violation in online marketplaces: antecedents, consequences, and moderating role. Inf. Syst. Res. 16, 372–399. 10.1287/isre.1050.0065
  • Petronio S. (2002). Boundaries of Privacy: Dialectics of Disclosure. Albany, NY: State University of New York Press.
  • Piper B. F., Lindsey A. M., Dodd M. J. (1987). Fatigue mechanisms in cancer patients: developing nursing theory. Oncol. Nurs. Forum 14, 17.
  • Prentice-Dunn S., Mcmath B. F., Cramer R. J. (2009). Protection motivation theory and stages of change in sun protective behavior. J. Health Psychol. 14, 297–305. 10.1177/1359105308100214
  • Price B. A., Adam K., Nuseibeh B. (2005). Keeping ubiquitous computing to yourself: a practical model for user control of privacy. Int. J. Hum. Comput. Stu. 63, 228–253. 10.1016/j.ijhcs.2005.04.008
  • Ravindran T., Yeow Kuan A. C., Hoe Lian D. G. (2014). Antecedents and effects of social network fatigue. J. Assoc. Inf. Sci. Technol. 65, 2306–2320. 10.1002/asi.23122
  • Revell T. (2019). Facebook Must Come Clean and Hand Over Election Campaign Data. New Scientist. Available online at: https://www.newscientist.com/article/mg24332472-300-face-book-must-come-clean-and-hand-over-election-campaign-data/ (accessed September 11, 2019).
  • Rogers R. W. (1975). A protection motivation theory of fear appeals and attitude change. J. Psychol. 91, 93–114. 10.1080/00223980.1975.9915803
  • Sheng N., Yang C., Han L., Jou M. (2022). Too much overload and concerns: antecedents of social media fatigue and the mediating role of emotional exhaustion. Comput. Hum. Behav. 139, 107500. 10.1016/j.chb.2022.107500
  • Su P., Wang L., Yan J. (2018). How users' internet experience affects the adoption of mobile payment: a mediation model. Technol. Anal. Strat. Manage. 30, 186–197. 10.1080/09537325.2017.1297788
  • Tang J., Akram U., Shi W. (2021). Why people need privacy? The role of privacy fatigue in app users' intention to disclose privacy: based on personality traits. J. Ent. Inf. Manage. 34, 1097–1120. 10.1108/JEIM-03-2020-0088
  • Tanner J. F., Hunt J. B., Eppright D. R. (1991). The protection motivation model: a normative model of fear appeals. J. Market. 55, 36–45. 10.1177/002224299105500304
  • Tian X., Chen L., Zhang X. (2022). The role of privacy fatigue in privacy paradox: a PSM and heterogeneity analysis. Appl. Sci. 12, 9702. 10.3390/app12199702
  • Tversky A., Kahneman D. (1974). Judgement under uncertainty: heuristics and biases. Science 185, 1124–1131. 10.1126/science.185.4157.1124
  • Wang L., Yan J., Lin J., Cui W. (2017). Let the users tell the truth: self-disclosure intention and self-disclosure honesty in mobile social networking. Int. J. Inf. Manage. 37, 1428–1440. 10.1016/j.ijinfomgt.2016.10.006
  • Wichstrom L. (1994). Predictors of Norwegian adolescents sunbathing and use of sunscreen. Health Psychol. 13, 412–420. 10.1037/0278-6133.13.5.412
  • Wirtz J., Lwin M. O. (2009). Regulatory focus theory, trust, and privacy concern. J. Serv. Res. 12, 190–207. 10.1177/1094670509335772
  • Wu Z., Xie J., Lian X., Pan J. (2019). A privacy protection approach for XML-based archives management in a cloud environment. Electr. Lib. 37, 970–983. 10.1108/EL-05-2019-0127
  • Xiao L., Mou J. (2019). Social media fatigue - technological antecedents and the moderating roles of personality traits: the case of WeChat. Comput. Hum. Behav. 101, 297–310. 10.1016/j.chb.2019.08.001
  • Xu F., Michael K., Chen X. (2013). Factors affecting privacy disclosure on social network sites: an integrated model. Electr. Comm. Res. 13, 151–168. 10.1007/s10660-013-9111-6
  • Yoon C., Hwang J. W., Kim R. (2012). Exploring factors that influence students' behaviors in information security. J. Inf. Syst. Educ. 23, 407–415.
  • Youn S., Kim S. (2019). Newsfeed native advertising on Facebook: young millennials' knowledge, pet peeves, reactance and ad avoidance. Int. J. Adv. 38, 651–683. 10.1080/02650487.2019.1575109
  • Zhang Y., He W., Peng L. (2022). How perceived pressure affects users' social media fatigue behavior: a case on WeChat. J. Comput. Inf. Syst. 62, 337–348. 10.1080/08874417.2020.1824596
  • Zhou Y., Schaub F. (2018). "Concern but no action: consumers' reactions to the Equifax data breach," in Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, 22–26.
  • Zhu Y. Q., Chang J. H. (2016). The key role of relevance in personalized advertisement: examining its impact on perceptions of privacy invasion, self-awareness, and continuous use intentions. Comput. Hum. Behav. 65, 442–447. 10.1016/j.chb.2016.08.048
