Anonymous and Non-anonymous User Behavior on Social Media: A Case Study of Jodel and Instagram

2018, Journal of Information Science Theory and Practice

Anonymity plays an increasingly important role on social media. This is reflected by more and more applications enabling anonymous interactions. However, do social media users behave differently when they are anonymous? In our research, we investigated social media services meant for solely anonymous use (Jodel) and for widely spread non-anonymous sharing of pictures and videos (Instagram). This study examines the impact of anonymity on the behavior of users on Jodel compared to their non-anonymous use of Instagram as well as the differences between the user types: producer, consumer, and participant. Our approach is based on the uses and gratifications theory (U&GT) by E. Katz, specifically on the sought gratifications (motivations) of self-presentation, information, socialization, and entertainment. Since Jodel is mostly used in Germany, we developed an online survey in German. The questions addressed the three different user types and were subdivided according to the four motivations …

Related Papers

Information, Communication and Society

Atte Oksanen

The dominance of computer-mediated communication in the online relational landscape continues to affect millions of users; yet, few studies have identified and analyzed characteristics shared by those specifically valuing its anonymous aspects for self-expression. This article identifies and investigates key characteristics of such users through an online survey of two samples of Finnish users of social networking sites aged 15–30 years (n = 1013; 544). Various characteristics espoused by those especially valuing anonymity for self-expression online were identified and analyzed in relation to the users in question. Favoring anonymity was positively correlated with both grandiosity, a component of narcissism, and low self-esteem. In addition, users with a stronger anonymity preference tended to be younger and highly trusting, with strong ties to online communities but few offline friends. The findings emphasize the significance of a deeper understanding of how anonymity affects and attracts users seeking its benefits, while also providing new insights into how user characteristics interact depending on motivation.

Anna Alexandra Ndegwa

Christina Frederick

This research presents several aspects of anonymous social media postings using an anonymous social media application (i.e., Yik Yak) that is GPS-linked to college campuses. Anonymous social media has been widely criticized for postings containing threats/harassment, vulgarity and suicidal intentions. However, little research has empirically examined the content of anonymous social media postings, and whether they contain a large quantity of negative social content. To best understand this phenomenon, an analysis of the content of anonymous social media posts was conducted in accordance with Deindividuation Theory (Reicher, Spears, & Postmes, 1995). Deindividuation Theory predicts that group behavior is congruent with group norms. Therefore, if a group norm is antisocial in nature, then so too will be group behavior. In other words, individuals relinquish their individual identity to a group identity while they are part of that group. Since the application used in this study is limited to ...

Loyiso C Ngcongo

This research is aimed at exploring the uses and gratifications of social media by the Erand Court residents in Midrand, Johannesburg, Gauteng. Social media is used amongst friends and loved ones as another form of communication that provides instantaneous feedback, and it is sometimes used to strengthen relationships, both personally and professionally. As a result, the evolution of social media and its trends has transformed communication tremendously. In the case of the Erand Court complex in Midrand, the main stakeholders consist of residents, property owners, the estate agency, the body corporate, the municipality and so forth. The Erand Court body corporate has the responsibility to keep these stakeholders informed of its activities and to receive feedback from them on the services they receive. In addition, the residents in particular have family-like living arrangements and vibrant lifestyles that largely revolve around circles of friends. Social media is therefore all the more important to them when they want to keep up to date with their friends and with the lives of their loved ones and colleagues. The Erand Court complex is not yet exposed to alternative forms of communication and still relies on traditional forms, such as sending out letters to convey key messages. It was therefore important to investigate which social media the residents are using, and for what reasons, in order for the body corporate to develop an alternative form of communication that would provide instant feedback. To achieve this, the body corporate has to ensure that the communication methods used are able to keep conversations going between the residents and other key stakeholders. This research study is aimed at exploring the uses and gratifications of social media by the tenants in order to help the body corporate find a better way to communicate with stakeholders and ensure that messages are delivered on time, with feedback coming instantly from all residents who have concerns or suggestions about issues in the complex. Social media is often taken for granted without recognition of the power it has to influence opinion or circulate news; hence it is vital for any organization or area of business to have social media as part of its communication strategy. After all, communication is by nature a two-way symmetrical activity in which the message sender is able to get feedback swiftly, and vice versa; social media helps communication participants achieve this amicably. However, social media users have the responsibility to educate themselves about the platforms they choose to use, including their advantages and disadvantages. In some cases social media has become a threat to society, for example by promoting hate speech and violence. This is a challenge many people face because they choose not to educate themselves, and this study also explains how people feel about having social media privacy settings.

kandie joseph

Nicole Ellison

Online Journal of Communication and Media Technologies

Neslihan Özmelek Taş

Proceedings of HCI Korea 2013

Tasos Spiliotopoulos

Social networks are commonplace tools, accessed every day by millions of users. However, despite their popularity, we still know relatively little about why they are so appealing and what specifically they are used for - what novel needs they meet. This paper presents two studies that extend work on applying Uses and Gratification theory to answer such questions. The first study explores motivations for using a content community on a social network service – a music video sharing group on Facebook. In a two-stage experiment, 20 users generated words or phrases to describe how they used the group, and what they enjoyed about their use. These phrases were coded into 34 questionnaire items that were then completed by 57 new participants. Factor analysis on this data revealed four gratifications: contribution, discovery, social interaction and entertainment. These factors are interpreted and discussed. The second study explores the links between motives for using a social network service and numerical measures of that activity. Specifically, it identified motives for Facebook use and then investigated the extent to which these motives can be predicted through a range of usage and network metrics collected automatically via the Facebook API. Results showed that all three variable types in this expanded U&G frame of analysis (covering social antecedents, usage metrics, and personal network metrics) effectively predicted motives and highlighted interesting behaviours. Together these studies extend the methodological framework of Uses and Gratification theory and show how it can be effectively used to better understand and appreciate the complexity of online social behaviors.

Visual Studies

Elisa Serafinelli

Liberato Camilleri

A phone screen with the Sendit app opened in the app store

Sendit, Yolo, NGL: anonymous social apps are taking over once more, but they aren’t without risks

Alexia Maddox, Research Fellow, Blockchain Innovation Hub, RMIT University

Disclosure statement

Alexia Maddox does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Have you ever told a stranger a secret about yourself online? Did you feel a certain kind of freedom doing so, specifically because the context was removed from your everyday life? Personal disclosure and anonymity have long been a potent mix laced through our online interactions.

We’ve recently seen this through the resurgence of anonymous question apps targeting young people, including Sendit and NGL (which stands for “not gonna lie”). The latter has been installed 15 million times globally, according to recent reports.

These apps can be linked to users’ Instagram and Snapchat accounts, allowing them to post questions and receive anonymous answers from followers.

Although they’re trending at the moment, it’s not the first time we’ve seen them. Early examples include ASKfm, launched in 2010, and Spring.me, launched in 2009 (as “Formspring”).

These platforms have a troublesome history. As a sociologist of technology, I’ve studied human-technology encounters in contentious environments. Here’s my take on why anonymous question apps have once again taken the internet by storm, and what their impact might be.

A series of screens advertising various features of the 'NGL' app.

Why are they so popular?

We know teens are drawn to social platforms. These networks connect them with their peers, support their journeys towards forming identity, and provide them space for experimentation, creativity and bonding.

We also know they manage online disclosures of their identity and personal life through a technique sociologists call “audience segregation”, or “code switching”. This means they’re likely to present themselves differently online to their parents than they are to their peers.

Digital cultures have long used online anonymity to separate real-world identities from online personas, both for privacy and in response to online surveillance. And research has shown online anonymity enhances self-disclosure and honesty.

For young people, having online spaces to express themselves away from the adult gaze is important. Anonymous question apps provide this space. They promise to offer the very things young people seek: opportunities for self-expression and authentic encounters.

Risky by design

We now have a generation of kids growing up with the internet. On one hand, young people are hailed as pioneers of the digital age – and on the other, we fear for them as its innocent victims.

A recent TechCrunch article chronicled the rapid uptake of anonymous question apps by young users, and raised concerns about transparency and safety.

NGL exploded in popularity this year, but hasn’t solved the issue of hate speech and bullying. Anonymous chat app Yik Yak was shut down in 2017 after becoming littered with hateful speech – but has since returned.

A screenshot of a Tweet from @Mistaaaman

These apps are designed to hook users in. They leverage certain platform principles to provide a highly engaging experience, such as interactivity and gamification (wherein a form of “play” is introduced into non-gaming platforms).

Also, given their experimental nature, they’re a good example of how social media platforms have historically been developed with a “move fast and break things” attitude. This approach, first articulated by Meta CEO Mark Zuckerberg, has arguably reached its use-by date.

Breaking things in real life is not without consequence. Similarly, breaking away from important safeguards online is not without social consequence. Rapidly developed social apps can have harmful consequences for young people, including cyberbullying, cyber dating abuse, image-based abuse and even online grooming.

In May 2021, Snapchat suspended integrated anonymous messaging apps Yolo and LMK, after being sued by the distraught parents of teens who committed suicide after being bullied through the apps.

Yolo’s developers overestimated the capacity of their automated content moderation to identify harmful messages.

In the wake of these suspensions, Sendit soared through the app store charts as Snapchat users sought a replacement.

Snapchat then banned anonymous messaging from third-party apps in March this year, in a bid to limit bullying and harassment. Yet it appears Sendit can still be linked to Snapchat as a third-party app, so the implementation conditions are variable.

Are kids being manipulated by chatbots?

It also seems these apps may feature automated chatbots parading as anonymous responders to prompt interactions – or at least that’s what staff at TechCrunch found.

Although chatbots can be harmless (or even helpful), problems arise if users can’t tell whether they’re interacting with a bot or a person. At the very least it’s likely the apps are not effectively screening bots out of conversations.

Users can’t do much either. If responses are anonymous (and don’t even have a profile or post history linked to them), there’s no way to know if they’re communicating with a real person or not.

It’s difficult to confirm whether bots are widespread on anonymous question apps, but we’ve seen them cause huge problems on other platforms – opening avenues for deception and exploitation.

For example, in the case of Ashley Madison, a dating and hook-up platform that was hacked in 2015, bots were used to chat with human users to keep them engaged. These bots used fake profiles created by Ashley Madison employees.

Read more: 'Anorexia coach': sexual predators online are targeting teens wanting to lose weight. Platforms are looking the other way

What can we do?

Despite all of the above, some research has found many of the risks teens experience online result in only brief negative effects, if any. This suggests we may be overemphasising the risks young people face online.

At the same time, implementing parental controls to mitigate online risk is often in tension with young people’s digital rights.

So the way forward isn’t simple. And just banning anonymous question apps isn’t the solution.

Rather than avoid anonymous online spaces, we’ll need to trudge through them together – all the while demanding as much accountability and transparency from tech companies as we can.

For parents, there are some useful resources on how to help children and teens navigate tricky online environments in a sensible way.

Read more: Ending online anonymity won't make social media less toxic

  • Social media
  • Online anonymity
  • Anonymous comments
  • Online abuse
  • Online harassment
  • Online hate
  • Online bullying
  • Online harm
  • Young people and technology

  • Open access
  • Published: 20 March 2024

Persistent interaction patterns across social media platforms and over time

Michele Avalle, Niccolò Di Marco, Gabriele Etta, Emanuele Sangiorgio, Shayan Alipour, Anita Bonetti, Lorenzo Alvisi, Antonio Scala, Andrea Baronchelli, Matteo Cinelli & Walter Quattrociocchi

Nature (2024)

  • Mathematics and computing
  • Social sciences

Growing concern surrounds the impact of social media platforms on public discourse 1 , 2 , 3 , 4 and their influence on social dynamics 5 , 6 , 7 , 8 , 9 , especially in the context of toxicity 10 , 11 , 12 . Here, to better understand these phenomena, we use a comparative approach to isolate human behavioural patterns across multiple social media platforms. In particular, we analyse conversations in different online communities, focusing on identifying consistent patterns of toxic content. Drawing from an extensive dataset that spans eight platforms over 34 years—from Usenet to contemporary social media—our findings show consistent conversation patterns and user behaviour, irrespective of the platform, topic or time. Notably, although long conversations consistently exhibit higher toxicity, toxic language does not invariably discourage people from participating in a conversation, and toxicity does not necessarily escalate as discussions evolve. Our analysis suggests that debates and contrasting sentiments among users significantly contribute to more intense and hostile discussions. Moreover, the persistence of these patterns across three decades, despite changes in platforms and societal norms, underscores the pivotal role of human behaviour in shaping online discourse.

The advent and proliferation of social media platforms have not only transformed the landscape of online participation 2 but have also become integral to our daily lives, serving as primary sources for information, entertainment and personal communication 13 , 14 . Although these platforms offer unprecedented connectivity and information exchange opportunities, they also present challenges by entangling their business models with complex social dynamics, raising substantial concerns about their broader impact on society. Previous research has extensively addressed issues such as polarization, misinformation and antisocial behaviours in online spaces 5 , 7 , 12 , 15 , 16 , 17 , revealing the multifaceted nature of social media’s influence on public discourse. However, a considerable challenge in understanding how these platforms might influence inherent human behaviours lies in the general lack of accessible data 18 . Even when researchers obtain data through special agreements with companies like Meta, it may not be enough to clearly distinguish between inherent human behaviours and the effects of the platform’s design 3 , 4 , 8 , 9 . This difficulty arises because the data, deeply embedded in platform interactions, complicate separating intrinsic human behaviour from the influences exerted by the platform’s design and algorithms.

Here we address this challenge by focusing on toxicity, one of the most prominent aspects of concern in online conversations. We use a comparative analysis to uncover consistent patterns across diverse social media platforms and timeframes, aiming to shed light on toxicity dynamics across various digital environments. In particular, our goal is to gain insights into inherently invariant human patterns of online conversations.

The lack of non-verbal cues and physical presence on the web can contribute to increased incivility in online discussions compared with face-to-face interactions 19 . This trend is especially pronounced in online arenas such as newspaper comment sections and political discussions, where exchanges may degenerate into offensive comments or mockery, undermining the potential for productive and democratic debate 20 , 21 . When exposed to such uncivil language, users are more likely to interpret these messages as hostile, influencing their judgement and leading them to form opinions based on their beliefs rather than on the information presented; this may foster polarized perspectives, especially among groups with differing values 22 . Indeed, there is a natural tendency for online users to seek out and align with information that echoes their pre-existing beliefs, often ignoring contrasting views 6 , 23 . This behaviour may result in the creation of echo chambers, in which like-minded individuals congregate and mutually reinforce shared narratives 5 , 24 , 25 . These echo chambers, along with increased polarization, vary in their prevalence and intensity across different social media platforms 1 , suggesting that the design and algorithms of these platforms, intended to maximize user engagement, can substantially shape online social dynamics. This focus on engagement can inadvertently highlight certain behaviours, making it challenging to differentiate between organic user interaction and the influence of the platform’s design.

A substantial portion of current research is devoted to examining harmful language on social media and its wider effects, online and offline 10 , 26 . This examination is crucial, as it reveals how social media may reflect and amplify societal issues, including the deterioration of public discourse. The growing interest in analysing online toxicity through massive data analysis coincides with advancements in machine learning capable of detecting toxic language 27 . Although numerous studies have focused on online toxicity, most concentrate on specific platforms and topics 28 , 29 . Broader, multiplatform studies are still limited in scale and reach 12 , 30 . Research fragmentation complicates understanding whether perceptions about online toxicity are accurate or misconceptions 31 . Key questions include whether online discussions are inherently toxic and how toxic and non-toxic conversations differ. Clarifying these dynamics and how they have evolved over time is crucial for developing effective strategies and policies to mitigate online toxicity.

Our study involves a comparative analysis of online conversations, focusing on three dimensions: time, platform and topic. We examine conversations from eight different platforms, totalling about 500 million comments. For our analysis, we adopt the toxicity definition provided by the Perspective API, a state-of-the-art classifier for the automatic detection of toxic speech. This API considers toxicity as “a rude, disrespectful or unreasonable comment likely to make someone leave a discussion”. We further validate this definition by confirming its consistency with outcomes from other detection tools, ensuring the reliability and comparability of our results. The concept of toxicity in online discourse varies widely in the literature, reflecting its complexity, as seen in various studies 32 , 33 , 34 . The efficacy and constraints of current machine-learning-based automated toxicity detection systems have recently been debated 11 , 35 . Despite these discussions, automated systems are still the most practical means for large-scale analyses.

Here we analyse online conversations, challenging common assumptions about their dynamics. Our findings reveal consistent patterns across various platforms and different times, such as the heavy-tailed nature of engagement dynamics, a decrease in user participation and an increase in toxic speech in lengthier conversations. Our analysis indicates that, although toxicity and user participation in debates are independent variables, the diversity of opinions and sentiments among users may have a substantial role in escalating conversation toxicity.

To obtain a comprehensive picture of online social media conversations, we analysed a dataset of about 500 million comments from Facebook, Gab, Reddit, Telegram, Twitter, Usenet, Voat and YouTube, covering diverse topics and spanning over three decades (a dataset breakdown is shown in Table 1 and Supplementary Table 1 ; for details regarding the data collection, see the ‘Data collection’ section of the Methods ).

Our analysis aims to comprehensively compare the dynamics of diverse social media accounting for human behaviours and how they evolved. In particular, we first characterize conversations at a macroscopic level by means of their engagement and participation, and we then analyse the toxicity of conversations both after and during their unfolding. We conclude the paper by examining potential drivers for the emergence of toxic speech.

Conversations on different platforms

This section provides an overview of online conversations by considering user activity and thread size metrics. We define a conversation (or a thread) as a sequence of comments that follow chronologically from an initial post. In Fig. 1a and Extended Data Fig. 1 , we observe that, across all platforms, both user activity (defined as the number of comments posted by the user) and thread length (defined as the number of comments in a thread) exhibit heavy-tailed distributions. The summary statistics about these distributions are reported in Supplementary Tables 1 and 2 .

figure 1

a , The distributions of user activity in terms of comments posted for each platform and each topic. b , The mean user participation as conversations evolve. For each dataset, participation is computed for the threads belonging to the size interval [0.7–1] (Supplementary Table 2 ). Trends are reported with their 95% confidence intervals. The x axis represents the normalized position of comment intervals in the threads.

Consistent with previous studies 36 , 37 , our analysis shows that the macroscopic patterns of online conversations, such as the distributions of user and thread activity and lifetime, are consistent across all datasets and topics (Supplementary Tables 1 – 4 ). This observation holds regardless of the specific features of the diverse platforms, such as recommendation algorithms and moderation policies (described in the ‘Content moderation policies’ section of the Methods ), as well as other factors, including the user base and the conversation topics. We extend our analysis by examining another aspect of user activity within conversations across all platforms. To do this, we introduce a metric for the participation of users as a thread evolves. In this analysis, threads are filtered to ensure sufficient length, as explained in the ‘Logarithmic binning and conversation size’ section of the Methods .

The participation metric, defined over different conversation intervals (that is, 0–5% of the thread arranged in chronological order, 5–10%, and so on), is the ratio of the number of unique users to the number of comments in the interval. Considering a fixed number of comments c , smaller values of participation indicate that fewer unique users are producing c comments in a segment of the conversation. In turn, a value of participation equal to 1 means that each user is producing one of the c comments, therefore obtaining the maximal homogeneity of user participation. Our findings show that, across all datasets, the participation of users in the evolution of conversations, averaged over almost all considered threads, is decreasing, as indicated by the results of Mann–Kendall test—a nonparametric test assessing the presence of a monotonic upward or downward tendency—shown in Extended Data Table 1 . This indicates that fewer users tend to take part in a conversation as it evolves, but those who do are more active (Fig. 1b ). Regarding patterns and values, the trends in user participation for various topics are consistent across each platform. According to the Mann–Kendall test, the only exceptions were Usenet Conspiracy and Talk, for which an ambiguous trend was detected. However, we note that their regression slopes are negative, suggesting a decreasing trend, even if with a weaker effect. Overall, our first set of findings highlights the shared nature of certain online interactions, revealing a decrease in user participation over time but an increase in activity among participants. This insight, consistent across most platforms, underscores the dynamic interplay between conversation length, user engagement and topic-driven participation.
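To make the participation metric concrete, here is a minimal Python sketch (not the authors' code; the function name and data layout are illustrative) that splits a chronologically ordered thread into equal intervals and returns the ratio of unique users to comments in each interval.

```python
# Minimal sketch of the participation metric (illustrative, not the authors' code).
# A thread is represented as a chronologically ordered list of user IDs, one per comment.
from typing import List


def participation_by_interval(thread: List[str], n_intervals: int = 20) -> List[float]:
    """For each interval of the thread (0-5%, 5-10%, ...), return the ratio of
    unique users to the number of comments in that interval."""
    n = len(thread)
    values = []
    for k in range(n_intervals):
        start = round(k * n / n_intervals)
        end = round((k + 1) * n / n_intervals)
        chunk = thread[start:end]
        if chunk:  # skip empty intervals in very short threads
            values.append(len(set(chunk)) / len(chunk))
    return values


# Example: participation drops when the same few users dominate the later intervals.
thread = ["u1", "u2", "u3", "u4", "u5", "u6", "u1", "u2", "u1", "u1"]
print(participation_by_interval(thread, n_intervals=2))  # [1.0, 0.6]
```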

Conversation size and toxicity

To detect the presence of toxic language, we used Google’s Perspective API 34 , a state-of-the-art toxicity classifier that has been used extensively in recent literature 29 , 38 . Perspective API defines a toxic comment as “A rude, disrespectful, or unreasonable comment that is likely to make people leave a discussion”. On the basis of this definition, the classifier assigns a toxicity score in the [0,1] range to a piece of text that can be interpreted as an estimate of the likelihood that a reader would perceive the comment as toxic ( https://developers.perspectiveapi.com/s/about-the-api-score ). To define an appropriate classification threshold, we draw from the existing literature 39 , which uses 0.6 as the threshold for considering a comment as toxic. A robustness check of our results using different threshold and classification tools is reported in the ‘Toxicity detection and validation of employed models’ section of the Methods , together with a discussion regarding potential shortcomings deriving from automatic classifiers.

To further investigate the interplay between toxicity and conversation features across various platforms, our study first examines the prevalence of toxic speech in each dataset. We then analyse the occurrence of highly toxic users and conversations. Lastly, we investigate how the length of conversations correlates with the probability of encountering toxic comments. First of all, we define the toxicity of a user as the fraction of toxic comments that she/he left. Similarly, the toxicity of a thread is the fraction of toxic comments it contains.

We begin by observing that, although some toxic datasets exist on unmoderated platforms such as Gab, Usenet and Voat, the prevalence of toxic speech is generally low. Indeed, the percentage of toxic comments in each dataset is mostly below 10% (Table 1 ). Moreover, the complementary cumulative distribution functions illustrated in Extended Data Fig. 2 show that the fraction of extremely toxic users is very low for each dataset (in the range between 10 −3 and 10 −4 ), and the majority of active users wrote at least one toxic comment, as reported in Supplementary Table 5 , therefore suggesting that the overall volume of toxicity is not a phenomenon limited to the activity of very few users and localized in few conversations. Indeed, the number of users versus their toxicity decreases sharply following an exponential trend. The toxicity of threads follows a similar pattern.

To understand the association between the size and toxicity of a conversation, we start by grouping conversations according to their length to analyse their structural differences 40 . The grouping is implemented by means of logarithmic binning (see the ‘Logarithmic binning and conversation size’ section of the Methods ) and the evolution of the average fraction of toxic comments in threads versus the thread size intervals is reported in Fig. 2 . Notably, the resulting trends are almost all increasing, showing that, independently of the platform and topic, the longer the conversation, the more toxic it tends to be.
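The toxicity measures described above can be sketched as follows. This is an illustrative implementation, not the paper's code: it assumes comments are scored with the public Perspective API REST endpoint (the request and response fields below reflect the API documentation at the time of writing and should be checked against the current docs), and it applies the 0.6 threshold and logarithmic size binning mentioned in the text.

```python
# Illustrative sketch of the toxicity measures described above (not the paper's code).
# Assumes comments are scored with the Perspective API; verify the request/response
# format against the current API documentation before relying on it.
import math
from collections import defaultdict

import requests

TOXICITY_THRESHOLD = 0.6  # classification threshold used in the paper


def perspective_score(text: str, api_key: str) -> float:
    """Return the TOXICITY summary score in [0, 1] for a piece of text."""
    url = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           f"comments:analyze?key={api_key}")
    body = {"comment": {"text": text}, "requestedAttributes": {"TOXICITY": {}}}
    resp = requests.post(url, json=body, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


def user_toxicity(comments):
    """Fraction of toxic comments per user; `comments` is a list of (user_id, score)."""
    total, toxic = defaultdict(int), defaultdict(int)
    for user, score in comments:
        total[user] += 1
        toxic[user] += score >= TOXICITY_THRESHOLD
    return {u: toxic[u] / total[u] for u in total}


def mean_toxicity_by_log_size(threads):
    """Log-bin threads by length and return the mean toxic fraction per bin.
    `threads` is a list of per-thread lists of toxicity scores."""
    bins = defaultdict(list)
    for scores in threads:
        if not scores:
            continue
        frac_toxic = sum(s >= TOXICITY_THRESHOLD for s in scores) / len(scores)
        bins[int(math.log2(len(scores)))].append(frac_toxic)
    return {b: sum(v) / len(v) for b, v in sorted(bins.items())}
```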

figure 2

The mean fraction of toxic comments in conversations versus conversation size for each dataset. Trends represent the mean toxicity over each size interval and their 95% confidence interval. Size ranges are normalized to enable visual comparison of the different trends.

We assessed the increase in the trends by both performing linear regression and applying the Mann–Kendall test to ensure the statistical significance of our results (Extended Data Table 2 ). To further validate these outcomes, we shuffled the toxicity labels of comments, finding that trends are almost always non-increasing when data are randomized. Furthermore, the z -scores of the regression slopes indicate that the observed trends deviate from the mean of the distributions resulting from randomizations, being at least 2 s.d. greater in almost all cases. This provides additional evidence of a remarkable difference from randomness. The only decreasing trend is Usenet Politics. Moreover, we verified that our results are not influenced by the specific number of bins as, after estimating the same trends again with different intervals, we found that the qualitative nature of the results remains unchanged. These findings are summarized in Extended Data Table 2 . These analyses have been validated on the same data using a different threshold for identifying toxic comments and on a new dataset labelled with three different classifiers, obtaining similar results (Extended Data Fig. 5 , Extended Data Table 5 , Supplementary Fig. 1 and Supplementary Table 8 ). Finally, using a similar approach, we studied the toxicity content of conversations versus their lifetime—that is, the time elapsed between the first and last comment. In this case, most trends are flat, and there is no indication that toxicity is generally associated either with the duration of a conversation or the lifetime of user interactions (Extended Data Fig. 4 ).
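A simplified version of this validation is sketched below. It is not the authors' pipeline: instead of shuffling comment-level toxicity labels, it permutes thread-level toxic fractions, fits a linear slope over size bins, uses a Kendall rank test against the bin index as a stand-in for the Mann–Kendall statistic, and reports a z-score of the observed slope against the shuffled ones. Function and variable names are illustrative.

```python
# Simplified validation of an increasing toxicity-vs-size trend (illustrative only;
# the paper shuffles comment-level labels, here we permute thread-level fractions).
import numpy as np
from scipy import stats


def validate_trend(bin_ids, toxic_fracs, n_shuffles=1000, seed=0):
    """bin_ids: log-size bin of each thread; toxic_fracs: its fraction of toxic comments."""
    bin_ids = np.asarray(bin_ids)
    toxic_fracs = np.asarray(toxic_fracs, dtype=float)
    bins = np.unique(bin_ids)

    def bin_means(values):
        return np.array([values[bin_ids == b].mean() for b in bins])

    x = np.arange(len(bins))
    y = bin_means(toxic_fracs)
    slope = stats.linregress(x, y).slope
    tau, p_value = stats.kendalltau(x, y)  # rank-based monotonic-trend test

    rng = np.random.default_rng(seed)
    null_slopes = np.array([
        stats.linregress(x, bin_means(rng.permutation(toxic_fracs))).slope
        for _ in range(n_shuffles)
    ])
    z_score = (slope - null_slopes.mean()) / null_slopes.std()
    return {"slope": slope, "kendall_tau": tau, "p_value": p_value, "z_score": z_score}
```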

Conversation evolution and toxicity

In the previous sections, we analysed the toxicity level of online conversations after their conclusion. We next focus on how toxicity evolves during a conversation and its effect on the dynamics of the discussion. The common beliefs that (1) online interactions inevitably devolve into toxic exchanges over time and (2) once a conversation reaches a certain toxicity threshold, it would naturally conclude, are not modern notions but were also prevalent in the early days of the World Wide Web 41 . Assumption 2 aligns with the Perspective API’s definition of toxic language, suggesting that increased toxicity reduces the likelihood of continued participation in a conversation. However, this observation should be reconsidered, as it is not only the peak levels of toxicity that might influence a conversation but, for example, also a consistent rate of toxic content.

To test these common assumptions, we used a method similar to that used for measuring participation; we select sufficiently long threads, divide each of them into a fixed number of equal intervals, compute the fraction of toxic comments for each of these intervals, average it over all threads and plot the toxicity trend through the unfolding of the conversations. We find that the average toxicity level remains mostly stable throughout, without showing a distinctive increase around the final part of threads (Fig. 3a (bottom) and Extended Data Fig. 3 ). Note that a similar observation was made previously 41 , but referring only to Reddit.

Our findings challenge the assumption that toxicity discourages people from participating in a conversation, even though this notion is part of the definition of toxicity used by the detection tool. This can be seen by checking the relationship between trends in user participation, a quantity related to the number of users in a discussion at some point, and toxicity. The fact that the former typically decreases while the latter remains stable during conversations indicates that toxicity is not associated with participation in conversations (an example is shown in Fig. 3a ; box plots of the slopes of participation and toxicity for the whole dataset are shown in Fig. 3b ). This suggests that, on average, people may leave discussions regardless of the toxicity of the exchanges.

We calculated the Pearson’s correlation between user participation and toxicity trends for each dataset to support this hypothesis. As shown in Fig. 3d , the resulting correlation coefficients are very heterogeneous, indicating no consistent pattern across different datasets. To further validate this analysis, we tested the differences in the participation of users commenting on either toxic or non-toxic conversations. To split such conversations into two disjoint sets, we first compute the toxicity distribution T_i of long threads in each dataset i, and we then label a conversation j in dataset i as toxic if it has toxicity t_ij ≥ µ(T_i) + σ(T_i), with µ(T_i) being the mean and σ(T_i) the standard deviation of T_i; all of the other conversations are considered to be non-toxic. After splitting the threads, for each dataset, we compute the Pearson’s correlation of user participation between sets to find strongly positive values of the coefficient in all cases (Fig. 3c,e ). This result is also confirmed by a different analysis, of which the results are reported in Supplementary Table 8 , in which no significant difference between slopes in toxic and non-toxic threads can be found. Thus, user behaviour in toxic and non-toxic conversations shows almost identical patterns in terms of participation. This reinforces our finding that toxicity, on average, does not appear to affect the likelihood of people participating in a conversation. These analyses were repeated with a lower toxicity classification threshold (Extended Data Fig. 5 ) and on additional datasets (Supplementary Fig. 2 and Supplementary Table 11 ), finding consistent results.
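The toxic versus non-toxic comparison can be illustrated with the short sketch below (again, an illustration rather than the authors' code): threads whose toxic fraction exceeds the dataset mean plus one standard deviation are labelled toxic, and the mean participation trends of the two groups are compared with Pearson's correlation.

```python
# Sketch of the toxic / non-toxic thread split described above (illustrative names).
import numpy as np
from scipy import stats


def split_and_correlate(thread_toxicity, participation_trends):
    """thread_toxicity: toxic fraction of each thread.
    participation_trends: 2D array (threads x intervals) of participation values."""
    t = np.asarray(thread_toxicity, dtype=float)
    p = np.asarray(participation_trends, dtype=float)
    toxic = t >= t.mean() + t.std()              # t_ij >= mu(T_i) + sigma(T_i)
    mean_toxic = p[toxic].mean(axis=0)           # average participation trend, toxic threads
    mean_non_toxic = p[~toxic].mean(axis=0)      # average trend, remaining threads
    r, p_value = stats.pearsonr(mean_toxic, mean_non_toxic)
    return r, p_value
```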

figure 3

a , Examples of a typical trend in averaged user participation (top) and toxicity (bottom) versus the normalized position of comment intervals in the threads (Twitter news dataset). b , Box plot distributions of toxicity ( n  = 25, minimum = −0.012, maximum = 0.015, lower whisker = −0.012, quartile 1 (Q1) = − 0.004, Q2 = 0.002, Q3 = 0.008, upper whisker = 0.015) and participation ( n  = 25, minimum = −0.198, maximum = −0.022, lower whisker = −0.198, Q1 = − 0.109, Q2 = − 0.071, Q3 = − 0.049, upper whisker = −0.022) trend slopes for all datasets, as resulting from linear regression. c , An example of user participation in toxic and non-toxic thread sets (Twitter news dataset). d , Pearson’s correlation coefficients between user participation and toxicity trends for each dataset. e , Pearson’s correlation coefficients between user participation in toxic and non-toxic threads for each dataset.

Controversy and toxicity

In this section, we aim to explore why people participate in toxic online conversations and why longer discussions tend to be more toxic. Several factors could be at play. First, controversial topics might lead to longer, more heated debates with increased toxicity. Second, the endorsement of toxic content by other users may act as an incentive to increase the discussion’s toxicity. Third, engagement peaks, due to factors such as reduced discussion focus or the intervention of trolls, may bring a higher share of toxic exchanges. Pursuing this line of inquiry, we identified proxies to measure the level of controversy in conversations and examined how these relate to toxicity and conversation size. Concurrently, we investigated the relationship between toxicity, endorsement and engagement.

As shown previously 24 , 42 , controversy is likely to emerge when people with opposing views engage in the same debate. Thus, the presence of users with diverse political leanings within a conversation could be a valid proxy for measuring controversy. We operationalize this definition as follows. Exploiting the peculiarities of our data, we can infer the political leaning of a subset of users in the Facebook News, Twitter News, Twitter Vaccines and Gab Feed datasets. This is achieved by examining the endorsement, for example, in the form of likes, expressed towards news outlets of which the political inclinations have been independently assessed by news rating agencies (see the ‘Polarization and user leaning attribution’ section of the Methods ). Extended Data Table 3 shows a breakdown of the datasets. As a result, we label users with a leaning score l   ∈  [−1, 1], −1 being left leaning and +1 being right leaning. We then select threads with at least ten different labelled users, in which at least 10% of comments (with a minimum of 20) are produced by such users and assign to each of these comments the same leaning score of those who posted them. In this setting, the level of controversy within a conversation is assumed to be captured by the spread of the political leaning of the participants in the conversation. A natural way for measuring such a spread is the s.d. σ ( l ) of the distribution of comments possessing a leaning score: the higher the σ ( l ), the greater the level of ideological disagreement and therefore controversy in a thread. We analysed the relationship between controversy and toxicity in online conversations of different sizes. Figure 4a shows that controversy increases with the size of conversations in all datasets, and its trends are positively correlated with the corresponding trends in toxicity (Extended Data Table 3 ). This supports our hypothesis that controversy and toxicity are closely related in online discussions.
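A minimal sketch of this controversy proxy, assuming a precomputed dictionary of user leaning scores in [−1, 1] (the selection thresholds follow the rules quoted above; names are illustrative):

```python
# Sketch of the controversy proxy sigma(l) (illustrative; thresholds follow the text).
import numpy as np


def thread_controversy(comment_users, user_leaning,
                       min_labelled_users=10, min_labelled_comments=20,
                       min_labelled_share=0.10):
    """comment_users: user ID of each comment in the thread.
    user_leaning: dict mapping labelled users to a leaning score in [-1, 1].
    Returns the standard deviation of the leaning of labelled comments,
    or None if the thread does not meet the selection criteria."""
    leanings = [user_leaning[u] for u in comment_users if u in user_leaning]
    labelled_users = {u for u in comment_users if u in user_leaning}
    if (len(labelled_users) < min_labelled_users
            or len(leanings) < min_labelled_comments
            or len(leanings) / len(comment_users) < min_labelled_share):
        return None
    return float(np.std(leanings))
```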

figure 4

a , The mean controversy ( σ ( l )) and mean toxicity versus thread size (log-binned and normalized) for the Facebook news, Twitter news, Twitter vaccines and Gab feed datasets. Here toxicity is calculated in the same conversations in which controversy could be computed (Extended Data Table 3 ); the relative Pearson’s, Spearman’s and Kendall’s correlation coefficients are also provided in Extended Data Table 3 . Trends are reported with their 95% confidence interval. b , Likes/upvotes versus toxicity (linearly binned). c , An example (Voat politics dataset) of the distributions of the frequency of toxic comments in threads before ( n  = 2,201, minimum = 0, maximum = 1, lower whisker = 0, Q1 = 0, Q2 = 0.15, Q3 = 0.313, upper whisker = 0.769) at the peak ( n  = 2,798, minimum = 0, maximum = 0.8, lower whisker = 0, Q1 = 0.125, Q2 = 0.196, Q3 = 0.282, upper whisker = 0.513) and after the peak ( n  = 2,791, minimum = 0, maximum = 1, lower whisker = 0, Q1 = 0.129, Q2 = 0.200, Q3 = 0.282, upper whisker = 0.500) of activity, as detected by Kleinberg’s burst detection algorithm.

As a complementary analysis, we draw on previous results 43 . In that study, using a definition of controversy operationally different but conceptually related to ours, a link was found between a greater degree of controversy of a discussion topic and a wider distribution of sentiment scores attributed to the set of its posts and comments. We quantified the sentiment of comments using a pretrained BERT model available from Hugging Face 44 , used also in previous studies 45 . The model predicts the sentiment of a sentence through a scoring system ranging from 1 (negative) to 5 (positive). We define the sentiment attributed to a comment c as its weighted mean \(s(c)=\sum_{i=1}^{5} x_i p_i\), where x_i ∈ [1, 5] is the output score from the model and p_i is the probability associated with that value. Moreover, we normalize the sentiment score s for each dataset between 0 and 1. We observe that the trends of the mean s.d. of sentiment in conversations, \(\bar{\sigma}(s)\), and toxicity are positively correlated for moderated platforms such as Facebook and Twitter but are negatively correlated on Gab (Extended Data Table 3 ). The positive correlation observed in Facebook and Twitter indicates that greater discrepancies in sentiment of the conversations can, in general, be linked to toxic conversations and vice versa. Instead, on unregulated platforms such as Gab, highly conflicting sentiments seem to be more likely to emerge in less toxic conversations.
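The weighted-mean sentiment score can be sketched as follows. The specific Hugging Face checkpoint used below (nlptown/bert-base-multilingual-uncased-sentiment) is an assumption on my part; the paper only states that a pretrained BERT model with a 1–5 scoring system was used, and it normalizes scores per dataset rather than with the fixed rescaling shown here.

```python
# Sketch of the weighted-mean sentiment score s(c) = sum_i x_i * p_i.
# The checkpoint below is an assumption (a widely used 1-5 star sentiment model);
# the paper normalizes s per dataset, here we simply rescale [1, 5] to [0, 1].
from transformers import pipeline

sentiment = pipeline("text-classification",
                     model="nlptown/bert-base-multilingual-uncased-sentiment",
                     top_k=None)  # return probabilities for all five star classes


def weighted_sentiment(text: str) -> float:
    """Weighted mean of the 1-5 star scores, rescaled to [0, 1]."""
    scores = sentiment([text])[0]  # list of {"label": "3 stars", "score": p_i}
    s = sum(int(d["label"][0]) * d["score"] for d in scores)
    return (s - 1.0) / 4.0
```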

As anticipated, another factor that may be associated with the emergence of toxic comments is the endorsement they receive. Indeed, such positive reactions may motivate posting even more comments of the same kind. Using the mean number of likes/upvotes as a proxy for endorsement, we have an indication that this may not be the case. Figure 4b shows that the trend in likes/upvotes versus comment toxicity is never increasing past the toxicity score threshold (0.6).

Finally, to complement our analysis, we inspect the relationship between toxicity and user engagement within conversations, measured as the intensity of the number of comments over time. To do so, we used a method for burst detection 46 that, after reconstructing the density profile of a temporal stream of elements, separates the stream into different levels of intensity and assigns each element to the level to which it belongs (see the ‘Burst analysis’ section of the Methods ). We computed the fraction of toxic comments at the highest intensity level of each conversation and for the levels right before and after it. By comparing the distributions of the fraction of toxic comments for the three intervals, we find that these distributions are statistically different in almost all cases (Fig. 4c and Extended Data Table 4 ). In all datasets but one, distributions are consistently shifted towards higher toxicity at the peak of engagement, compared with the previous phase. Likewise, in most cases, the peak shows higher toxicity even if compared to the following phase, which in turn is mainly more toxic than the phase before the peak. These results suggest that toxicity is likely to increase together with user engagement.
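As a rough illustration of this engagement analysis (a crude stand-in for the Kleinberg burst detection used in the paper), the sketch below takes the busiest fixed-width time bin of a thread as its engagement peak and returns the fraction of toxic comments before, at, and after that peak.

```python
# Crude stand-in for the burst analysis (the paper uses Kleinberg's burst detection):
# take the busiest fixed-width time bin of a thread as its engagement peak and
# compare the fraction of toxic comments before, at and after that peak.
import numpy as np


def peak_toxicity(timestamps, toxic_flags, n_bins=20):
    """timestamps: comment times in seconds; toxic_flags: booleans, same order."""
    t = np.asarray(timestamps, dtype=float)
    flags = np.asarray(toxic_flags, dtype=bool)
    order = np.argsort(t)
    t, flags = t[order], flags[order]
    edges = np.linspace(t[0], t[-1], n_bins + 1)
    bin_ids = np.clip(np.digitize(t, edges) - 1, 0, n_bins - 1)
    peak = np.bincount(bin_ids, minlength=n_bins).argmax()

    def toxic_fraction(mask):
        return float(flags[mask].mean()) if mask.any() else float("nan")

    return {"before": toxic_fraction(bin_ids < peak),
            "at_peak": toxic_fraction(bin_ids == peak),
            "after": toxic_fraction(bin_ids > peak)}
```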

Here we examine one of the most prominent and persistent characteristics of online discussions—toxic behaviour, defined here as rude, disrespectful or unreasonable conduct. Our analysis suggests that toxicity is neither a deterrent to user involvement nor an engagement amplifier; rather, it tends to emerge when exchanges become more frequent and may be a product of opinion polarization. Our findings suggest that the polarization of user opinions—intended as the degree of opposed partisanship of users in a conversation—may have a more crucial role than toxicity in shaping the evolution of online discussions. Thus, monitoring polarization could inform early interventions in online discussions. However, it is important to acknowledge that the dynamics at play in shaping online discourse are probably multifaceted and require a nuanced approach for effective moderation. Other factors may influence toxicity and engagement, such as the specific subject of the conversation, the presence of influential users or ‘trolls’, the time and day of posting, as well as cultural or demographic aspects, such as user average age or geographical location. Furthermore, even though extremely toxic users are rare (Extended Data Fig. 2 ), the relationship between participation and toxicity of a discussion may in principle be affected also by small groups of highly toxic and engaged users driving the conversation dynamics. Although the analysis of such subtler aspects is beyond the scope of this Article, they are certainly worth investigating in future research.

However, when people encounter views that contradict their own, they may react with hostility and contempt, consistent with previous research 47 . In turn, it may create a cycle of negative emotions and behaviours that fuels toxicity. We also show that some online conversation features have remained consistent over the past three decades despite the evolution of platforms and social norms.

Our study has some limitations that we acknowledge and discuss. First, we use political leaning as a proxy for general leaning, which may capture only some of the nuances of online opinions. However, political leaning represents a broad spectrum of opinions across different topics, and it correlates well with other dimensions of leaning, such as news preferences, vaccine attitudes and stance on climate change 48 , 49 . We could not assign a political leaning to users to analyse controversies on all platforms. Still, those considered—Facebook, Gab and Twitter—represent different populations and moderation policies, and the combined data account for nearly 90% of the content in our entire dataset.

Our analysis approach is based on breadth and heterogeneity. As such, it may raise concerns about potential reductionism due to the comparison of different datasets from different sources and time periods. We acknowledge that each discussion thread, platform and context has unique characteristics and complexities that might be diminished when homogenizing data. However, we aim not to capture the full depth of every discussion but to identify and highlight general patterns and trends in online toxicity across platforms and time. The quantitative approach used in our study is similar to numerous other studies 15 and enables us to uncover these overarching principles and patterns that may otherwise remain hidden. Of course, it is not possible to account for the behaviours of passive users. This entails, for example, that even if toxicity does not seem to make people leave conversations, it could still be a factor that discourages them from joining them.

Our study leverages an extensive dataset to examine the intricate relationship between persistent online human behaviours and the characteristics of different social media platforms. Our findings challenge the prevailing assumption by demonstrating that toxic content, as traditionally defined, does not necessarily reduce user engagement, thereby questioning the assumed direct correlation between toxic content and negative discourse dynamics. This highlights the necessity for a detailed examination of the effect of toxic interactions on user behaviour and the quality of discussions across various platforms. Our results, showing user resilience to toxic content, indicate the potential for creating advanced, context-aware moderation tools that can accurately navigate the complex influence of antagonistic interactions on community engagement and discussion quality. Moreover, our study sets the stage for further exploration into the complexities of toxicity and its effect on engagement within online communities. Advancing our grasp of online discourse necessitates refining content moderation techniques grounded in a thorough understanding of human behaviour. Thus, our research adds to the dialogue on creating more constructive online spaces, promoting moderation approaches that are effective yet nuanced, facilitating engaging exchanges and reducing the tangible negative effects of toxic behaviour.

Through the extensive dataset presented here, critical aspects of the online platform ecosystem and fundamental dynamics of user interactions can be explored. Moreover, we provide insights that a comparative approach such as the one followed here can prove invaluable in discerning human behaviour from platform-specific features. This may be used to investigate further sensitive issues, such as the formation of polarization and misinformation. The resulting outcomes have multiple potential impacts. Our findings reveal consistent toxicity patterns across platforms, topics and time, suggesting that future research in this field should prioritize the concept of invariance. Recognizing that toxic behaviour is a widespread phenomenon that is not limited by platform-specific features underscores the need for a broader, unified approach to understanding online discourse. Furthermore, the participation of users in toxic conversations suggests that a simple approach to removing toxic comments may not be sufficient to prevent user exposure to such phenomena. This indicates a need for more sophisticated moderation techniques to manage conversation dynamics, including early interventions in discussions that show warnings of becoming toxic. Furthermore, our findings support the idea that examining content pieces in connection with others could enhance the effectiveness of automatic toxicity detection models. The observed homogeneity suggests that models trained using data from one platform may also have applicability to other platforms. Future research could explore further into the role of controversy and its interaction with other elements contributing to toxicity. Moreover, comparing platforms could enhance our understanding of invariant human factors related to polarization, disinformation and content consumption. Such studies would be instrumental in capturing the drivers of the effect of social media platforms on human behaviour, offering valuable insights into the underlying dynamics of online interactions.

Data collection

In our study, data collection from various social media platforms was strategically designed to encompass various topics, ensuring maximal heterogeneity in the discussion themes. For each platform, where feasible, we focus on gathering posts related to diverse areas such as politics, news, environment and vaccinations. This approach aims to capture a broad spectrum of discourse, providing a comprehensive view of conversation dynamics across different content categories.

Facebook

We use datasets from previous studies that covered discussions about vaccines 50 , news 51 and Brexit 52 . For the vaccines topic, the resulting dataset contains around 2 million comments retrieved from public groups and pages in a period that ranges from 2 January 2010 to 17 July 2017. For the news topic, we selected a list of pages from the Europe Media Monitor that reported the news in English. As a result, the obtained dataset contains around 362 million comments between 9 September 2009 and 18 August 2016. Furthermore, we collect a total of about 4.5 billion likes that the users put on posts and comments concerning these pages. Finally, for the Brexit topic, the dataset contains around 460,000 comments from 31 December 2015 to 29 July 2016.

Gab

We collect data from the Pushshift.io archive ( https://files.pushshift.io/gab/ ) concerning discussions taking place from 10 August 2016, when the platform was launched, to 29 October 2018, when Gab went temporarily offline due to the Pittsburgh shooting 53 . As a result, we collect a total of around 14 million comments.

For Reddit, data were collected from the Pushshift.io archive ( https://pushshift.io/ ) for the period ranging from 1 January 2018 to 31 December 2022. For each topic, whenever possible, we manually identified and selected the subreddits that best represented the targeted topics. As a result, we obtained about 800,000 comments from the r/conspiracy subreddit for the conspiracy topic. For the vaccines topic, we collected about 70,000 comments from the r/VaccineDebate subreddit, focusing on the COVID-19 vaccine debate. For the news topic, we collected around 400,000 comments from the r/News subreddit. For the climate change topic, we collected about 70,000 comments from the r/environment subreddit. Finally, for the science topic, we collected around 550,000 comments from the r/science subreddit.
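For illustration, comments belonging to a given set of subreddits can be filtered out of a monthly Pushshift dump (a zstandard-compressed, newline-delimited JSON file) along the lines of the sketch below. The file name, field names and subreddit list are placeholders, and this is not the study's actual pipeline (the released code is linked in the 'Code availability' section).

```python
import io
import json
import zstandard as zstd

# Hypothetical monthly Pushshift comment dump (zstd-compressed ndjson).
DUMP_PATH = "RC_2019-06.zst"
TARGET_SUBREDDITS = {"conspiracy", "VaccineDebate", "News", "environment", "science"}

def iter_comments(path):
    """Stream JSON comment records from a zstd-compressed ndjson dump."""
    with open(path, "rb") as fh:
        reader = zstd.ZstdDecompressor(max_window_size=2**31).stream_reader(fh)
        for line in io.TextIOWrapper(reader, encoding="utf-8", errors="ignore"):
            try:
                yield json.loads(line)
            except json.JSONDecodeError:
                continue  # skip malformed lines

selected = [
    {
        "id": c.get("id"),
        "subreddit": c.get("subreddit"),
        "created_utc": c.get("created_utc"),
        "link_id": c.get("link_id"),  # identifier of the thread the comment belongs to
        "body": c.get("body"),
    }
    for c in iter_comments(DUMP_PATH)
    if c.get("subreddit") in TARGET_SUBREDDITS
]
print(f"kept {len(selected)} comments")
```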

For Telegram, we created a list of 14 channels, each associated with one of the topics considered in the study. For each channel, we manually collected the messages and their related comments. From the four channels associated with the news topic (news notiziae, news ultimora, news edizionestraordinaria, news covidultimora), we obtained around 724,000 comments posted between 9 April 2018 and 20 December 2022. For the politics topic, the two corresponding channels (politics besttimeline, politics polmemes) produced a total of around 490,000 comments between 4 August 2017 and 19 December 2022. Finally, the eight channels assigned to the conspiracy topic (conspiracy bennyjhonson, conspiracy tommyrobinsonnews, conspiracy britainsfirst, conspiracy loomeredofficial, conspiracy thetrumpistgroup, conspiracy trumpjr, conspiracy pauljwatson, conspiracy iononmivaccino) produced a total of about 1.4 million comments between 30 August 2019 and 20 December 2022.

For Twitter, we used datasets from previous studies covering discussions about vaccines 54 , climate change 49 and news 55 . For the vaccines topic, we collected around 50 million comments from 23 January 2010 to 25 January 2023. For the news topic, we extended the dataset used previously 55 by collecting all threads composed of fewer than 20 comments, obtaining a total of about 9.5 million comments for a period ranging from 1 January 2020 to 29 November 2022. Finally, for the climate change topic, we collected around 9.7 million comments between 1 January 2020 and 10 January 2023.

We collected data for the Usenet discussion system by querying the Usenet Archive ( https://archive.org/details/usenet?tab=about ). We selected a set of topics considered adequate to contain a large, broad and heterogeneous number of discussions involving active and populated newsgroups, resulting in conspiracy, politics, news and talk as topic candidates for our analysis. For the conspiracy topic, we collected around 280,000 comments posted between 1 September 1994 and 30 December 2005 from the alt.conspiracy newsgroup. For the politics topic, we collected around 2.6 million comments between 29 June 1992 and 31 December 2005 from the alt.politics newsgroup. For the news topic, we collected about 620,000 comments between 5 December 1992 and 31 December 2005 from the alt.news newsgroup. Finally, for the talk topic, we collected all of the conversations from the homonymous newsgroup over a period ranging from 13 February 1989 to 31 December 2005, totalling around 2.1 million comments.

For Voat, we used a dataset presented previously 56 that covers the entire lifetime of the platform, from 9 January 2018 to 25 December 2020, including a total of around 16.2 million posts and comments shared by around 113,000 users in about 7,100 subverses (the equivalent of a subreddit for Voat). As with the other platforms, we associated the topics with specific subverses. As a result, for the conspiracy topic, we collected about 1 million comments from the greatawakening subverse between 9 January 2018 and 25 December 2020. For the politics topic, we collected around 1 million comments from the politics subverse between 16 June 2014 and 25 December 2020. Finally, for the news topic, we collected about 1.4 million comments from the news subverse between 21 November 2013 and 25 December 2020.

For YouTube, we used a dataset proposed in previous studies that collected conversations about the climate change topic 49 , which we extended, consistently with the other platforms, by including conversations about the vaccines and news topics. The data collection was performed using the YouTube Data API ( https://developers.google.com/youtube/v3 ). For the climate change topic, we collected around 840,000 comments between 16 March 2014 and 28 February 2022. For the vaccines topic, we collected conversations between 31 January 2020 and 24 October 2021 containing keywords about COVID-19 vaccines, namely Sinopharm, CanSino, Janssen, Johnson&Johnson, Novavax, CureVac, Pfizer, BioNTech, AstraZeneca and Moderna. As a result, we gathered a total of around 2.6 million comments on videos. Finally, for the news topic, we collected about 20 million comments between 13 February 2006 and 8 February 2022, covering videos and comments from a list of news outlets limited to the UK and provided by Newsguard (see the ‘Polarization and user leaning attribution’ section).
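As an illustration of this kind of collection, the sketch below paginates through the top-level comments of a single video using the YouTube Data API's commentThreads endpoint. The API key and video ID are placeholders, and the snippet stands in for, rather than reproduces, the pipeline actually used in the study.

```python
from googleapiclient.discovery import build

API_KEY = "YOUR_API_KEY"      # placeholder
VIDEO_ID = "SOME_VIDEO_ID"    # placeholder

youtube = build("youtube", "v3", developerKey=API_KEY)

comments = []
request = youtube.commentThreads().list(
    part="snippet", videoId=VIDEO_ID, maxResults=100, textFormat="plainText"
)
while request is not None:
    response = request.execute()
    for item in response.get("items", []):
        top = item["snippet"]["topLevelComment"]["snippet"]
        comments.append({
            "author": top.get("authorDisplayName"),
            "published_at": top.get("publishedAt"),
            "text": top.get("textDisplay"),
        })
    # list_next handles pagination through nextPageToken
    request = youtube.commentThreads().list_next(request, response)

print(len(comments), "top-level comments collected")
```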

Content moderation policies

Content moderation policies are guidelines that online platforms use to monitor the content that users post on their sites. Platforms have different goals and audiences, and their moderation policies may vary greatly, with some placing more emphasis on free expression and others prioritizing safety and community guidelines.

Facebook and YouTube have strict moderation policies prohibiting hate speech, violence and harassment 57 . To address harmful content, Facebook follows a ‘remove, reduce, inform’ strategy and uses a combination of human reviewers and artificial intelligence to enforce its policies 58 . YouTube has a similar set of community guidelines, covering a wide range of behaviours such as vulgar language 59 and harassment 60 , and, in general, it does not allow hate speech or violence against individuals or groups based on various attributes 61 . To ensure that these guidelines are respected, the platform uses a mix of artificial intelligence algorithms and human reviewers 62 .

Twitter also has a comprehensive content moderation policy and specific rules against hateful conduct 63 , 64 . They use automation 65 and human review in the moderation process 66 . At the date of submission, Twitter’s content policies have remained unchanged since Elon Musk’s takeover, except that they ceased enforcing their COVID-19 misleading information policy on 23 November 2022. Their policy enforcement has faced criticism for inconsistency 67 .

Reddit falls somewhere in between regarding how strict its moderation policy is. Reddit’s content policy has eight rules, including prohibiting violence, harassment and promoting hate based on identity or vulnerability 68 , 69 . Reddit relies heavily on user reports and volunteer moderators. Thus, it could be considered more lenient than Facebook, YouTube and Twitter regarding enforcing rules. In October 2022, Reddit announced that they intend to update their enforcement practices to apply automation in content moderation 70 .

By contrast, Telegram, Gab and Voat take a more hands-off approach with fewer restrictions on content. Telegram has ambiguity in its guidelines, which arises from broad or subjective terms and can lead to different interpretations 71 . Although they mentioned they may use automated algorithms to analyse messages, Telegram relies mainly on users to report a range of content, such as violence, child abuse, spam, illegal drugs, personal details and pornography 72 . According to Telegram’s privacy policy, reported content may be checked by moderators and, if it is confirmed to violate their terms, temporary or permanent restrictions may be imposed on the account 73 . Gab’s Terms of Service allow all speech protected under the First Amendment to the US Constitution, and unlawful content is removed. They state that they do not review material before it is posted on their website and cannot guarantee prompt removal of illegal content after it has been posted 74 . Voat was once known as a ‘free-speech’ alternative to Reddit and allowed content even if it may be considered offensive or controversial 56 .

Usenet is a decentralized online discussion system created in 1979. Owing to its decentralized nature, Usenet has been difficult to moderate effectively, and it has a reputation for being a place where controversial and even illegal content can be posted without consequence. Each individual group on Usenet can have its own moderators, who are responsible for monitoring and enforcing their group’s rules, and there is no single set of rules that applies to the entire platform 75 .

Logarithmic binning and conversation size

Owing to the heavy-tailed distributions of conversation length (Extended Data Fig. 1 ), we used logarithmic binning to plot the figures and perform the analyses. Thus, according to its length, each thread of each dataset is assigned to 1 out of 21 bins. To ensure a minimal number of points in each bin, we iteratively change the left bound of the last bin so that it contains at least N  = 50 elements (we set N  = 100 in the case of Facebook news, due to its larger size). Specifically, considering threads ordered by increasing length, the size of the largest thread is set to that of the second-largest one, and the binning is recalculated accordingly until the last bin contains at least N points.

For visualization purposes, we provide a normalization of the logarithmic binning outcome that consists of mapping discrete points into coordinates of the x axis such that the bins correspond to {0, 0.05, 0.1, ..., 0.95, 1}.
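A minimal sketch of this binning procedure, under one plausible reading of the description above, is given below; the function and variable names, as well as the exact merging rule, are illustrative rather than the study's actual implementation.

```python
import numpy as np

def log_binning(thread_lengths, n_bins=21, min_last_bin=50):
    """Assign each thread to one of n_bins logarithmic bins of its length,
    iteratively shrinking the largest threads until the last bin holds at
    least min_last_bin threads (illustrative reconstruction)."""
    lengths = np.sort(np.asarray(thread_lengths, dtype=float))
    while True:
        edges = np.logspace(np.log10(lengths.min()), np.log10(lengths.max()), n_bins + 1)
        # np.digitize gives indices in 1..n_bins+1; clip the maximum into the last bin
        bins = np.clip(np.digitize(lengths, edges), 1, n_bins)
        if np.sum(bins == n_bins) >= min_last_bin or len(np.unique(lengths)) < 2:
            return lengths, bins, edges
        # shrink the largest thread(s) to the second-largest length and re-bin
        second_largest = np.max(lengths[lengths < lengths.max()])
        lengths[lengths == lengths.max()] = second_largest

# normalized x-axis positions used for plotting: bins map to {0, 0.05, ..., 1}
positions = np.linspace(0, 1, 21)
```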

To perform this part of the analysis, we select conversations belonging to the [0.7, 1] interval of the normalized logarithmic binning of thread length. This interval ensures that the conversations are sufficiently long and that we have a substantial number of threads. Participation and toxicity trends are obtained by applying a linear binning of 21 elements to the chronologically ordered sequence of comments of each such conversation (that is, each thread). A breakdown of the resulting datasets is provided in Supplementary Table 2 .
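For example, the toxicity trend of a single thread can be obtained by splitting its chronologically ordered comments into 21 intervals and computing the fraction of toxic comments in each, as in the sketch below (binary toxicity flags are assumed to be precomputed; names are illustrative).

```python
import numpy as np

def toxicity_trend(comment_times, toxic_flags, n_bins=21):
    """Fraction of toxic comments in each of n_bins chronological intervals
    of a single thread (illustrative reconstruction of the linear binning)."""
    order = np.argsort(comment_times)
    toxic = np.asarray(toxic_flags)[order]
    # split the ordered sequence of comments into n_bins consecutive chunks
    chunks = np.array_split(toxic, n_bins)
    return np.array([chunk.mean() if len(chunk) else np.nan for chunk in chunks])

# averaging the per-thread trends bin by bin yields the dataset-level trend
```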

Finally, to assess the equality of the growth rates of participation values in toxic and non-toxic threads (see the ‘Conversation evolution and toxicity’ section), we implemented the following linear regression model:

y = β0 + β1 x + β2 (x × T) + ε,

where y is the participation measured in a comment interval, x is the normalized position of that interval within the thread, T is an indicator variable equal to 1 for toxic threads and 0 otherwise, and the term β2 accounts for the effect that being a toxic conversation has on the growth of participation. Our results show that β2 is not significantly different from 0 in most original and validation datasets (Supplementary Tables 8 and 11).
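A model of this kind can be fitted with standard regression tooling. The snippet below is a minimal sketch using statsmodels on synthetic stand-in data (the column names, the synthetic values and the exact model specification are ours, not the study's); a main effect for the toxicity dummy could also be added without changing the interpretation of the interaction term.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic stand-in data: one row per (thread, interval).
n = 2000
position = rng.uniform(0, 1, n)            # normalized position x of the interval
toxic = rng.integers(0, 2, n)              # T: 1 if the thread is toxic, 0 otherwise
participation = 0.8 - 0.1 * position + rng.normal(0, 0.05, n)  # same slope in both groups

df = pd.DataFrame({"participation": participation, "position": position, "toxic": toxic})

# 'position:toxic' is the interaction term whose coefficient plays the role of beta_2
model = smf.ols("participation ~ position + position:toxic", data=df).fit()
print(model.params["position:toxic"], model.pvalues["position:toxic"])
```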

Toxicity detection and validation of the models used

The problem of detecting toxicity is highly debated, to the point that there is currently no agreement on the very definition of toxic speech 64 , 76 . A toxic comment can be regarded as one that includes obscene or derogatory language 32 , that uses harsh, abusive language and personal attacks 33 , or that contains extremism, violence and harassment 11 , just to give a few examples. Even though toxic speech should, in principle, be distinguished from hate speech, which is commonly more related to targeted attacks that denigrate a person or a group on the basis of attributes such as race, religion, gender, sex or sexual orientation 77 , it sometimes may also be used as an umbrella term 78 , 79 . This lack of agreement directly reflects the challenging and inherently subjective nature of the concept of toxicity.

The complexity of the topic makes it particularly difficult to assess the reliability of natural language processing models for automatic toxicity detection, despite the impressive improvements in the field. Modern natural language processing models, such as Perspective API, are deep learning models that leverage word-embedding techniques to build representations of words as vectors in a high-dimensional space, in which a metric distance should reflect the conceptual distance among words, therefore providing linguistic context. A primary concern regarding toxicity detection models is their limited ability to contextualize conversations 11 , 80 . These models often struggle to incorporate factors beyond the text itself, such as the participants’ personal characteristics, motivations, relationships, group memberships and the overall tone of the discussion 11 . Consequently, what is considered to be toxic content can vary significantly among different groups, such as ethnicities or age groups 81 , leading to potential biases. These biases may stem from the annotators’ backgrounds and the datasets used for training, which might not adequately represent cultural heterogeneity. Moreover, subtle forms of toxic content, like indirect allusions, memes and inside jokes targeted at specific groups, can be particularly challenging to detect. Word embeddings equip current classifiers with a rich linguistic context, enhancing their ability to recognize a wide range of patterns characteristic of toxic expression. However, the requirements for understanding the broader context of a conversation, such as personal characteristics, motivations and group dynamics, remain beyond the scope of automatic detection models.

We acknowledge these inherent limitations in our approach. Nonetheless, reliance on automatic detection models is essential for large-scale analyses of online toxicity like the one conducted in this study. We specifically resort to the Perspective API for this task, as it represents the state of the art in automatic toxicity detection, offering a balance between linguistic nuance and scalable analysis capabilities. To define an appropriate classification threshold, we draw from the existing literature 64 , which uses 0.6 as the threshold for considering a comment to be toxic. This threshold can also be considered a reasonable one because, according to the developer guidelines offered by Perspective, it indicates that the majority of a sample of readers, namely 6 out of 10, would perceive that comment as toxic.

Because of the limitations mentioned above (for a criticism of Perspective API, see ref. 82 ), we validate our results by performing a comparative analysis using two other toxicity detectors: Detoxify ( https://github.com/unitaryai/detoxify ), which is similar to Perspective, and IMSYPP, a classifier developed for a European project on hate speech 16 ( https://huggingface.co/IMSyPP ). Supplementary Table 14 reports the percentages of agreement among the three models in classifying 100,000 comments taken randomly from each of our datasets. For Detoxify, we used the same binary toxicity threshold (0.6) as used with Perspective. Although IMSYPP operates on a distinct definition of toxicity, as outlined previously 16 , our comparative analysis shows a general agreement in the results. This alignment, despite the differences in underlying definitions and methodologies, underscores the robustness of our findings across various toxicity detection frameworks. Moreover, we performed the core analyses of this study using all classifiers on a further, vast and heterogeneous dataset. As shown in Supplementary Figs. 1 and 2 , the results regarding the increase in toxicity with conversation size and those regarding user participation and toxicity are quantitatively very similar across classifiers.

Furthermore, we verified the stability of our findings under different toxicity thresholds. Although the main analyses in this paper use the threshold value recommended by the Perspective API, set at 0.6, to minimize false positives, our results remain consistent even when applying a less conservative threshold of 0.5. This is demonstrated in Extended Data Fig. 5 , confirming the robustness of our observations across varying toxicity levels. For this study, we used the API support for languages prevalent in the European and American continents, including English, Spanish, French, Portuguese, German, Italian, Dutch, Polish, Swedish and Russian. Detoxify also offers multilingual support. However, IMSYPP is limited to English and Italian text, a factor considered in our comparative analysis.
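As an illustration of the comparison between detectors, the sketch below binarizes toxicity scores at the 0.6 threshold and computes the percentage agreement between two classifiers. The Detoxify call follows the library's documented interface, whereas the Perspective scores are placeholder values standing in for responses from its REST API; none of this is the study's released code.

```python
import numpy as np
from detoxify import Detoxify

comments = ["you are completely wrong and stupid", "thanks, that was a helpful answer"]

# Detoxify multilingual model; predict() returns a dict of per-attribute score lists.
detox_scores = np.array(Detoxify("multilingual").predict(comments)["toxicity"])

# Placeholder values standing in for Perspective API TOXICITY summary scores.
perspective_scores = np.array([0.71, 0.03])

THRESHOLD = 0.6  # a comment is labelled toxic if its score is at least 0.6
detox_toxic = detox_scores >= THRESHOLD
persp_toxic = perspective_scores >= THRESHOLD

agreement = np.mean(detox_toxic == persp_toxic)
print(f"agreement between classifiers: {agreement:.2%}")
```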

Polarization and user leaning attribution

Our approach to measuring controversy in a conversation is based on estimating the degree of political partisanship among the participants. This measure is closely related to the political science concept of political polarization, that is, the process by which political attitudes diverge from moderate positions and gravitate towards ideological extremes 83 . By quantifying the level of partisanship within discussions, we aim to provide insights into the extent and nature of polarization in online debates. In this context, it is important to distinguish between ‘ideological polarization’ and ‘affective polarization’. Ideological polarization refers to divisions based on political viewpoints, whereas affective polarization is characterized by positive emotions towards members of one’s group and hostility towards those of opposing groups 84 , 85 . Here we focus specifically on ideological polarization; the following description of our procedure for attributing user political leanings clarifies this focus.

On social media, the individual leaning of a user towards a topic can be inferred from the content they produce or from the endorsement they show towards specific content. In this study, we consider the endorsement shown by users towards news outlets whose political leaning has been evaluated by trustworthy external sources. Although not without limitations, which we address below, this is a standard approach that has been used in several studies and has become a common and established practice in the field of social media analysis, owing to its practicality and effectiveness in providing a broad understanding of political dynamics on these platforms 1 , 43 , 86 , 87 , 88 .

We label news outlets with a political score based on the information reported by Media Bias/Fact Check (MBFC) ( https://mediabiasfactcheck.com ), integrated with the equivalent information from Newsguard ( https://www.newsguardtech.com/ ). MBFC is an independent fact-checking organization that rates news outlets on the basis of the reliability and the political bias of the content that they produce and share. Similarly, Newsguard is a tool created by an international team of journalists that provides news outlet trust and political bias scores. Following standard methods used in the literature 1 , 43 , we calculated the individual leaning of a user, l ∈ [−1, 1], as the average of the leaning scores lc ∈ [−1, 1] attributed to each piece of content that the user produced or shared, where lc results from mapping the political scores provided by MBFC and Newsguard, respectively, as follows: [left, centre-left, centre, centre-right, right] to [−1, −0.5, 0, 0.5, 1] and [far left, left, right, far right] to [−1, −0.5, 0.5, 1].

Our datasets have different structures, so we evaluated user leanings in different ways. For Facebook News, we assign a leaning score to users who posted a like at least three times and commented at least three times under news outlet pages that have a political score. For Twitter News, a leaning is assigned to users who posted at least 15 comments under scored news outlet pages. For Twitter Vaccines and Gab, we consider users who shared content produced by scored news outlet pages at least three times. A limitation of our approach is that engaging with politically aligned content does not always imply agreement; users may interact with opposing viewpoints for critical discussion.
However, research indicates that users predominantly share content aligning with their own views, especially in politically charged contexts 87 , 89 , 90 . Moreover, our method captures users who actively express their political leanings, omitting the ‘passive’ ones. This is due to the lack of available data on users who do not explicitly state their opinions. Nevertheless, analysing active users offers valuable insights into the discourse of those most engaged and influential on social media platforms.
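A minimal sketch of the leaning-attribution step, using the label-to-score mappings given above, is shown below; the data structures, thresholds and helper function are illustrative.

```python
import numpy as np

# Label-to-score mappings described in the text.
MBFC_MAP = {"left": -1.0, "centre-left": -0.5, "centre": 0.0, "centre-right": 0.5, "right": 1.0}
NEWSGUARD_MAP = {"far left": -1.0, "left": -0.5, "right": 0.5, "far right": 1.0}

def user_leaning(endorsed_outlet_labels, mapping=MBFC_MAP, min_items=3):
    """Average leaning of the content a user endorsed; None if the user
    interacted with fewer than min_items scored outlets."""
    scores = [mapping[label] for label in endorsed_outlet_labels if label in mapping]
    if len(scores) < min_items:
        return None
    return float(np.mean(scores))

print(user_leaning(["left", "centre-left", "centre"]))  # -> -0.5
```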

Burst analysis

We applied the Kleinberg burst detection algorithm 46 (see the ‘Controversy and toxicity’ section) to all conversations with at least 50 comments in a dataset. In our analysis, we randomly sample up to 5,000 conversations, each containing a specific number of comments. To ensure the reliability of our data, we exclude conversations with an excessive number of double timestamps, defined as more than 10 consecutive ones or more than 100 within the first 24 h. This criterion helps to mitigate the influence of bots, which could distort the patterns of human activity. Furthermore, we focus on the first 24 h of each thread to analyse streams of comments during their peak activity period. Consequently, Usenet was excluded from this analysis: its unique usage characteristics render such a time-constrained analysis inappropriate, as its activity patterns do not align with those of the other platforms under consideration. By reconstructing the density profile of the comment stream, the algorithm divides the entire stream’s interval into subintervals on the basis of their level of intensity. Burst levels are labelled with discrete positive values, with higher levels corresponding to segments of higher activity. To avoid considering flat-density phases, threads with a maximum burst level equal to 2 are excluded from this analysis. To assess whether a higher intensity of comments results in higher comment toxicity, we perform a Mann–Whitney U-test 91 with Bonferroni correction for multiple testing between the distributions of the fraction of toxic comments ti in three intensity phases: during the peak of engagement and at the highest burst levels before and after it. Extended Data Table 4 shows the corrected P values of each test, at a 0.99 confidence level, with H1 indicated in the column header. An example of the distribution of the frequency of toxic comments in threads at the three phases of a conversation (pre-peak, peak and post-peak) is reported in Fig. 4c.
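The phase comparison can be illustrated with SciPy's Mann–Whitney U-test and a Bonferroni correction, as in the sketch below; the per-phase distributions here are synthetic stand-ins for the observed fractions of toxic comments.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)

# Fractions of toxic comments per thread in the three burst phases (synthetic stand-ins).
pre_peak = rng.beta(2, 18, 500)
peak = rng.beta(3, 17, 500)
post_peak = rng.beta(2, 18, 500)

# One-sided tests: is toxicity at the peak greater than before/after the peak?
tests = {
    "peak > pre-peak": mannwhitneyu(peak, pre_peak, alternative="greater").pvalue,
    "peak > post-peak": mannwhitneyu(peak, post_peak, alternative="greater").pvalue,
}

n_tests = len(tests)
for name, p in tests.items():
    p_corrected = min(p * n_tests, 1.0)  # Bonferroni correction
    print(f"{name}: corrected p = {p_corrected:.4f}")
```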

Toxicity detection on Usenet

As discussed in the section on toxicity detection and the Perspective API above, automatic detectors derive their understanding of toxicity from the annotated datasets that they are trained on. The Perspective API is predominantly trained on recent texts, and its human labellers conform to contemporary cultural norms. Thus, although our Usenet data date back to the early 1990s, we discuss here the viability of applying Perspective API to Usenet and provide a validation analysis. Contemporary society, especially in Western contexts, is more sensitive to issues of toxicity, including gender, race and sexual orientation, compared with a few decades ago. This means that some comments identified as toxic today, including those from older platforms like Usenet, might not have been considered as such in the past. However, this discrepancy does not significantly affect our analysis, which is centred on current standards of toxicity. On the other hand, changes in linguistic features may have some repercussions: there may be words and locutions that were frequently used in the 1990s but appear only sparsely in today’s language, making Perspective potentially less effective in classifying short texts that contain them. We therefore evaluated the impact that such a scenario could have on our results. In light of the above considerations, we treat texts labelled as toxic as correctly classified, whereas we assume that there is a fixed probability p that a comment may be incorrectly labelled as non-toxic. Consequently, we randomly designate a proportion p of non-toxic comments, relabel them as toxic and compute the toxicity versus conversation size trend (Fig. 2 ) on the altered dataset for various values of p. Specifically, for each value, we simulate 500 different trends, collecting their regression slopes to obtain a null distribution. To assess whether the probability of error could lead to significant differences in the observed trend, we compute the fraction f of slopes lying outside the interval (−|s|, |s|), where s is the slope of the observed trend. We report the results in Supplementary Table 9 for different values of p. In agreement with our previous analyses, we assume that the slope differs significantly from those obtained from randomized data if f is less than 0.05.
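A sketch of this robustness check on synthetic stand-in data could look as follows; the binning and regression details are simplified with respect to the actual analysis, and all names and values are illustrative.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(2)

def flipped_slope(size_bins, toxic_labels, p):
    """Slope of mean toxicity vs. size bin after flipping a fraction p of
    non-toxic comments to toxic (size_bins: bin index of each comment's thread)."""
    labels = toxic_labels.copy()
    non_toxic_idx = np.flatnonzero(labels == 0)
    flip = rng.choice(non_toxic_idx, size=int(p * len(non_toxic_idx)), replace=False)
    labels[flip] = 1
    bins = np.unique(size_bins)
    mean_tox = np.array([labels[size_bins == b].mean() for b in bins])
    return linregress(bins, mean_tox).slope

# synthetic stand-in data
size_bins = rng.integers(1, 22, 20000)           # bin index of each comment's thread
toxic = (rng.random(20000) < 0.1).astype(int)    # observed toxic labels

observed = flipped_slope(size_bins, toxic, p=0.0)
null_slopes = np.array([flipped_slope(size_bins, toxic, p=0.05) for _ in range(500)])
f = np.mean(np.abs(null_slopes) > abs(observed))  # fraction outside (-|s|, |s|)
print(f"fraction of randomized slopes outside the observed interval: {f:.3f}")
```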

We observed that only the Usenet Talk dataset shows sensitivity to small error probabilities, and the others do not show a significant difference. Consequently, our results indicate that Perspective API is suitable for application to Usenet data in our analyses, notwithstanding the potential linguistic and cultural shifts that might affect the classifier’s reliability with older texts.

Toxicity of short conversations

Our study focuses on the relationship between user participation and the toxicity of conversations, particularly in engaged or prolonged discussions. A potential concern is that concentrating on longer threads overlooks conversations that terminate quickly due to early toxicity, thereby potentially biasing our analysis. To address this, we analysed shorter conversations, comprising 6 to 20 comments, in each dataset. In particular, we computed the distributions of the toxicity scores of the first and last three comments in each thread. This approach helps to ensure that our analysis accounts for a range of conversation lengths and patterns of toxicity development, providing a more comprehensive understanding of the dynamics at play. As shown in Supplementary Fig. 3 , for each dataset, the distributions of the toxicity scores display high similarity, meaning that, in short conversations, the last comments are not significantly more toxic than the initial ones; the potential effects mentioned above therefore do not undermine our conclusions. Regarding our analysis of longer threads, we note that different situations can give rise to similar participation trends. For example, high participation can result from many users taking part in a conversation, but also from a small group of users who all contribute equally over time; likewise, in very large discussions, the contributions of individual outliers may remain hidden. By measuring participation, these and other borderline cases may not be distinguishable from the statistically most likely discussion dynamics but, ultimately, this lack of discriminatory power has no implications for our findings or for the validity of the conclusions that we draw.
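For illustration, the first-versus-last comparison for short threads can be computed as in the sketch below; the thread structure and scores are synthetic stand-ins rather than data from the study.

```python
import numpy as np

def first_last_scores(threads):
    """Collect toxicity scores of the first and last three comments of each
    short thread (threads: list of chronologically ordered score lists)."""
    first, last = [], []
    for scores in threads:
        if 6 <= len(scores) <= 20:  # short conversations only
            first.extend(scores[:3])
            last.extend(scores[-3:])
    return np.array(first), np.array(last)

# synthetic stand-in threads with 6 to 20 comments each
rng = np.random.default_rng(3)
threads = [list(rng.beta(2, 10, rng.integers(6, 21))) for _ in range(1000)]

first, last = first_last_scores(threads)
print(f"mean toxicity, first three: {first.mean():.3f}; last three: {last.mean():.3f}")
```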

Reporting summary

Further information on research design is available in the  Nature Portfolio Reporting Summary linked to this article.

Data availability

Facebook, Twitter and YouTube data are made available in accordance with their respective terms of use. IDs of comments used in this work are provided at Open Science Framework ( https://doi.org/10.17605/osf.io/fq5dy ). For the remaining platforms (Gab, Reddit, Telegram, Usenet and Voat), all of the necessary information to recreate the datasets used in this study can be found in the ‘Data collection’ section.

Code availability

The code used for the analyses presented in the Article is available at Open Science Framework ( https://doi.org/10.17605/osf.io/fq5dy ). The repository includes dummy datasets to illustrate the required data format and make the code run.

Cinelli, M., Morales, G. D. F., Galeazzi, A., Quattrociocchi, W. & Starnini, M. The echo chamber effect on social media. Proc. Natl Acad. Sci. USA 118 , e2023301118 (2021).

Tucker, J. A. et al. Social media, political polarization, and political disinformation: a review of the scientific literature. Preprint at SSRN https://doi.org/10.2139/ssrn.3144139 (2018).

González-Bailón, S. et al. Asymmetric ideological segregation in exposure to political news on Facebook. Science 381 , 392–398 (2023).

Guess, A. et al. How do social media feed algorithms affect attitudes and behavior in an election campaign? Science 381 , 398–404 (2023).

Del Vicario, M. et al. The spreading of misinformation online. Proc. Natl Acad. Sci. USA 113 , 554–559 (2016).

Bakshy, E., Messing, S. & Adamic, L. A. Exposure to ideologically diverse news and opinion on Facebook. Science 348 , 1130–1132 (2015).

Bail, C. A. et al. Exposure to opposing views on social media can increase political polarization. Proc. Natl Acad. Sci. USA 115 , 9216–9221 (2018).

Nyhan, B. et al. Like-minded sources on Facebook are prevalent but not polarizing. Nature 620 , 137–144 (2023).

Guess, A. et al. Reshares on social media amplify political news but do not detectably affect beliefs or opinions. Science 381 , 404–408 (2023).

Castaño-Pulgaŕın, S. A., Suárez-Betancur, N., Vega, L. M. T. & López, H. M. H. Internet, social media and online hate speech. Systematic review. Aggress. Viol. Behav. 58 , 101608 (2021).

Sheth, A., Shalin, V. L. & Kursuncu, U. Defining and detecting toxicity on social media: context and knowledge are key. Neurocomputing 490 , 312–318 (2022).

Lupu, Y. et al. Offline events and online hate. PLoS ONE 18 , e0278511 (2023).

Gentzkow, M. & Shapiro, J. M. Ideological segregation online and offline. Q. J. Econ. 126 , 1799–1839 (2011).

Aichner, T., Grünfelder, M., Maurer, O. & Jegeni, D. Twenty-five years of social media: a review of social media applications and definitions from 1994 to 2019. Cyberpsychol. Behav. Social Netw. 24 , 215–222 (2021).

Lazer, D. M. et al. The science of fake news. Science 359 , 1094–1096 (2018).

Cinelli, M. et al. Dynamics of online hate and misinformation. Sci. Rep. 11 , 22083 (2021).

González-Bailón, S. & Lelkes, Y. Do social media undermine social cohesion? A critical review. Soc. Issues Pol. Rev. 17 , 155–180 (2023).

Roozenbeek, J. & Zollo, F. Democratize social-media research—with access and funding. Nature 612 , 404–404 (2022).

Dutton, W. H. Network rules of order: regulating speech in public electronic fora. Media Cult. Soc. 18 , 269–290 (1996).

Papacharissi, Z. Democracy online: civility, politeness, and the democratic potential of online political discussion groups. N. Media Soc. 6 , 259–283 (2004).

Coe, K., Kenski, K. & Rains, S. A. Online and uncivil? Patterns and determinants of incivility in newspaper website comments. J. Commun. 64 , 658–679 (2014).

Anderson, A. A., Brossard, D., Scheufele, D. A., Xenos, M. A. & Ladwig, P. The “nasty effect:” online incivility and risk perceptions of emerging technologies. J. Comput. Med. Commun. 19 , 373–387 (2014).

Garrett, R. K. Echo chambers online?: Politically motivated selective exposure among internet news users. J. Comput. Med. Commun. 14 , 265–285 (2009).

Del Vicario, M. et al. Echo chambers: emotional contagion and group polarization on Facebook. Sci. Rep. 6 , 37825 (2016).

Garimella, K., De Francisci Morales, G., Gionis, A. & Mathioudakis, M. Echo chambers, gatekeepers, and the price of bipartisanship. In Proc. 2018 World Wide Web Conference , 913–922 (International World Wide Web Conferences Steering Committee, 2018).

Johnson, N. et al. Hidden resilience and adaptive dynamics of the global online hate ecology. Nature 573 , 261–265 (2019).

Fortuna, P. & Nunes, S. A survey on automatic detection of hate speech in text. ACM Comput. Surv. 51 , 85 (2018).

Phadke, S. & Mitra, T. Many faced hate: a cross platform study of content framing and information sharing by online hate groups. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems 1–13 (Association for Computing Machinery, 2020).

Xia, Y., Zhu, H., Lu, T., Zhang, P. & Gu, N. Exploring antecedents and consequences of toxicity in online discussions: a case study on Reddit. Proc. ACM Hum. Comput. Interact. 4 , 108 (2020).

Sipka, A., Hannak, A. & Urman, A. Comparing the language of qanon-related content on Parler, GAB, and Twitter. In Proc. 14th ACM Web Science Conference 2022 411–421 (Association for Computing Machinery, 2022).

Fortuna, P., Soler, J. & Wanner, L. Toxic, hateful, offensive or abusive? What are we really classifying? An empirical analysis of hate speech datasets. In Proc. 12th Language Resources and Evaluation Conference (eds Calzolari, E. et al.) 6786–6794 (European Language Resources Association, 2020).

Davidson, T., Warmsley, D., Macy, M. & Weber, I. Automated hate speech detection and the problem of offensive language. In Proc. International AAAI Conference on Web and Social Media 11 (Association for the Advancement of Artificial Intelligence, 2017).

Kolhatkar, V. et al. The SFU opinion and comments corpus: a corpus for the analysis of online news comments. Corpus Pragmat. 4 , 155–190 (2020).

Lees, A. et al. A new generation of perspective API: efficient multilingual character-level transformers. In KDD'22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining 3197–3207 (Association for Computing Machinery, 2022).

Vidgen, B. & Derczynski, L. Directions in abusive language training data, a systematic review: garbage in, garbage out. PLoS ONE 15 , e0243300 (2020).

Ross, G. J. & Jones, T. Understanding the heavy-tailed dynamics in human behavior. Phys. Rev. E 91 , 062809 (2015).

Choi, D., Chun, S., Oh, H., Han, J. & Kwon, T. T. Rumor propagation is amplified by echo chambers in social media. Sci. Rep. 10 , 310 (2020).

Beel, J., Xiang, T., Soni, S. & Yang, D. Linguistic characterization of divisive topics online: case studies on contentiousness in abortion, climate change, and gun control. In Proc. International AAAI Conference on Web and Social Media Vol. 16, 32–42 (Association for the Advancement of Artificial Intelligence, 2022).

Saveski, M., Roy, B. & Roy, D. The structure of toxic conversations on Twitter. In Proc. Web Conference 2021 (eds Leskovec, J. et al.) 1086–1097 (Association for Computing Machinery, 2021).

Juul, J. L. & Ugander, J. Comparing information diffusion mechanisms by matching on cascade size. Proc. Natl Acad. Sci. USA 118 , e2100786118 (2021).

Fariello, G., Jemielniak, D. & Sulkowski, A. Does Godwin’s law (rule of Nazi analogies) apply in observable reality? An empirical study of selected words in 199 million Reddit posts. N. Media Soc. 26 , 14614448211062070 (2021).

Qiu, J., Lin, Z. & Shuai, Q. Investigating the opinions distribution in the controversy on social media. Inf. Sci. 489 , 274–288 (2019).

Garimella, K., Morales, G. D. F., Gionis, A. & Mathioudakis, M. Quantifying controversy on social media. ACM Trans. Soc. Comput. 1 , 3 (2018).

NLPTown. bert-base-multilingual-uncased-sentiment, huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment (2023).

Ta, H. T., Rahman, A. B. S., Najjar, L. & Gelbukh, A. Transfer Learning from Multilingual DeBERTa for Sexism Identification CEUR Workshop Proceedings Vol. 3202 (CEUR-WS, 2022).

Kleinberg, J. Bursty and hierarchical structure in streams. Data Min. Knowl. Discov. 7 , 373–397 (2003).

Zollo, F. et al. Debunking in a world of tribes. PLoS ONE 12 , e0181821 (2017).

Albrecht, D. Vaccination, politics and COVID-19 impacts. BMC Publ. Health 22 , 96 (2022).

Falkenberg, M. et al. Growing polarization around climate change on social media. Nat. Clim. Change 12 , 1114–1121 (2022).

Schmidt, A. L., Zollo, F., Scala, A., Betsch, C. & Quattrociocchi, W. Polarization of the vaccination debate on Facebook. Vaccine 36 , 3606–3612 (2018).

Schmidt, A. L. et al. Anatomy of news consumption on Facebook. Proc. Natl Acad. Sci. USA 114 , 3035–3039 (2017).

Del Vicario, M., Zollo, F., Caldarelli, G., Scala, A. & Quattrociocchi, W. Mapping social dynamics on Facebook: the brexit debate. Soc. Netw. 50 , 6–16 (2017).

Hunnicutt, T. & Dave, P. Gab.com goes offline after Pittsburgh synagogue shooting. Reuters , www.reuters.com/article/uk-pennsylvania-shooting-gab-idUKKCN1N20QN (29 October 2018).

Valensise, C. M. et al. Lack of evidence for correlation between COVID-19 infodemic and vaccine acceptance. Preprint at arxiv.org/abs/2107.07946 (2021).

Quattrociocchi, A., Etta, G., Avalle, M., Cinelli, M. & Quattrociocchi, W. in Social Informatics (eds Hopfgartner, F. et al.) 245–256 (Springer, 2022).

Mekacher, A. & Papasavva, A. “I can’t keep it up” a dataset from the defunct voat.co news aggregator. In Proc. International AAAI Conference on Web and Social Media Vol. 16, 1302–1311 (AAAI, 2022).

Facebook Community Standards , transparency.fb.com/policies/community-standards/hate-speech/ (Facebook, 2023).

Rosen, G. & Lyons, T. Remove, reduce, inform: new steps to manage problematic content. Meta , about.fb.com/news/2019/04/remove-reduce-inform-new-steps/ (10 April 2019).

Vulgar Language Policy , support.google.com/youtube/answer/10072685? (YouTube, 2023).

Harassment & Cyberbullying Policies , support.google.com/youtube/answer/2802268 (YouTube, 2023).

Hate Speech Policy , support.google.com/youtube/answer/2801939 (YouTube, 2023).

How Does YouTube Enforce Its Community Guidelines? , www.youtube.com/intl/enus/howyoutubeworks/policies/community-guidelines/enforcing-community-guidelines (YouTube, 2023).

The Twitter Rules , help.twitter.com/en/rules-and-policies/twitter-rules (Twitter, 2023).

Hateful Conduct , help.twitter.com/en/rules-and-policies/hateful-conduct-policy (Twitter, 2023).

Gorwa, R., Binns, R. & Katzenbach, C. Algorithmic content moderation: technical and political challenges in the automation of platform governance. Big Data Soc. 7 , 2053951719897945 (2020).

Our Range of Enforcement Options , help.twitter.com/en/rules-and-policies/enforcement-options (Twitter, 2023).

Elliott, V. & Stokel-Walker, C. Twitter’s moderation system is in tatters. WIRED (17 November 2022).

Reddit Content Policy , www.redditinc.com/policies/content-policy (Reddit, 2023).

Promoting Hate Based on Identity or Vulnerability , www.reddithelp.com/hc/en-us/articles/360045715951 (Reddit, 2023).

Malik, A. Reddit acqui-hires team from ML content moderation startup Oterlu. TechCrunch , tcrn.ch/3yeS2Kd (4 October 2022).

Terms of Service , telegram.org/tos (Telegram, 2023).

Durov, P. The rules of @telegram prohibit calls for violence and hate speech. We rely on our users to report public content that violates this rule. Twitter , twitter.com/durov/status/917076707055751168?lang=en (8 October 2017).

Telegram Privacy Policy , telegram.org/privacy (Telegram, 2023).

Terms of Service , gab.com/about/tos (Gab, 2023).

Salzenberg, C. & Spafford, G. What is Usenet? , www0.mi.infn.it/ ∼ calcolo/Wis usenet.html (1995).

Castelle, M. The linguistic ideologies of deep abusive language classification. In Proc. 2nd Workshop on Abusive Language Online (ALW2) (eds Fišer, D. et al.) 160–170, aclanthology.org/W18-5120 (Association for Computational Linguistics, 2018).

Tontodimamma, A., Nissi, E. & Sarra, A. E. A. Thirty years of research into hate speech: topics of interest and their evolution. Scientometrics 126 , 157–179 (2021).

Sap, M. et al. Annotators with attitudes: how annotator beliefs and identities bias toxic language detection. In Proc. 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (eds. Carpuat, M. et al.) 5884–5906 (Association for Computational Linguistics, 2022).

Pavlopoulos, J., Sorensen, J., Dixon, L., Thain, N. & Androutsopoulos, I. Toxicity detection: does context really matter? In Proc. 58th Annual Meeting of the Association for Computational Linguistics (eds Jurafsky, D. et al.) 4296–4305 (Association for Computational Linguistics, 2020).

Yin, W. & Zubiaga, A. Hidden behind the obvious: misleading keywords and implicitly abusive language on social media. Online Soc. Netw. Media 30 , 100210 (2022).

Sap, M., Card, D., Gabriel, S., Choi, Y. & Smith, N. A. The risk of racial bias in hate speech detection. In Proc. 57th Annual Meeting of the Association for Computational Linguistics (eds Kohonen, A. et al.) 1668–1678 (Association for Computational Linguistics, 2019).

Rosenblatt, L., Piedras, L. & Wilkins, J. Critical perspectives: a benchmark revealing pitfalls in PerspectiveAPI. In Proc. Second Workshop on NLP for Positive Impact (NLP4PI) (eds Biester, L. et al.) 15–24 (Association for Computational Linguistics, 2022).

DiMaggio, P., Evans, J. & Bryson, B. Have American’s social attitudes become more polarized? Am. J. Sociol. 102 , 690–755 (1996).

Fiorina, M. P. & Abrams, S. J. Political polarization in the American public. Annu. Rev. Polit. Sci. 11 , 563–588 (2008).

Iyengar, S., Gaurav, S. & Lelkes, Y. Affect, not ideology: a social identity perspective on polarization. Publ. Opin. Q. 76 , 405–431 (2012).

Cota, W., Ferreira, S. & Pastor-Satorras, R. E. A. Quantifying echo chamber effects in information spreading over political communication networks. EPJ Data Sci. 8 , 38 (2019).

Bessi, A. et al. Users polarization on Facebook and Youtube. PLoS ONE 11 , e0159641 (2016).

Bessi, A. et al. Science vs conspiracy: collective narratives in the age of misinformation. PLoS ONE 10 , e0118093 (2015).

Himelboim, I., McCreery, S. & Smith, M. Birds of a feather tweet together: integrating network and content analyses to examine cross-ideology exposure on Twitter. J. Comput. Med. Commun. 18 , 40–60 (2013).

An, J., Quercia, D. & Crowcroft, J. Partisan sharing: Facebook evidence and societal consequences. In Proc. Second ACM Conference on Online Social Networks, COSN ′ 14 13–24 (Association for Computing Machinery, 2014).

Mann, H. B. & Whitney, D. R. On a test of whether one of two random variables is stochastically larger than the other. Ann. Math. Stat. 18 , 50–60 (1947).

Acknowledgements

We thank M. Samory for discussions; T. Quandt and Z. Zhang for suggestions during the review process; and Geronimo Stilton and the Hypnotoad for inspiring the data analysis and result interpretation. The work is supported by IRIS Infodemic Coalition (UK government, grant no. SCH-00001-3391); SERICS (PE00000014) under the NRRP MUR program funded by the EU NextGenerationEU; project CRESP from the Italian Ministry of Health under the program CCM 2022; PON project ‘Ricerca e Innovazione’ 2014-2020; and PRIN Project MUSMA funded by the Italian Ministry of University and Research (MUR) through PRIN 2022 (CUP G53D23002930006) and EU Next-Generation EU (M4 C2 I1.1).

Author information

These authors contributed equally: Michele Avalle, Niccolò Di Marco, Gabriele Etta

Authors and Affiliations

Department of Computer Science, Sapienza University of Rome, Rome, Italy

Michele Avalle, Niccolò Di Marco, Gabriele Etta, Shayan Alipour, Lorenzo Alvisi, Matteo Cinelli & Walter Quattrociocchi

Department of Social Sciences and Economics, Sapienza University of Rome, Rome, Italy

Emanuele Sangiorgio

Department of Communication and Social Research, Sapienza University of Rome, Rome, Italy

Anita Bonetti

Institute of Complex Systems, CNR, Rome, Italy

Antonio Scala

Department of Mathematics, City University of London, London, UK

Andrea Baronchelli

The Alan Turing Institute, London, UK

Contributions

Conception and design: W.Q., M.A., M.C., G.E. and N.D.M. Data collection: G.E. and N.D.M. with collaboration from M.C., M.A. and S.A. Data analysis: G.E., N.D.M., M.A., M.C., W.Q., E.S., A. Bonetti, A. Baronchelli and A.S. Code writing: G.E. and N.D.M. with collaboration from M.A., E.S., S.A. and M.C. All of the authors provided critical feedback and helped to shape the research, analysis and manuscript, and contributed to the preparation of the manuscript.

Corresponding authors

Correspondence to Matteo Cinelli or Walter Quattrociocchi .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Peer review

Peer review information.

Nature thanks Thorsten Quandt, Ziqi Zhang and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data figures and tables

Extended Data Fig. 1 General characteristics of online conversations.

a, Distributions of conversation length (number of comments in a thread). b, Distributions of the time duration (days) of user activity on a platform, for each platform and each topic. c, Distributions of the time duration (days) of threads. Colour-coded legend on the side.

Extended Data Fig. 2 Extremely toxic authors and conversations are rare.

a, Complementary cumulative distribution functions (CCDFs) of the toxicity of authors who posted more than 10 comments. Toxicity is defined, as usual, as the fraction of toxic comments over the total number of comments posted by a user. b, CCDFs of the toxicity of conversations containing more than 10 comments. Colour-coded legend on the side.

Extended Data Fig. 3 User toxicity as conversations evolve.

Mean fraction of toxic comments as conversations progress. The x-axis represents the normalized position of comment intervals in the threads. For each dataset, toxicity is computed in the thread size interval [0.7−1] (see main text and Supplementary Table 2). Trends are reported with their 95% confidence interval. Colour-coded legend on the side.

Extended Data Fig. 4 Toxicity is not associated with conversation lifetime.

Mean toxicity of a, users versus their time of permanence in the dataset and b, threads versus their time duration. Trends are reported with their 95% confidence intervals, using a normalized log-binning. Colour-coded legend on the side.

Extended Data Fig. 5 Results hold for a different toxicity threshold.

Core analyses presented in the paper repeated employing a lower (0.5) binary toxicity classification threshold. a, Mean fraction of toxic comments in conversations versus conversation size, for each dataset (see Fig. 2). Trends are reported with their 95% confidence interval. b, Pearson’s correlation coefficients between user participation and toxicity trends for each dataset. c, Pearson’s correlation coefficients between users’ participation in toxic and non-toxic thread sets, for each dataset. d, Boxplot of the distribution of toxicity (n = 25, min = −0.016, max = 0.020, lower whisker = −0.005, Q1 = −0.005, Q2 = 0.004, Q3 = 0.012, upper whisker = 0.020) and participation (n = 25, min = −0.198, max = −0.022, lower whisker = −0.198, Q1 = −0.109, Q2 = −0.070, Q3 = −0.049, upper whisker = −0.022) trend slopes for all datasets, as resulting from linear regression. The results of the corresponding Mann-Kendall tests for trend assessment are shown in Extended Data Table 5.

Supplementary information

Supplementary information.

Supplementary Information 1–4, including details regarding data collection for validation dataset, Supplementary Figs. 1–3, Supplementary Tables 1–17 and software and coding specifications.

Reporting Summary

Rights and permissions.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article.

Avalle, M., Di Marco, N., Etta, G. et al. Persistent interaction patterns across social media platforms and over time. Nature (2024). https://doi.org/10.1038/s41586-024-07229-y

Received : 30 April 2023

Accepted : 22 February 2024

Published : 20 March 2024

DOI : https://doi.org/10.1038/s41586-024-07229-y

It’s Time to Give Up on Ending Social Media’s Misinformation Problem

There’s a better approach to keeping users safe.

If you don’t trust social media, you should know you’re not alone. Most people surveyed around the world feel the same—in fact, they’ve been saying so for a decade. There is clearly a problem with misinformation and hazardous speech on platforms such as Facebook and X. And before the end of its term this year, the Supreme Court may redefine how that problem is treated.

Over the past few weeks, the Court has heard arguments in three cases that deal with controlling political speech and misinformation online. In the first two, heard last month, lawmakers in Texas and Florida claim that platforms such as Facebook are selectively removing political content that their moderators deem harmful or otherwise against their terms of service; tech companies have argued that they have the right to curate what their users see. Meanwhile, some policy makers believe that content moderation hasn’t gone far enough, and that misinformation still flows too easily through social networks; whether (and how) government officials can directly communicate with tech platforms about removing such content is at issue in the third case, which was put before the Court this week.

We’re Harvard economists who study social media and platform design. (One of us, Scott Duke Kominers, is also a research partner at the crypto arm of a16z, a venture-capital firm with investments in social platforms, and an adviser to Quora.) Our research offers a perhaps counterintuitive solution to disagreements about moderation: Platforms should give up on trying to prevent the spread of information that is simply false, and focus instead on preventing the spread of information that can be used to cause harm. These are related issues, but they’re not the same.

As the presidential election approaches, tech platforms are gearing up for a deluge of misinformation. Civil-society organizations say that platforms need a better plan to combat election misinformation, which some academics expect to reach new heights this year. Platforms say they have plans for keeping sites secure, yet despite the resources devoted to content moderation, fact-checking, and the like, it’s hard to escape the feeling that the tech titans are losing the fight.

Here is the issue: Platforms have the power to block, flag, or mute content that they judge to be false. But blocking or flagging something as false doesn’t necessarily stop users from believing it. Indeed, because many of the most pernicious lies are believed by those inclined to distrust the “establishment,” blocking or flagging false claims can even make things worse.

On December 19, 2020, then-President Donald Trump posted a now-infamous message about election fraud, telling readers to “be there,” in Washington, D.C., on January 6. If you visit that post on Facebook today, you’ll see a sober annotation from the platform itself that “the US has laws, procedures, and established institutions to ensure the integrity of our elections.” That disclaimer is sourced from the Bipartisan Policy Center. But does anyone seriously believe that the people storming the Capitol on January 6, and the many others who cheered them on, would be convinced that Joe Biden won just because the Bipartisan Policy Center told Facebook that everything was okay?

Our research shows that this problem is intrinsic: Unless a platform’s users trust the platform’s motivations and its process, any action by the platform can look like evidence of something it is not. To reach this conclusion, we built a mathematical model. In the model, one user (a “sender”) tries to make a claim to another user (a “receiver”). The claim might be true or false, harmful or not. Between the two users is a platform—or maybe an algorithm acting on its behalf—that can block the sender’s content if it wants to.

We wanted to find out when blocking content can improve outcomes, without a risk of making them worse. Our model, like all models, is an abstraction—and thus imperfectly captures the complexity of actual interactions. But because we wanted to consider all possible policies, not just those that have been tried in practice, our question couldn’t be answered by data alone. So we instead approached it using mathematical logic, treating the model as a kind of wind tunnel to test the effectiveness of different policies.

Our analysis shows that if users trust the platform to both know what’s right and do what’s right (and the platform truly does know what’s true and what isn’t), then the platform can successfully eliminate misinformation. The logic is simple: If users believe the platform is benevolent and all-knowing, then if something is blocked or flagged, it must be false, and if it is let through, it must be true.

You can see the problem, though: Many users don’t trust Big Tech platforms, as those previously mentioned surveys demonstrate. When users don’t trust a platform, even well-meaning attempts to make things better can make things worse. And when the platforms seem to be taking sides, that can add fuel to the very fire they are trying to put out.

Does this mean that content moderation is always counterproductive? Far from it. Our analysis also shows that moderation can be very effective when it blocks information that can be used to do something harmful.

Going back to Trump’s December 2020 post about election fraud, imagine that, instead of alerting users to the sober conclusions of the Bipartisan Policy Center, the platform had simply made it much harder for Trump to communicate the date (January 6) and place (Washington, D.C.) for supporters to gather. Blocking that information wouldn’t have prevented users from believing that the election was stolen—to the contrary, it might have fed claims that tech-sector elites were trying to influence the outcome. Nevertheless, making it harder to coordinate where and when to go might have helped slow the momentum of the eventual insurrection, thus limiting the post’s real-world harms.

Unlike removing misinformation per se, removing information that enables harm can work even if users don’t trust the platform’s motives at all. When it is the information itself that enables the harm, blocking that information blocks the harm as well. A similar logic extends to other kinds of harmful content, such as doxxing and hate speech. There, the content itself—not the beliefs it encourages—is the root of the harm, and platforms do indeed successfully moderate these types of content.

Do we want tech companies to decide what is and is not harmful? Maybe not; the challenges and downsides are clear. But platforms already routinely make judgments about harm—is a post calling for a gathering at a particular place and time that includes the word violent an incitement to violence, or an announcement of an outdoor concert? Clearly the latter if you’re planning to see the Violent Femmes. Often context and language make these judgments apparent enough that an algorithm can determine them. When that doesn’t happen, platforms can rely on internal experts or even independent bodies, such as Meta’s Oversight Board, which handles tricky cases related to the company’s content policies.

And if platforms accept our reasoning, they can divert resources from the misguided task of deciding what is true toward the still hard, but more pragmatic, task of determining what enables harm. Even though misinformation is a huge problem, it’s not one that platforms can solve. Platforms can help keep us safer by focusing on what content moderation can do, and giving up on what it can’t.
