PLOS ONE

How do we raise media bias awareness effectively? Effects of visualizations to communicate bias

Timo Spinde

1 Department of Computer and Information Science, University of Konstanz, Konstanz, Germany

2 School of Electrical, Information and Media Engineering, University of Wuppertal, Wuppertal, Germany

Christin Jeggle

3 Department of Psychology, University of Konstanz, Konstanz, Germany

Magdalena Haupt

Wolfgang Gaissmaier

Helge Giese

Associated Data

Data are available at https://osf.io/e95dh/.

Abstract

Media bias has a substantial impact on individual and collective perception of news. Effective communication that may counteract its potential negative effects still needs to be developed. In this article, we analyze how to facilitate the detection of media bias with visual and textual aids in the form of (a) a forewarning message, (b) text annotations, and (c) political classifiers. In an online experiment, we randomized 985 participants to receive a biased liberal or conservative news article in any combination of the three aids. Meanwhile, their subjective perception of media bias in this article, attitude change, and political ideology were assessed. Both the forewarning message and the annotations increased media bias awareness, whereas the political classification showed no effect. Incongruence between an article’s political position and individual political orientation also increased media bias awareness. Visual aids did not mitigate this effect. Likewise, attitudes remained unaltered.

Introduction

The Internet age has a significant impact on today’s news communication: It allows individuals to access news and information from an ever-increasing variety of sources, at any time, on any subject. Regardless of journalistic standards, media outlets with a wide reach have the power to affect public opinion and shape collective decision-making processes [ 1 ]. However, it is well known that the wording and selection of news in media coverage are often biased and provide limited viewpoints [ 2 ], commonly referred to as media bias . According to Domke and colleagues [ 3 ], media bias is a structural, often wilful defect in news coverage that potentially influences public opinion. Bias can be induced by labeling named entities with terms that are ambiguous in the concepts they allude to (e.g., "illegal immigrants" vs. "illegal aliens" [ 4 ]) or by combining concepts beyond their initial contexts into figurative speech that carries a positive or negative association ("a wave of immigrants flooded the country"). Still, the conceptualization of media bias is complex since biased and balanced reporting cannot be distinguished incisively [ 5 ]. Many definitions exist, and media bias, in general, has been researched from various angles, such as psychology [ 6 ], computer science [ 7 ], linguistics [ 8 ], economics [ 9 ], or political science [ 10 ]. Therefore, we believe advancement in media bias communication is relevant for multiple scientific areas.

Previous research shows the effects of media bias on individual and public perception of news events [ 6 ]. Since the media are citizens’ primary source of political information [ 11 ], associated bias may affect the political beliefs of the audience, party preferences [ 12 ] and even alter voting behavior [ 13 ]. Moreover, exposure to biased information can lead to negative societal outcomes, including group polarization, intolerance of dissent, and political segregation [ 14 ]. It can also affect collective decision-making [ 15 ]. The implications of selective exposure theory intensify the severity of biased news coverage: Researchers observed long ago that people prefer to consume information that fits their worldview and avoid information that challenges these beliefs [ 16 ]. By selecting only confirmatory information, one’s own opinion is reaffirmed, and there is no need to re-evaluate existing stances [ 17 ]. In this way, the unpleasant feeling of cognitive dissonance is avoided [ 18 ]. Isolation in one’s own filter bubble or echo chamber confirms internal biases and might lead to a general decrease in the diversity of news consumption [ 14 ]. This decrease is further exacerbated by recent technological developments like personalized overview features of, e.g., news aggregators [ 19 ]. How partisans select and perceive political news is thus an important question in political communication research [ 20 ]. Therefore, this study tests ways to increase awareness of media bias (which might mitigate its negative impact) and examines the partisan evaluation of the media under transparent bias communication.

Media bias communication

Media bias occurs in various forms, for example, in whether or how a topic is reported (D’Alessio & Allen, 2000), and may not always be easy to identify. As a result, news consumers often engage with distorted media without being aware of it and exhibit a lack of media bias awareness [ 21 ]. To address this issue, revealing the existence and nature of media bias can be an essential route to attaining media bias awareness and promoting informed and reflective news consumption [ 19 ]. For instance, visualizations may generally help to raise media bias awareness and lead to a more balanced news intake by warning people of potential biases [ 22 ], highlighting individual instances of bias [ 19 ], or facilitating the comparison of contents [ 2 , 23 ].

Although knowledge of how to communicate media bias effectively is crucial, visualizations and enhanced perception of media bias have only played a minor role in existing research, and several approaches have not yet been investigated. Therefore, this paper tests how effectively different strategies promote media bias awareness and may thereby also help understand common barriers to informed media consumption. We selected three major methods from related work [ 19 , 22 ] on the topic to further investigate them in one combined study: forewarning messages, text annotations, and political classifications. Theoretical foundations of bias messages and visualizations are still scarce, and suitable strategies in this domain have not been extensively tested in either visualization theory or bias theory.

Forewarning message

According to socio-psychological inoculation theory [ 24 ], it is possible to pre-emptively confer psychological resistance against persuasion attempts by exposing people to a message of warning character. It is similar to the process of immunizing against a virus by administering a weakened dose of the virus: A so-called inoculation message is expected to protect people from a persuasive attack by exposing them to weakened forms of the persuasion attempt. Due to the perceived threat of the forewarning inoculation message, people tend to strengthen their own position and are thus more resistant to influences of imminent persuasion attacks [ 25 ]. Therefore, one strategy to help people detect bias is to inform them, ahead of media consumption, that media bias may occur, thereby "forewarning" them against biased language influences. Such warnings are widely established in persuasion research and have been shown to be effective in different applied contexts [ 26 ]. Furthermore, such warnings also seem to help not only to protect attitudes against influences but also to determine the quality of a piece of information [ 27 – 29 ] and communicate the information accordingly [ 30 ]. For biased language, this may work specifically by focusing the reader’s attention on a universal motive to evaluate the accuracy of information while relying on the individual’s capacity to detect the bias when encountered [ 30 ] (Bolsen & Druckman, 2015).

Annotations

Other than informing people in advance about bias occurrence, a further approach is to inform them during reading, thereby increasing their awareness of biased language and providing direct help to detect it in an article. Recently, there has been a lot of research on media bias in information science, but it is mainly concerned with its identification and detection [ 31 – 34 ]. However, whereas some research on visualizing media bias in news articles to help detect bias is promising (here: flagging fake news as debunked [ 35 ]), other studies did not find such effects, potentially due to technical issues in accurately annotating single articles [ 19 ]. Still, such annotations offer a good prospect for enabling higher media bias awareness and more balanced news consumption. We show our annotation visualization in Fig 1 .

Fig 1. Example of the bias annotation "subjective term". The boxed annotation appeared when moving the cursor/finger over the highlighted text section.

Political classification

Another attempt to raise media bias awareness is a political classification of biased material after readers have dealt with it. An and colleagues [ 36 ] proposed an ideological left-right map on which media sources are politically classified. The authors suggest that showing a source’s political leaning helps readers question their attitudes and even promotes browsing for news articles with multiple viewpoints. Likewise, several other studies indicate that feedback on the political orientation of an article or a source may lead to more media bias awareness and a more balanced news consumption [ 19 ]. Additionally, exposing users to multiple diverse viewpoints on controversial topics encourages the development of more balanced viewpoints [ 23 ]. A study by Munson and colleagues (2013) further suggests that a feedback element indicating whether the user’s browsing history consists of biased news consumption leads to modestly more balanced news consumption. Based on these findings, we test whether the sole representation of a source’s leaning helps raise bias awareness among users, on the condition that the article is classified as politically skewed. We show our political classification bar in Fig 2 .

Fig 2. Example of an article classification as being politically left-oriented.

Partisan media bias awareness

Attempts to raise media bias awareness may be further complicated by the fact that the detection of media bias and the evaluation of news seem to depend on the political ideology of the beholder [ 37 – 41 ]. However, this partisan effect is not only apparent in neutral reporting: It is assumed that individuals perceive biased content that corresponds to their opinion as less biased [ 38 ] and biased content that contradicts their viewpoints as more biased [ 41 ].

These findings suggest that incongruence between the reader’s position and the news article’s position may increase media bias perception of the article, whereas congruence may decrease it. Thus, partisan media consumers may engage in motivated reasoning to overcome cognitive dissonance experienced when encountering media bias in any news article generally in line with their viewpoints [ 42 ]. According to Festinger [ 18 ], cognitive dissonance is generated when a person has two cognitive elements that are inconsistent with each other. This inconsistency is assumed to produce a feeling of mental discomfort. People who experience dissonance are motivated to reduce the inconsistency because they want to avoid or reduce this negative emotion.

Furthermore, Festinger notes that exposure to messages inconsistent with one’s beliefs could create cognitive dissonance, leading people to avoid or reduce negative emotions. In line with this notion, raising media bias awareness could increase experienced cognitive dissonance and thereby lead to even more partisan ratings of bias. Another explanation of the phenomenon of partisan bias ratings is varying norms about what content is considered appropriate in media coverage, dependent on one’s political identity [ 43 ]. Other researchers focus on the inattention to the quality of news and the motive to only support truthful news [ 44 ]. Both approaches lead us to expect the opposite result for the partisanship of media bias ratings under increased media bias awareness as created by our proposed visualizations: Partisanship of ratings should decrease rather than increase as people are reminded of more general norms and accuracy motives [ 27 ].

Study aims and hypotheses

This project aims to contribute to a deeper understanding of effective media bias communication. To this end, we create a set of bias visualizations revealing bias in different ways and test their effectiveness to raise awareness in an online experiment. Following the respective literature elaborated above for each technique, we would expect enhanced media bias awareness by all visualizations:

  • H1a: A forewarning message prior to news articles increases media bias awareness in presented articles.
  • H1b: Annotations in news articles increase media bias awareness in presented articles.
  • H1c: A political classification of news articles increases media bias awareness in presented articles.

Another goal of this study is to better understand the role of the reader’s political orientation in media bias awareness. In line with the findings of partisan media bias perception (hostile media effect; Vallone et al., 1985), we adopt the following hypothesis:

  • H2: Presented material will be rated less biased if consistent with individual political orientation.

Furthermore, we assume, following the attentional and normative explanation of partisanship in ratings rather than cognitive dissonance theory, the following effect:

  • H3: Bias visualizations will mitigate the effects of partisan bias ratings.

Participants

A total of 1002 participants from the US were recruited online via Prolific in August 2020. A final sample of N = 985 was included in the analysis (51% female; age: M = 32.67, SD = 11.95). The excluded participants did not fully complete the study or indicated in a seriousness check that their data should not be trusted. The target sample size was determined using a power analysis such that small effects ( f = 0.10) could be found with a power of .80 [ 45 ]. The online study was scheduled to last approximately 10 minutes, for which participants received £1.10 as payment.
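The reported power analysis can be reproduced approximately by a short numerical search over the noncentral F distribution. This is a sketch under the assumption of a single two-level main-effect contrast at α = .05; the original analysis may have used dedicated software (e.g., G*Power) with different assumptions.

```python
from scipy import stats

def anova_power(f_effect, n_total, k_groups=2, alpha=0.05):
    """Power of a one-way ANOVA F test for Cohen's f at a given total N."""
    df1 = k_groups - 1
    df2 = n_total - k_groups
    crit = stats.f.ppf(1 - alpha, df1, df2)  # critical F value under H0
    nc = f_effect ** 2 * n_total             # noncentrality parameter under H1
    return stats.ncf.sf(crit, df1, df2, nc)  # P(F > crit | H1)

# Smallest total N detecting f = 0.10 with power .80
n = 8
while anova_power(0.10, n) < 0.80:
    n += 1
# n is on the order of ~800 participants for this contrast
```

Since the study crosses several factors, the authors' exact target N may differ depending on which effect the power analysis was anchored to.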

Design and procedure

The experiment was conducted online in Qualtrics ( https://www.qualtrics.com ). It operated with fully informed consent, adhered to the Declaration of Helsinki, and was conducted in compliance with relevant laws and institutional guidelines, including those of the University of Konstanz ethics board. All participants confirmed their consent in written form and were informed in detail about the study, its aim, data processing, anonymization, and other background information.

After collecting informed consent and demographic information, we conducted an initial attitude assessment, which asked for participants’ general perception of the presented topic on three dimensions as well as its personal relevance. Next, participants read one randomly selected biased news article (either liberal or conservative), randomly supplemented by any combination of the visual aids (forewarning message, annotations, political classification). Thus, the study had a 2 (forewarning message: yes/no) × 2 (annotations: yes/no) × 2 (political classification: yes/no) between-subjects design. The article also varied between participants in both article position (liberal/conservative) and article topic (gun law/abortion) to assess the robustness and generalizability of the results. Finally, attitudes towards the topic were reassessed, followed by a seriousness check.
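The random assignment described above can be sketched as follows; the helper name and the record structure are ours for illustration, not from the study materials.

```python
import itertools
import random

random.seed(42)  # seeded only to make the illustration reproducible

# 2x2x2 combinations of the visual aids (forewarning, annotations, classification)
AID_COMBINATIONS = list(itertools.product([True, False], repeat=3))
# Crossed with article position and article topic
ARTICLES = list(itertools.product(["liberal", "conservative"], ["gun law", "abortion"]))

def assign_condition(participant_id):
    """Draw one aid combination and one article for a participant (hypothetical helper)."""
    forewarning, annotations, classification = random.choice(AID_COMBINATIONS)
    position, topic = random.choice(ARTICLES)
    return {"id": participant_id, "forewarning": forewarning,
            "annotations": annotations, "classification": classification,
            "position": position, "topic": topic}

assignments = [assign_condition(i) for i in range(985)]
```

With eight aid cells crossed with four articles, each participant lands in one of 32 combinations, which matches the fully crossed between-subjects design described above.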

Study material

Visual aids.

Forewarning message . The forewarning message consisted of a short warning and was displayed directly before the news article. It read: "Beware of biased news coverage. Read consciously. Don’t be fooled. The term ’media bias’ refers to, in part, non-neutral tonality and word choice in the news. Media bias can consciously and unconsciously result in a narrow and one-sided point of view. How a topic or issue is covered in the news can decisively impact public debates and affect our collective decision making." In addition, an example of one-sided language was shown, and readers were encouraged to consume news consciously.

Annotations . Annotations were directly integrated into the news texts. Biased words or sentences were highlighted [ 46 ], and by hovering over the marked sections, a short explanation of the respective type of bias appeared. For example, when moving the cursor over a very one-sided term, the following annotation would be displayed: "Subjective term: Language that is skewed by feeling, opinion or taste." Annotations were based on ratings of six members of our research group, where phrases had to be nominated by at least three raters. The final annotations can be found in the supplementary preregistration repository accompanying this article at https://osf.io/e95dh/?view_only=d2fb5dc2d64741e393b30b9ee6cc7dc1 (non-anonymous link is made accessible in case of acceptance). We followed the guidelines applied in existing research to teach annotators about bias and reach higher-quality annotations [ 47 ]. In future work, we will further increase the number of raters, as we address in the discussion.
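The nomination rule above (a phrase is annotated only if at least three of the six raters flag it) can be sketched with hypothetical nominations; the phrases below are invented examples, not the study's annotations.

```python
from collections import Counter

# Hypothetical nominations: each set holds the phrases one rater flagged as biased.
nominations = [
    {"illegal aliens", "flooded the country", "radical agenda"},
    {"illegal aliens", "flooded the country"},
    {"illegal aliens"},
    {"flooded the country"},
    {"radical agenda"},
    set(),
]

counts = Counter(phrase for rater in nominations for phrase in rater)
# Keep only phrases nominated by at least three of the six raters;
# "radical agenda" (two votes) is dropped.
annotated = {phrase for phrase, n in counts.items() if n >= 3}
```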

Political classification . A political classification in the form of a spectrum from left to right indicated the source’s political ideology. It was displayed immediately after the presented article and was based on the rating of the website AllSides.

We used four biased news articles that varied in topic and political position. Each participant was assigned to one article. The two topics covered were gun law and the debate on abortion, with either a liberal or conservative article position. The topics were selected because we considered them controversial issues in the United States that most people are presumably familiar with. To ensure that articles were biased, they were taken from sources deemed extreme according to the AllSides classification. Conservative texts were taken from Breitbart.com ; liberal articles were from Huffpost.com and Washingtonpost.com . We also conducted a manipulation check to determine whether participants perceived political article positions in line with our assumptions: Directly after reading the article, participants were asked to classify its political stance on a visual analogue scale (–5 = very conservative to 5 = very liberal ). To ensure comparability, articles were shortened to approximately the same length, and the respective sources were not indicated. All article texts used are listed together with their annotations in the supplementary preregistration repository accompanying this article (see the link above).

Media bias awareness

Five semantic differentials assessed media bias awareness regarding fairness, partialness, acceptableness, trustworthiness, and persuasiveness [ 48 – 50 ] on visual analogue scales (" I think the presented news article was… "). Media bias awareness was established by averaging the five items, recoded to range from –5 = low bias awareness to 5 = high bias awareness ( α = .88).
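The score construction and the reported internal consistency (Cronbach's α) can be sketched as follows; the ratings below are fabricated for illustration only.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a participants x items matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Hypothetical ratings on the five differentials (fairness, partialness,
# acceptableness, trustworthiness, persuasiveness), recoded to -5..5.
ratings = np.array([[ 4,  3,  4,  5,  4],
                    [-2, -1, -2, -3, -2],
                    [ 1,  0,  1,  2,  1],
                    [ 3,  3,  2,  4,  3]])

awareness = ratings.mean(axis=1)  # one bias-awareness score per participant
alpha = cronbach_alpha(ratings)   # high for these internally consistent rows
```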

Political orientation

The variable political orientation was measured on a visual analogue scale ranging from –5 = very conservative to 5 = very liberal , introduced with the question " Do you consider yourself to be liberal , conservative , or somewhere in between ?" adopted from Spinde and colleagues [ 19 , 51 ]. Likewise, we assessed the perceived stance of the read article on the same scale, introduced with the statement " I think the presented news article was… ".

Attitudes towards article topic

Attitudes were assessed before and after the article presentation by a three-item semantic differential scale ( wrong - right , unacceptable - acceptable , bad - good ) evaluating the two topics (" Generally , laws restricting abortion/ the use of guns are…"; α = .99). The three items were averaged per topic to yield a score ranging from –5 = very conservative attitude to 5 = very liberal attitude. In addition, we assessed topic involvement with one item before the article presentation (" To me personally , laws restricting the use of guns/ abortions are… irrelevant-relevant ") on a scale from –5 to 5.

Statistical analysis

To test the effects of the visual aids on media bias perception, we used ANOVAs with effect-coded factors in a 2 (forewarning message: yes/no) × 2 (annotations: yes/no) × 2 (political classification: yes/no) × 2 (article position: liberal/conservative) × 2 (article topic: gun law/abortion) between-subjects design. For analyses testing political ideology effects, this was generalized to a GLM with standardized political orientation as an additional interacting variable, followed by a simple effects analysis. The same model was applied to the second attitude rating, with the first attitude rating and topic involvement as covariates, to assess attitude change. This project and the analyses were preregistered with the DOI https://osf.io/e95dh/?view_only=d2fb5dc2d64741e393b30b9ee6cc7dc1 (non-anonymous link is made accessible in case of acceptance). All study materials, code, and data are available there.
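Effect coding means each two-level factor enters the model as -1/+1 rather than 0/1, so coefficients are deviations from the grand mean. A minimal regression sketch with simulated data (the effect sizes and sample here are invented, not the study's):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400

# Effect-coded factors: -1/+1 instead of 0/1 dummy coding
forewarning = rng.choice([-1.0, 1.0], n)
annotations = rng.choice([-1.0, 1.0], n)

# Simulated bias rating with invented main effects plus noise
bias_rating = 0.3 * forewarning + 0.5 * annotations + rng.normal(0.0, 1.0, n)

# Design matrix: intercept (grand mean), two main effects, their interaction
X = np.column_stack([np.ones(n), forewarning, annotations,
                     forewarning * annotations])
beta, *_ = np.linalg.lstsq(X, bias_rating, rcond=None)
# beta[0] estimates the grand mean; beta[1] and beta[2] the main effects
```

With effect coding, main effects and interactions are orthogonal in a balanced design, which is why the ANOVA F tests for each factor can be read independently.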

Manipulation check and other effects on perceived political stance of the article

Overall, the positions of the political articles were perceived as designed ( article position : F (1, 953) = 528.67, p < .001, ηp² = .357): Articles assigned a liberal position were perceived as more liberal ( M = 1.60, SD = 2.70), whereas conservative articles were rated as more conservative ( M = –1.98, SD = 2.26). This difference between the conservative and the liberal article was more pronounced when a forewarning message ( F (1, 953) = 7.33, p = .007, ηp² = .008), annotations ( F (1, 953) = 3.96, p = .047, ηp² = .004), or the political classification was present ( F (1, 953) = 9.12, p = .003, ηp² = .009; see Fig 3 ). The combination of forewarning and classification further increased the difference ( F (1, 953) = 5.28, p = .022, ηp² = .006).

Fig 3. Across all conditions, liberal articles were perceived to be more liberal and conservative articles more conservative. The interventions increased the differences between the two ratings. Dots represent means, and lines are standard deviations.
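The reported effect sizes can be recovered from the F statistics and their degrees of freedom via ηp² = F·df1 / (F·df1 + df2), e.g. for the article-position effect reported above:

```python
def partial_eta_squared(f_value, df1, df2):
    """Partial eta squared recovered from a reported F test."""
    return f_value * df1 / (f_value * df1 + df2)

# Article-position effect reported above: F(1, 953) = 528.67
eta_p2 = partial_eta_squared(528.67, 1, 953)
print(round(eta_p2, 3))  # 0.357, matching the reported value
```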

Effects of visual aids on media bias perceptions

Testing the effects of the visual aids on media bias perceptions in general, we found that both the forewarning message ( F (1, 953) = 8.29, p = .004, ηp² = .009) and the annotations ( F (1, 953) = 24.00, p < .001, ηp² = .025) increased perceived bias, as we show in Fig 4 . However, we found no effect of the political classification ( F (1, 953) = 2.56, p = .110, ηp² = .003) and no systematic higher-order interaction involving any of the manipulations ( p ≥ .085, ηp² ≤ .003). Moreover, there were differences in media bias perceptions of the specific articles ( topic x article position : F (1, 953) = 24.44, p < .001, ηp² = .025). The two main effects were by and large robust when tested per item of the media bias perception scale (forewarning had no significant effect on partialness and persuasiveness) or in a MANOVA ( forewarning : F (5, 949) = 5.22, p < .001, ηp² = .027; annotation : F (5, 949) = 6.25, p < .001, ηp² = .032).

Fig 4. The forewarning message, as well as the annotations, increased media bias awareness. Dots represent means, and lines are standard deviations.

Partisan media bias ratings

When considering self-indicated political orientation and its fit to the article position , we found that media bias was perceived less in articles consistent with the reader’s political orientation ( F (1, 921) = 113.37, p < .001, ηp² = .110): Liberal readers rated conservative articles as more biased than conservative readers did (β = 0.32; p < .001; 95% CI [0.25; 0.38]). Conversely, liberal articles were rated as less biased by liberals (β = –0.20; p < .001; 95% CI [–0.27; –0.13]), indicating a partisan bias rating on both sides of the political aisle, as we show in Fig 5 .

Fig 5. Bias awareness increases when the article is not aligned with the person’s political position. Shades show 95% confidence intervals of the regression estimation.

This partisan rating of articles was unaffected by forewarning ( F (1, 921) = 1.52, p = .218, ηp² = .002), annotations ( F (1, 921) = 0.26, p = .612, ηp² < .001), and political classification ( F (1, 921) = 2.72, p = .100, ηp² = .003). Yet, with increasing liberalness of the reader, the combination of forewarning and annotation was slightly less effective for the detection of bias ( F (1, 921) = 4.19, p = .041, ηp² = .005). Furthermore, there were some topic-related differences irrelevant to the current hypotheses: higher bias was perceived for the gun law articles ( topic : F (1, 921) = 11.32, p < .001, ηp² = .012), and specifically so for the liberal one ( topic x article position : F (1, 921) = 23.86, p < .001, ηp² = .025), with an uninterpretable minor higher-order interaction ( forewarning x annotation x classification x political orientation x topic : F (1, 921) = 4.10, p = .043, ηp² = .004).

Effects on attitudes

By and large, attitudes on the topics were not affected by the experiment: While attitudes after reading the article were in line with prior attitudes ( F (1, 919) = 2415.42, p < .001, ηp² = .724) and individual political orientation ( F (1, 919) = 34.54, p < .001, ηp² = .036), neither the article position ( F (1, 919) = 2.63, p = .105, ηp² = .003) nor any of the visual aids had a general impact ( p ≥ .084, ηp² ≤ .003). Likewise, none of the aids interacted with the factor article position ( p ≥ .298, ηp² ≤ .001). There were only some additional minor topic-specific significant effects of the annotation combined with the forewarning ( F (1, 919) = 4.77, p = .029, ηp² = .005) and an increased liberalness of attitude with higher topic involvement ( F (1, 919) = 4.31, p = .038, ηp² = .005), which we want to disclose but deem irrelevant to our hypotheses and research questions.

Discussion

In this study, we tested different techniques to communicate media bias. Our experiment revealed that presenting a forewarning message and text annotations enhanced awareness of biased reporting, while a political classification did not. All three methods (forewarning, annotation, political classification) impacted the political ideology rating of the presented article. Furthermore, we found evidence for partisan bias ratings: Participants rated articles that agreed with their general orientation as less biased than articles from the other side of the political spectrum. The positive effect of the forewarning message on media bias ratings, albeit small, is in line with other findings of successful appeals to and reminders of accuracy motives [ 30 ]. In addition, it accords with the notion that reflecting on media bias involves some effort [ 44 , 52 ], so motivating people to engage in this process can help them detect bias.

Regarding the effects of in-text annotations, our finding differs from a previous study of a similar design [ 19 ], which did not identify the effect, potentially due to a lack of power and less optimal annotations. While news consumers may generally identify outright false or fake news [ 53 ], detecting subtle biases can profit from such aids. This indicates that bias detection is far from ideal, particularly in more ambiguous cases. As the in-text annotation and forewarning message effects were independent of each other, participants seemingly did not profit from the combination of aids.

On the other hand, the political classification only improved the detection of the political alignment of the text (which was also achieved by both other methods) but did not help readers detect biased language. Consequently, the detection of biased language and media bias itself does not appear to be directly related to an article’s political affiliation.

Our study also replicates findings that the detection of media bias and fake news is affected by individual convictions [ 30 , 40 , 42 ]: We found that participants could detect media bias more readily if there was an incongruence between the participant’s and the article’s political ideology. Such a connection may be particularly true for detecting more subtle media biases and holding an article in high regard, compared to successfully identifying outright fake news, for which a reversed effect has been found in some instances (Pennycook & Rand, 2019).

In addition, the interventions were ineffective in lowering such partisan effects. Similarly, attitudes remained relatively stable and were not affected by any of the visual aids. Making biased language more visible and reminding people of potential biases apparently could not help them overcome their ideology when rating the acceptance of an article, as long as there is no clear indication that the information presented in the article is fake rather than merely biased. Likewise, the forewarning message successfully altered the motivation to look for biased language but did not decrease the effects of political identity on the rating: While being able to detect the political affiliation of an article, participants seemingly were not capable of separating the stance of the article from its biased use of language, even when prompted to do so. In the same vein, effects were not more pronounced when the political classification was additionally visualized. This points to the notion that the stance is also detected without help (after all, while the manipulations increased the distinction between liberal and conservative articles, the article’s position was reliably identified even without any supporting material) and that partisan ratings are not a deliberate derogatory act. Furthermore, the problem of partisan bias ratings also did not increase with increased media bias awareness via the manipulations, as could have been expected from cognitive dissonance theory.

For future work, we will improve the representativeness of the surveyed sample, which limits far-reaching generalizations at this point. Additionally, we will increase the generalizability by employing articles that are politically neutral or exhibit comparatively low bias. Both forewarning and annotations may have increased ratings in this study, but it is unclear whether they would also aid in identifying low-bias articles and thus lead to lower ratings. Improving the quality of our annotations by including more annotators is an additional step towards exhausting potential findings. We will also investigate how combinations of the visualizations and strategies work together and conduct expert interviews to determine which applications would be of interest in an applied scenario. Still, the current study shows that two of our interventions raised attention to biased language in media, giving a first insight into the as yet sparsely tested field of presenting media bias to news consumers.

Furthermore, a great challenge lies in translating these experimental interventions into applications used by news consumers in the field. While forewarning messages could be implemented quite simply in the context of other media, for instance, as a disclaimer (see [ 30 ]), we hope that automated classifiers on the sentence level will prove to be an effective tool to create instant annotation aids, for example, as browser add-ons. Even though recent studies show promising accuracy improvements for such classifiers [ 31 , 32 ], we note that much research still needs to be devoted to finding stable and reliable markers of biased language. Future work also has great potential to consider these strategies as teaching tools to train users in identifying bias without visual aids. This could offer a framework for a large-scale study in which additional variables measuring previous news consumption habits could be employed.

In the context of our digitalized world, where news and information of differing quality are available everywhere, our results provide important insights for media bias research. In the present study, we showed that forewarning messages and annotations increased media bias awareness among readers of selected news articles. We also replicated the well-known hostile media effect, whereby people are more aware of bias in articles from the opposing side of the political spectrum. However, our experiment revealed that the visualizations could not reduce this effect; rather, partisan ratings remained unaffected. In sum, digital tools uncovering and visualizing media bias may help mitigate the negative effects of media bias in the future.

Funding Statement

This work was supported by the German Research Foundation (DFG) ( https://www.dfg.de/ ) under Grant 441541975 and by the German Research Foundation's Centre of Excellence 2117 "Centre for the Advanced Study of Collective Behaviour" (ID: 422037984). It was also supported by the Hanns-Seidel Foundation ( https://www.hss.de/ ) and the German Academic Exchange Service (DAAD) ( https://www.daad.de/de/ ). None of the funders played any role in the study design or in any publication-related decisions.

Data Availability

Data are available at https://osf.io/e95dh/ .



Open Access

Peer-reviewed

Research Article

How do we raise media bias awareness effectively? Effects of visualizations to communicate bias

Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

* E-mail: [email protected]

Affiliations Department of Computer and Information Science, University of Konstanz, Konstanz, Germany, School of Electrical, Information and Media Engineering, University of Wuppertal, Wuppertal, Germany


Roles Methodology, Writing – original draft

Affiliation Department of Psychology, University of Konstanz, Konstanz, Germany

Roles Funding acquisition, Supervision

Roles Formal analysis, Funding acquisition, Methodology, Supervision, Writing – review & editing

  • Timo Spinde, 
  • Christin Jeggle, 
  • Magdalena Haupt, 
  • Wolfgang Gaissmaier, 
  • Helge Giese


  • Published: April 13, 2022
  • https://doi.org/10.1371/journal.pone.0266204


Media bias has a substantial impact on individual and collective perception of news. Effective communication that may counteract its potential negative effects still needs to be developed. In this article, we analyze how to facilitate the detection of media bias with visual and textual aids in the form of (a) a forewarning message, (b) text annotations, and (c) political classifiers. In an online experiment, we randomized 985 participants to receive a biased liberal or conservative news article in any combination of the three aids. Meanwhile, their subjective perception of media bias in this article, attitude change, and political ideology were assessed. Both the forewarning message and the annotations increased media bias awareness, whereas the political classification showed no effect. Incongruence between an article's political position and individual political orientation also increased media bias awareness. Visual aids did not mitigate this effect. Likewise, attitudes remained unaltered.

Citation: Spinde T, Jeggle C, Haupt M, Gaissmaier W, Giese H (2022) How do we raise media bias awareness effectively? Effects of visualizations to communicate bias. PLoS ONE 17(4): e0266204. https://doi.org/10.1371/journal.pone.0266204

Editor: Rogis Baker, Universiti Pertahanan Nasional Malaysia, MALAYSIA

Received: December 14, 2021; Accepted: March 16, 2022; Published: April 13, 2022

Copyright: © 2022 Spinde et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: Data are available at https://osf.io/e95dh/ .

Funding: This work was supported by the German Research Foundation (DFG) ( https://www.dfg.de/ ) under Grant 441541975 and by the German Research Foundation's Centre of Excellence 2117 "Centre for the Advanced Study of Collective Behaviour" (ID: 422037984). It was also supported by the Hanns-Seidel Foundation ( https://www.hss.de/ ) and the German Academic Exchange Service (DAAD) ( https://www.daad.de/de/ ). None of the funders played any role in the study design or in any publication-related decisions.

Competing interests: The authors have declared that no competing interests exist.

Introduction

The Internet age has a significant impact on today's news communication: It allows individuals to access news and information from an ever-increasing variety of sources, at any time, on any subject. Regardless of journalistic standards, media outlets with a wide reach have the power to affect public opinion and shape collective decision-making processes [ 1 ]. However, it is well known that the wording and selection of news in media coverage are often biased and provide limited viewpoints [ 2 ], commonly referred to as media bias . According to Domke and colleagues [ 3 ], media bias is a structural, often wilful defect in news coverage that potentially influences public opinion. Labeling named entities with terms that are ambiguous in the concepts they allude to (e.g., "illegal immigrants" and "illegal aliens" [ 4 ]), or combining concepts beyond their initial contexts into figurative speech that carries a positive or negative association ("a wave of immigrants flooded the country"), can induce bias. Still, the conceptualization of media bias is complex since biased and balanced reporting cannot be distinguished incisively [ 5 ]. Many definitions exist, and media bias, in general, has been researched from various angles, such as psychology [ 6 ], computer science [ 7 ], linguistics [ 8 ], economics [ 9 ], and political science [ 10 ]. Therefore, we believe advancement in media bias communication is relevant for multiple scientific areas.

Previous research shows the effects of media bias on individual and public perception of news events [ 6 ]. Since the media are citizens’ primary source of political information [ 11 ], associated bias may affect the political beliefs of the audience, party preferences [ 12 ] and even alter voting behavior [ 13 ]. Moreover, exposure to biased information can lead to negative societal outcomes, including group polarization, intolerance of dissent, and political segregation [ 14 ]. It can also affect collective decision-making [ 15 ]. The implications of selective exposure theory intensify the severity of biased news coverage: Researchers observed long ago that people prefer to consume information that fits their worldview and avoid information that challenges these beliefs [ 16 ]. By selecting only confirmatory information, one’s own opinion is reaffirmed, and there is no need to re-evaluate existing stances [ 17 ]. In this way, the unpleasant feeling of cognitive dissonance is avoided [ 18 ]. Isolation in one’s own filter bubble or echo chamber confirms internal biases and might lead to a general decrease in the diversity of news consumption [ 14 ]. This decrease is further exacerbated by recent technological developments like personalized overview features of, e.g., news aggregators [ 19 ]. How partisans select and perceive political news is thus an important question in political communication research [ 20 ]. Therefore, this study tries to test ways to increase the awareness of media bias (which might mitigate its negative impact) and the partisan evaluation of the media through transparent bias communication.

Media bias communication

Media bias occurs in various forms, for example, in whether or how a topic is reported (D'Alessio & Allen, 2000), and may not always be easy to identify. As a result, news consumers often engage with distorted media without being aware of it and exhibit a lack of media bias awareness [ 21 ]. To address this issue, revealing the existence and nature of media bias can be an essential route to attaining media bias awareness and promoting informed and reflective news consumption [ 19 ]. For instance, visualizations may generally help to raise media bias awareness and lead to a more balanced news intake by warning people of potential biases [ 22 ], highlighting individual instances of bias [ 19 ], or facilitating the comparison of contents [ 2 , 23 ].

Although knowledge of how to communicate media bias effectively is crucial, visualizations and enhanced perception of media bias have played only a minor role in existing research, and several approaches have not yet been investigated. Therefore, this paper tests how effectively different strategies promote media bias awareness and may thereby also help to understand common barriers to informed media consumption. We selected three major methods from related work [ 19 , 22 ] on the topic to investigate together in one combined study: forewarning messages, text annotations, and political classifications. Theoretical foundations of bias messages and visualizations are still scarce, and neither visualization theory nor bias theory has extensively tested suitable strategies in this domain.

Forewarning message.

According to socio-psychological inoculation theory [ 24 ], it is possible to pre-emptively confer psychological resistance against persuasion attempts by exposing people to a message of warning character. This is similar to immunizing against a virus by administering a weakened dose of it: A so-called inoculation message is expected to protect people from a persuasive attack by exposing them to weakened forms of the persuasion attempt. Due to the perceived threat of the forewarning inoculation message, people tend to strengthen their own position and are thus more resistant to the influence of imminent persuasion attacks [ 25 ]. Therefore, one strategy to help people detect bias is to prepare them ahead of media consumption for the possibility that media bias may occur, thereby "forewarning" them against the influence of biased language. Such warnings are well established in persuasion research and have been shown to be effective in different applied contexts [ 26 ]. Furthermore, such warnings seem to help not only to protect attitudes against influence but also to determine the quality of a piece of information [ 27 – 29 ] and to communicate the information accordingly [ 30 ]. For biased language, this may work specifically by focusing the reader's attention on a universal motive to evaluate the accuracy of information while relying on the individual's capacity to detect the bias when encountered ([ 30 ]; Bolsen & Druckman, 2015).

Annotations.

Other than informing people in advance about the occurrence of bias, a further approach is to inform them during reading, thereby increasing their awareness of biased language and providing direct help in detecting it within an article. Recently, there has been much research on media bias in information science, but it is mainly concerned with bias identification and detection [ 31 – 34 ]. However, whereas some research on the effects of visualizing media bias in news articles is promising (here: flagging fake news as debunked [ 35 ]), other work did not find such effects, potentially due to technical issues in accurately annotating single articles [ 19 ]. Still, such visualizations offer a good prospect for enabling higher media bias awareness and more balanced news consumption. We show our annotation visualization in Fig 1 .


Example of the bias annotation "subjective term". Boxed annotation appeared by moving the cursor/finger over the highlighted text section.

https://doi.org/10.1371/journal.pone.0266204.g001

Political classification.

Another attempt to raise media bias awareness is a political classification of biased material after readers have dealt with it. An and colleagues [ 36 ] proposed an ideological left-right map on which media sources are politically classified. The authors suggest that showing a source's political leaning helps readers question their attitudes and even promotes browsing for news articles with multiple viewpoints. Likewise, several other studies indicate that feedback on the political orientation of an article or a source may lead to more media bias awareness and more balanced news consumption [ 19 ]. Additionally, exposing users to multiple diverse viewpoints on controversial topics encourages the development of more balanced viewpoints [ 23 ]. A study by Munson and colleagues (2013) further suggests that a feedback element indicating whether the user's browsing history reflects biased news consumption leads to modestly more balanced news consumption. Based on these findings, we test whether the sole representation of a source's leaning helps raise bias awareness among users, on the condition that the article is classified as politically skewed. We show our political classification bar in Fig 2 .


Example of an article classification as being politically left-oriented.

https://doi.org/10.1371/journal.pone.0266204.g002

Partisan media bias awareness

Attempts to raise media bias awareness may be further complicated by the fact that the detection of media bias and the evaluation of news seem to depend on the political ideology of the beholder [ 37 – 41 ]. This partisan effect is not only apparent in neutral reporting: Individuals are assumed to perceive biased content that corresponds to their opinion as less biased [ 38 ] and biased content that contradicts their viewpoints as more biased [ 41 ].

These findings suggest that incongruence between the reader’s position and the news article’s position may increase media bias perception of the article, whereas congruence may decrease it. Thus, partisan media consumers may engage in motivated reasoning to overcome cognitive dissonance experienced when encountering media bias in any news article generally in line with their viewpoints [ 42 ]. According to Festinger [ 18 ], cognitive dissonance is generated when a person has two cognitive elements that are inconsistent with each other. This inconsistency is assumed to produce a feeling of mental discomfort. People who experience dissonance are motivated to reduce the inconsistency because they want to avoid or reduce this negative emotion.

Furthermore, Festinger notes that exposure to messages inconsistent with one's beliefs can create cognitive dissonance, leading people to avoid such messages in order to prevent or reduce negative emotions. In line with this notion, raising media bias awareness could increase experienced cognitive dissonance and thereby lead to even more partisan ratings of bias. Another explanation of the phenomenon of partisan bias ratings is that norms about what content is considered appropriate in media coverage vary with one's political identity [ 43 ]. Other researchers focus on inattention to the quality of news and the motive to support only truthful news [ 44 ]. Both approaches lead us to expect the opposite result for the partisanship of media bias ratings under increased media bias awareness as created by our proposed visualizations: The partisanship of ratings should decrease rather than increase as people are reminded of more general norms and accuracy motives [ 27 ].

Study aims and hypotheses

This project aims to contribute to a deeper understanding of effective media bias communication. To this end, we create a set of bias visualizations revealing bias in different ways and test their effectiveness in raising awareness in an online experiment. Following the respective literature elaborated above for each technique, we expect enhanced media bias awareness from all visualizations:

  • H1a: A forewarning message prior to news articles increases media bias awareness in presented articles.
  • H1b: Annotations in news articles increase media bias awareness in presented articles.
  • H1c: A political classification of news articles increases media bias awareness in presented articles.

Another goal of this study is to better understand the role of the reader's political orientation in media bias awareness. In line with the findings on partisan media bias perception (hostile media effect; Vallone et al., 1985), we adopt the following hypothesis:

  • H2: Presented material will be rated less biased if consistent with individual political orientation.

Furthermore, we assume, following the attentional and normative explanation of partisanship in ratings rather than cognitive dissonance theory, the following effect:

  • H3: Bias visualizations will mitigate the effects of partisan bias ratings.

Participants

A total of 1002 participants from the US were recruited online via Prolific in August 2020. A final sample of N = 985 was included in the analysis (51% female; age: M = 32.67, SD = 11.95). The excluded participants did not fully complete the study or indicated in a seriousness check that their data should not be trusted. The target sample size was determined using a power analysis such that small effects ( f = 0.10) could be found with a power of .80 [ 45 ]. The online study was scheduled to last approximately 10 minutes, for which participants received £1.10 as payment.
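As a rough illustration of such a power analysis, the required total sample size for detecting a small effect ( f = 0.10) with power .80 can be approximated with a normal approximation for a 1-df F test. The authors presumably used dedicated power-analysis software; the helper below and its name are purely our own sketch:

```python
from statistics import NormalDist

def approx_total_n(f, alpha=0.05, power=0.80):
    """Rough total-N approximation for a 1-df F test with effect size f.

    Uses lambda = (z_{1-alpha/2} + z_{power})**2 as the required
    noncentrality and N = lambda / f**2 (normal approximation).
    """
    z = NormalDist().inv_cdf
    lam = (z(1 - alpha / 2) + z(power)) ** 2
    return lam / f ** 2
```

For f = 0.10 this yields a total N of roughly 785, which the recruited sample of 1002 comfortably exceeds.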

Design and procedure

The experiment was conducted online in Qualtrics ( https://www.qualtrics.com ). It operated with fully informed consent, adheres to the Declaration of Helsinki, and was conducted in compliance with relevant laws and institutional guidelines, including the ones of the University of Konstanz ethics board. All participants confirmed their consent in written form and were informed in detail about the study, the aim, data processing, anonymization, and other background information.

After collecting informed consent and demographic information, we conducted an initial attitude assessment, asking for participants' general perception of the presented topic on three dimensions as well as its personal relevance. Next, participants read one randomly selected biased news article (either liberal or conservative), randomly supplemented by any combination of the visual aids (forewarning message, annotations, political classification). Thus, the study had a 2 (forewarning message: yes/no) x 2 (annotations: yes/no) x 2 (political classification: yes/no) between-subjects design. The article also varied between participants in both article position (liberal/conservative) and article topic (gun law/abortion) to assess partisan effects and the generalizability of the results. Finally, attitudes towards the topic were reassessed, followed by a seriousness check.

Study material

Visual aids..

Forewarning message . The forewarning message consisted of a short warning and was displayed directly before the news article. It read: "Beware of biased news coverage. Read consciously. Don't be fooled. The term 'media bias' refers to, in part, non-neutral tonality and word choice in the news. Media bias can consciously and unconsciously result in a narrow and one-sided point of view. How a topic or issue is covered in the news can decisively impact public debates and affect our collective decision making." In addition, an example of one-sided language was shown, and readers were encouraged to consume news consciously.

Annotations . Annotations were directly integrated into the news texts. Biased words or sentences were highlighted [ 46 ], and hovering over the marked sections displayed a short explanation of the respective type of bias. For example, moving the cursor over a very one-sided term displayed the following annotation: "Subjective term: Language that is skewed by feeling, opinion or taste." Annotations were based on ratings of six members of our research group; phrases had to be nominated by at least three raters. The final annotations can be found in the supplementary preregistration repository accompanying this article at https://osf.io/e95dh/?view_only=d2fb5dc2d64741e393b30b9ee6cc7dc1 . We followed the guidelines applied in existing research to teach annotators about bias and reach higher-quality annotations [ 47 ]. In future work, we will further increase the number of raters, as we address in the discussion.

Political classification . A political classification in the form of a spectrum from left to right indicated the source's political ideology. It was displayed immediately after the presented article and was based on the rating of the website AllSides.

We used four biased news articles that varied in topic and political position. Each participant was assigned to one article. The two topics covered were gun law and the debate on abortion, with either a liberal or conservative article position. Topics were selected because we considered them controversial issues in the United States that most people are presumably familiar with. To ensure that articles were biased, they were taken from sources deemed extreme according to the Allsides classification. Conservative texts were taken from Breitbart.com ; liberal articles were from Huffpost.com and Washingtonpost.com . We also conducted a manipulation check to determine whether participants perceived political article positions in line with our assumptions: Just after reading the article, participants were asked to classify its political stance on a visual analogue scale (-5 = very liberal to 5 = very conservative ). To ensure comparability, articles were shortened to approximately the same length, and respective sources were not indicated. All article texts used are listed together with their annotations in the supplementary preregistration repository accompanying this article (we show the link on the previous page).

Media bias awareness.

Five semantic differentials assessed media bias awareness on fairness, partialness, acceptableness, trustworthiness, and persuasiveness [ 48 – 50 ] on visual analogue scales (" I think the presented news article was… "). Media bias awareness was established by averaging the five items and recoded to range from -5 = low bias awareness to 5 = high bias awareness ( α = .88).
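The scale construction described above (averaging five semantic differentials and checking internal consistency, α = .88) can be sketched in plain Python. The function names and the toy data layout are our assumptions for illustration, not the authors' analysis code:

```python
from statistics import pvariance

def bias_awareness(ratings):
    """Average the five semantic-differential items (each on a -5..5 scale)."""
    return sum(ratings) / len(ratings)

def cronbach_alpha(items):
    """Cronbach's alpha; items is a list of per-item response vectors."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]  # per-person scale totals
    item_var = sum(pvariance(col) for col in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))
```

For perfectly correlated items, `cronbach_alpha` returns 1.0; the reported α = .88 indicates high, but not perfect, internal consistency of the five items.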

Political orientation.

The variable political orientation was measured on a visual analogue scale (–5 = very conservative to 5 = very liberal ), introduced with the question " Do you consider yourself to be liberal , conservative , or somewhere in between ?", adopted from Spinde and colleagues [ 19 , 51 ]. Likewise, we assessed the perceived stance of the read article on the same scale, introduced with the statement " I think the presented news article was… ".

Attitudes towards article topic.

Attitudes were assessed before and after the article presentation by a three-item semantic differential scale ( wrong - right , unacceptable - acceptable , bad - good ) evaluating the two topics (" Generally , laws restricting abortion/ the use of guns are… "; α = .99). The three items were averaged per topic to yield a score from –5 = very conservative attitude to 5 = very liberal attitude . In addition, we assessed topic involvement with one item before the article presentation (" To me personally , laws restricting the use of guns/ abortions are… irrelevant-relevant ") on a scale from –5 to 5.

Statistical analysis

To test the effects of the visual aids on media bias perception, we used ANOVAs with effect-coded factors in a 2 (forewarning message: yes/no) x 2 (annotations: yes/no) x 2 (political classification: yes/no) x 2 (article position: liberal/conservative) x 2 (article topic: gun law/abortion) between-subjects design. For analyses testing effects of political ideology, this was generalized to a GLM with standardized political orientation as an additional interacting variable, followed by a simple effects analysis. The same model was applied to the second attitude rating, with the first attitude rating and topic involvement as covariates, to assess attitude change. This project and the analyses were preregistered at https://osf.io/e95dh/?view_only=d2fb5dc2d64741e393b30b9ee6cc7dc1 . All study materials, code, and data are available there.
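The preregistered models are effect-coded factorial ANOVAs with many terms, but the core computation behind every reported F value can be illustrated with a minimal one-way F statistic (e.g., comparing bias ratings with vs. without a given aid). This stdlib-only function is a simplified sketch, not the actual analysis pipeline:

```python
from statistics import mean

def one_way_f(groups):
    """One-way ANOVA F statistic for a list of groups of ratings."""
    grand = mean(v for g in groups for v in g)
    ss_between = ss_within = 0.0
    for g in groups:
        m = mean(g)
        ss_between += len(g) * (m - grand) ** 2   # variance explained by group
        ss_within += sum((v - m) ** 2 for v in g)  # residual variance
    df_between = len(groups) - 1
    df_within = sum(len(g) for g in groups) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)
```

A large F indicates that between-group differences (e.g., forewarning present vs. absent) are large relative to within-group noise; the factorial design partitions this logic across all factors and their interactions.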

Manipulation check and other effects on perceived political stance of the article

Overall, the positions of the political articles were perceived as designed ( article position : F (1, 953) = 528.67, p < .001, ηp² = .357): Articles assigned a liberal position were perceived as more liberal ( M = 1.60, SD = 2.70), whereas conservative articles were rated as more conservative ( M = –1.98, SD = 2.26). This difference between the conservative and the liberal article was more pronounced when a forewarning message ( F (1, 953) = 7.33, p = .007, ηp² = .008), annotations ( F (1, 953) = 3.96, p = .047, ηp² = .004), or the political classification was present ( F (1, 953) = 9.12, p = .003, ηp² = .009; see Fig 3 ). The combination of forewarning and classification further increased the difference ( F (1, 953) = 5.28, p = .022, ηp² = .006).


Across all conditions, liberal articles were perceived to be more liberal and conservative articles more conservative. The interventions increased the differences between the two ratings. Dots represent means, and lines are standard deviations.

https://doi.org/10.1371/journal.pone.0266204.g003

Effects of visual aids on media bias perceptions

Testing the effects of the visual aids on media bias perceptions in general, we found that both the forewarning message ( F (1, 953) = 8.29, p = .004, ηp² = .009) and the annotations ( F (1, 953) = 24.00, p < .001, ηp² = .025) increased perceived bias, as shown in Fig 4 . However, we found no effect of the political classification ( F (1, 953) = 2.56, p = .110, ηp² = .003) and no systematic higher-order interaction involving any of the manipulations ( p ≥ .085, ηp² ≤ .003). Moreover, there were differences in media bias perceptions of the specific articles ( topic x article position : F (1,953) = 24.44, p < .001, ηp² = .025). The two main effects were by and large robust when tested per item of the media bias perception scale (forewarning had no significant effect on partialness and persuasiveness) or in a MANOVA ( forewarning : F (5, 949) = 5.22, p < .001, ηp² = .027; annotation : F (5, 949) = 6.25, p < .001, ηp² = .032).


The forewarning message, as well as annotations, increased media bias awareness. Dots represent means, and lines are standard deviations.

https://doi.org/10.1371/journal.pone.0266204.g004

Partisan media bias ratings

When considering self-indicated political orientation and its fit to the article position , we found that media bias was perceived less in articles consistent with the reader's political orientation ( F (1,921) = 113.37, p < .001, ηp² = .110): Liberal readers rated conservative articles as more biased than conservative readers did (β = 0.32, p < .001, 95% CI [0.25, 0.38]). Conversely, liberal articles were rated less biased by liberals (β = –0.20, p < .001, 95% CI [–0.27, –0.13]), indicating partisan bias ratings on both sides of the political aisle, which we show in Fig 5 .


Bias awareness increases when the article is not aligned with the person's political position. Shades show 95% confidence intervals of the regression estimation.

https://doi.org/10.1371/journal.pone.0266204.g005

This partisan rating of articles was unaffected by forewarning (F(1, 921) = 1.52, p = .218, ηp² = .002), annotations (F(1, 921) = 0.26, p = .612, ηp² < .001), and political classification (F(1, 921) = 2.72, p = .100, ηp² = .003). Yet, with increasing liberalness of the reader, the combination of forewarning and annotation was slightly less effective for the detection of bias (F(1, 921) = 4.19, p = .041, ηp² = .005). Furthermore, there were some topic-related differences irrelevant to the current hypotheses: higher bias was perceived for the gun laws articles (topic: F(1, 921) = 11.32, p < .001, ηp² = .012), and specifically for the liberal one (topic x article position: F(1, 921) = 23.86, p < .001, ηp² = .025), with one uninterpretable minor higher-order interaction (forewarning x annotation x classification x political orientation x topic: F(1, 921) = 4.10, p = .043, ηp² = .004).

Effects on attitudes

By and large, attitudes on the topics were not affected by the experiment: While attitudes after reading the article were in line with prior attitudes (F(1, 919) = 2415.42, p < .001, ηp² = .724) and individual political orientation (F(1, 919) = 34.54, p < .001, ηp² = .036), neither article position (F(1, 919) = 2.63, p = .105, ηp² = .003) nor any of the visual aids had a general impact (p ≥ .084, ηp² ≤ .003). Likewise, none of the aids interacted with the factor article position (p ≥ .298, ηp² ≤ .001). There were only some minor topic-specific significant effects of the annotation combined with the forewarning (F(1, 919) = 4.77, p = .029, ηp² = .005) and an increased liberalness of attitude with higher topic involvement (F(1, 919) = 4.31, p = .038, ηp² = .005), which we disclose here but deem irrelevant to our hypotheses and research questions.

Discussion

In this study, we tested different techniques to communicate media bias. Our experiment revealed that presenting a forewarning message and text annotations enhanced awareness of biased reporting, while a political classification did not. All three methods (forewarning, annotation, political classification) affected the political ideology rating of the presented article. Furthermore, we found evidence for partisan bias ratings: Participants rated articles that agreed with their general orientation as less biased than articles from the other side of the political spectrum. The positive effect of the forewarning message on media bias ratings, albeit small, is in line with other findings of successful appeals to and reminders of accuracy motives [30]. In addition, it accords with the notion that reflecting on media bias involves some effort [44, 52], so motivating people to engage in this process can help them detect bias.

Regarding the effects of in-text annotations, our finding differs from a previous study of a similar design [19], which did not identify the effect, likely due to a lack of power and less optimal annotations. While news consumers may generally identify outright false or fake news [53], detecting subtle biases can profit from such aids. This indicates that bias detection is far from ideal, particularly in more ambiguous cases. As the in-text annotation and forewarning message effects were independent of each other, participants seemingly do not gain an additional benefit from the combination of aids.

The political classification, on the other hand, only improved the detection of the political alignment of the text (which both other methods also achieved) but did not help in detecting biased language. Consequently, the detection of biased language, and of media bias itself, does not appear to be directly related to recognizing an article's political affiliation.

Our study also replicates findings that the detection of media bias and fake news is affected by individual convictions [30, 40, 42]: Participants detected media bias more readily when there was an incongruence between their own political ideology and that of the article. Such a connection may hold particularly for detecting more subtle media bias and holding an article in high regard, as opposed to successfully identifying outright fake news, for which a reversed effect has been found in some instances (Pennycook & Rand, 2019).

In addition, the interventions were ineffective at lowering such partisan effects. Similarly, attitudes remained relatively stable and were not affected by any of the visual aids. Making biased language more visible and reminding people of potential biases apparently could not help them overcome their ideology when rating the acceptance of an article that is not clearly fake but merely biased. Likewise, the forewarning message successfully altered the motivation to look for biased language but did not decrease the effects of political identity on the rating: While able to detect the political affiliation of an article, participants did not seem capable of separating the stance of the article from its biased use of language, even when prompted to do so. In the same vein, effects were not more pronounced when the political classification was additionally visualized, potentially indicating that the stance is detected even without help (after all, while the manipulations increased the distinction between liberal and conservative articles, the article's position was reliably identified even without any supporting material) and that partisan ratings are not a deliberate derogatory act. Furthermore, the problem of partisan bias ratings also did not increase when the manipulations raised media bias awareness, as cognitive dissonance theory might have predicted.

For future work, we will improve the representativeness of the surveyed sample, which limits far-reaching generalizations at this point. Additionally, we will increase generalizability by employing articles that are politically neutral or exhibit comparatively low bias. Both forewarning and annotations increased bias ratings in this study, but it is unclear whether they would also help readers identify low-bias articles and, accordingly, lead to lower ratings for them. Improving the quality of our annotations by including more annotators is an additional step toward exhausting potential findings. We will also investigate how combinations of the visualizations and strategies work together and conduct expert interviews to determine which applications would be of interest in an applied scenario. Still, the current study shows that two of our interventions raised attention to biased language in media, giving a first insight into the still sparsely tested field of presenting media bias to news consumers.

Furthermore, translating these experimental interventions into applications used by news consumers in the field remains a great challenge. While forewarning messages could be implemented quite simply in the context of other media, for instance as a disclaimer (see [30]), we hope that automated sentence-level classifiers will prove to be an effective tool for creating instant annotation aids, for example as browser add-ons. Even though recent studies show promising accuracy improvements for such classifiers [31, 32], we note that much research still needs to be devoted to finding stable and reliable markers of biased language. Future work could also explore these strategies as teaching tools that train users to identify bias without visual aids. This could offer a framework for a large-scale study that additionally measures previous news consumption habits.

In the context of our digitalized world, where news and information of differing quality are available everywhere, our results provide important insights for media bias research. In the present study, we showed that forewarning messages and annotations increased media bias awareness among readers of selected news articles. We also replicated the well-known hostile media bias, whereby people are more aware of bias in articles from the opposing side of the political spectrum. However, our experiment revealed that the visualizations could not reduce this effect; partisan ratings rather seemed unaffected. In sum, digital tools uncovering and visualizing media bias may help mitigate the negative effects of media bias in the future.

  • 8. Marta Recasens, Cristian Danescu-Niculescu-Mizil, and Dan Jurafsky. 2013. Linguistic Models for Analyzing and Detecting Biased Language. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Association for Computational Linguistics, Sofia, Bulgaria, 1650–1659. Retrieved June 13, 2020 from https://www.aclweb.org/anthology/P13-1162
  • 11. Norris P. 2000. A virtuous circle: Political communications in postindustrial societies. Cambridge University Press. Retrieved from https://doi.org/10.1017/CBO9780511609343
  • 15. Timo Spinde. 2021. An Interdisciplinary Approach for the Automated Detection and Visualization of Media Bias in News Articles. In 2021 IEEE International Conference on Data Mining Workshops (ICDMW) . https://doi.org/10.1109/ICDMW53433.2021.00144
  • 16. Lazarsfeld P. F., Berelson B., and Gaudet H. 1944. The people’s choice . Columbia University Press. Retrieved from https://doi.org/10.1007/978-3-531-90400-9_62
  • 18. Festinger L. 1957. A theory of cognitive dissonance . Stanford University Press.
  • 19. Timo Spinde, Felix Hamborg, Karsten Donnay, Angelica Becerra, and Bela Gipp. 2020. Enabling News Consumers to View and Understand Biased News Coverage: A Study on the Perception and Visualization of Media Bias. In Proceedings of the ACM/IEEE Joint Conference on Digital Libraries in 2020 , ACM, Virtual Event China, 389–392. https://doi.org/10.1145/3383583.3398619
  • 21. Filipe Ribeiro, Lucas Henrique, Fabricio Benevenuto, Abhijnan Chakraborty, Juhi Kulshrestha, Mahmoudreza Babaei, et al. 2018. Media bias monitor: Quantifying biases of social media news outlets at large-scale. In Proceedings of the International AAAI Conference on Web and Social Media .
  • 23. Souneil Park, Seungwoo Kang, Sangyoung Chung, and Junehwa Song. 2009. NewsCube: delivering multiple aspects of news to mitigate media bias. In Proceedings of the 27th international conference on Human factors in computing systems—CHI 09 , ACM Press, Boston, MA, USA, 443. https://doi.org/10.1145/1518701.1518772
  • 31. Wei-Fan Chen, Khalid Al Khatib, Henning Wachsmuth, and Benno Stein. 2020. Analyzing Political Bias and Unfairness in News Articles at Different Levels of Granularity. In Proceedings of the Fourth Workshop on Natural Language Processing and Computational Social Science , Association for Computational Linguistics, Online, 149–154. https://doi.org/10.18653/v1/2020.nlpcss-1.16
  • 32. Christoph Hube and Besnik Fetahu. 2019. Neural Based Statement Classification for Biased Language. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining , ACM, Melbourne VIC Australia, 195–203. https://doi.org/10.1145/3289600.3291018
  • 33. Timo Spinde, Felix Hamborg, and Bela Gipp. 2020. Media Bias in German News Articles: A Combined Approach. In Proceedings of the 8th International Workshop on News Recommendation and Analytics (INRA 2020), Virtual event. https://doi.org/10.1007/978-3-030-65965-3_41
  • 34. Timo Spinde, Lada Rudnitckaia, Felix Hamborg, and Bela and Gipp. 2021. Identification of Biased Terms in News Articles by Comparison of Outlet-specific Word Embeddings. In Proceedings of the 16th International Conference (iConference 2021) .
  • 36. J. An, M. Cha, K. Gummadi, J. Crowcroft, and D. Quercia. 2012. Visualizing media bias through Twitter. In Sixth International AAAI Conference on Weblogs and Social Media .
  • 46. Timo Spinde, Kanishka Sinha, Norman Meuschke, and Bela Gipp. 2021. TASSY—A Text Annotation Survey System. In Proceedings of the ACM/IEEE Joint Conference on Digital Libraries (JCDL) .
  • 47. Timo Spinde, Manuel Plank, Jan-David Krieger, Terry Ruas, Bela Gipp, and Akiko Aizawa. 2021. Neural Media Bias Detection Using Distant Supervision With BABE—Bias Annotations By Experts. In Findings of the Association for Computational Linguistics: EMNLP 2021, Association for Computational Linguistics, Punta Cana, Dominican Republic, 1166–1177. https://doi.org/10.18653/v1/2021.findings-emnlp.101
  • 51. Timo Spinde, Christina Kreuter, Wolfgang Gaissmaier, Felix Hamborg, Bela Gipp, and Helge Giese. 2021. Do You Think It’s Biased? How To Ask For The Perception Of Media Bias. In Proceedings of the ACM/IEEE Joint Conference on Digital Libraries (JCDL) .

We are tackling media bias head on.



AllSides Media Bias Chart

The AllSides Media Bias Chart™ helps you to easily identify different perspectives and political leanings in the news so you can get the full picture and think for yourself.

Knowing the political bias of media outlets allows you to consume a balanced news diet and avoid manipulation, misinformation, and fake news. Everyone is biased, but hidden media bias misleads and divides us. The AllSides Media Bias Chart™ is based on our full and growing list of over 1,400 media bias ratings. These ratings inform our balanced newsfeed.

The AllSides Media Bias Chart™ is more comprehensive in its methodology than any other media bias chart on the Web. While other media bias charts show you the subjective opinion of just one or a few people, our ratings are based on multipartisan, scientific analysis, including expert panels and surveys of thousands of everyday Americans.


This chart does not rate accuracy or credibility. A publication can be accurate, yet biased. Learn why AllSides doesn't rate accuracy.

Unless otherwise noted, these bias ratings are based on online written content , not TV, radio, or broadcast content.

Here's how the AllSides Media Bias Chart™ differs from other media bias charts:

  • Data is gathered from many people across the political spectrum — not just one biased individual or a very small, elite group. We have a patent on rating bias and use multiple methodologies, not an algorithm. Our methods are: Blind Bias Surveys of Americans, Editorial Reviews by a multipartisan team of panelists who look for common types of media bias, independent reviews, and third-party data.
  • Our research spans years — we started rating media bias back in 2012.
  • We give separate bias ratings for the news and opinion sections for some media outlets, giving you a more precise understanding.
  • Transparent methodology: we tell you how we arrived at the bias rating for each outlet. Search for any media outlet here.
  • We consider and review data and research conducted by third parties, like universities and other groups.
  • Your opinion matters: we take into account hundreds of thousands of community votes on our ratings. Votes don't determine our ratings, but are valuable feedback that may prompt more research. We know that a mixed group of experts and non-experts will provide a more accurate result, so we solicit and consider opinions of average people.
  • We don't rate accuracy — just bias. Our ratings help readers to understand that certain facts may be missing if they read only outlets from one side of the political spectrum.

Americans are more polarized than ever — if you’re like us, you see it in the news and on your social media feeds every day. Bias is natural, but hidden bias and fake news mislead and divide us. That’s why AllSides has rated the media bias of over 1,400 sources and put them into a media bias chart. The AllSides Media Bias Chart™ shows the political bias of some of the most-read sources in America.

The outlets featured on the AllSides Media Bias Chart™ have varying degrees of influence. Read about whether conservative or liberal media outlets are more widely read.

Frequently Asked Questions about the AllSides Media Bias Chart

  • Why does the bias of a media outlet matter?
  • How does AllSides calculate media bias?
  • How did AllSides decide which media outlets to include on the chart?
  • What do the bias ratings mean?
  • Does a Center rating mean neutral, unbiased, and better?
  • Why are some media outlets on the chart twice?
  • Does AllSides rate which outlets are most factual or accurate?
  • Where can I see past versions of the chart?
  • Where can I learn more?
  • I disagree with your media bias ratings. Where can I give you feedback?

News media, social media, and search engines have become so biased, politicized, and personalized that we are often stuck inside filter bubbles, where we’re only exposed to information and ideas we already agree with. When bias is hidden and we see only facts, information, and opinions that confirm our existing beliefs, a number of negative things happen: 1) we become extremely polarized as a nation as we misunderstand or hate "the other side," believing they are extreme, hateful, or evil; 2) we become more likely to be manipulated into thinking, voting, or behaving a certain way; 3) we become limited in our ability to understand others, problem-solve, and compromise; 4) we become unable to find the truth.

It feels good to hear from people who think just like us, and media outlets have an incentive to be partisan — it helps them to earn ad revenue, especially if they use sensationalism and clickbait . But when we stay inside a filter bubble, we may miss important ideas and perspectives. The mission of AllSides is to free people from filter bubbles so they can better understand the world — and each other. Making media bias transparent helps us to easily identify different perspectives and expose ourselves to a variety of information so we can avoid being manipulated by partisan bias and fake news. This improves our country long-term, helping us to understand one another, solve problems, know the truth, and make better decisions.

Media bias has contributed to Americans becoming more politically polarized .

At AllSides, we reduce the one-sided information flow by providing balanced news  from both liberal and conservative news sources, and over 1,400 media bias ratings . Our tools help you to better understand diverse perspectives and reduce harmful, hateful polarization in America. By making media bias transparent and consuming a balanced news diet, we can arm ourselves with a broader view — and find the truth for ourselves.


Our media bias ratings are based on multipartisan, scientific analysis. Our methodologies include Blind Bias Surveys of Americans, Editorial Reviews by a panel of experts trained to spot bias, independent reviews, third-party data, and community feedback. Visit our Media Bias Rating Methodology page to learn more.

Some things we took into account include whether the source was a top outlet in terms of traffic according to Pew Research Center and Similarweb . We also took into account how often people search for the bias of that outlet on Google and visit AllSides as a result.

We also include outlets that are good representations of a certain perspective or ideology. For example, Jacobin magazine is included because it represents socialist thought, while Reason magazine is included because it represents libertarian thought.

These are subjective judgements made by AllSides and people across the country. Here is a rough approximation of what the media bias ratings mean:

Left - Lean Left - Center - Lean Right - Right

Center doesn't mean better! A Center media bias rating does not mean the source is neutral, unbiased, or reasonable, just as Left and Right do not necessarily mean the source is extreme, wrong, or unreasonable. A Center bias rating simply means the source or writer rated does not predictably publish content that tilts toward either end of the political spectrum — conservative or liberal. A media outlet with a Center rating may omit important perspectives, or run individual articles that display bias, while not displaying a predictable bias. Center outlets can be difficult to determine, and there is rarely a perfect Center outlet: some of our outlets rated Center can be better thought of as Center-Left or Center-Right, something we clarify on individual source pages.

While it may be easy to think that we should only consume media from Center outlets, AllSides believes reading in the Center is not the answer. By reading only Center outlets, we may still encounter bias and omission of important issues and perspectives. For this reason, it is important to consume a balanced news diet across the political spectrum, and to read horizontally across the bias chart. Learn more about what an AllSides Media Bias Rating™ of Center means here.

We sometimes provide separate media bias ratings for a source’s news content and its opinion content. This is because some outlets, such as the Wall Street Journal and The New York Times , have a notable difference in bias between their news and opinion sections.

For example, on this chart you will see The New York Times Opinion is rated as a Left media bias, while the New York Times news is rated Lean Left .

When rating an opinion page, AllSides takes into account the outlet's editorial board and its individual opinion page writers. The editorial board’s bias is weighted, and affects the final bias rating by about 60%.

For example, the New York Times has a range of individual Opinion page writers, who have a range of biases. We rate the bias of commentators individually as much as possible. Yet The New York Times Editorial Board has a clear Left media bias. We take into account both the overall biases of the individual writers and the Editorial Board to arrive at a final bias rating of Left for the New York Times opinion section .

See how we provide individual bias ratings for New York Times opinion page writers here.
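The weighting described above can be sketched numerically. Note that this is only our illustration: AllSides states the editorial board counts for about 60%, but the numeric scale (Left = –2 through Right = +2) and the example writer ratings below are hypothetical assumptions, not AllSides data.

```python
# Hypothetical numeric scale for AllSides-style ratings (our assumption).
SCALE = {"Left": -2.0, "Lean Left": -1.0, "Center": 0.0,
         "Lean Right": 1.0, "Right": 2.0}

def blended_opinion_rating(board_rating, writer_ratings, board_weight=0.6):
    """Blend the editorial board's bias (weighted ~60%, per AllSides)
    with the average bias of the individual opinion writers."""
    board = SCALE[board_rating]
    writers = sum(SCALE[w] for w in writer_ratings) / len(writer_ratings)
    return board_weight * board + (1 - board_weight) * writers

# A Left editorial board with a mixed (hypothetical) set of writers:
print(round(blended_opinion_rating("Left",
      ["Left", "Lean Left", "Center", "Lean Right"]), 2))  # -1.4
```

With the board weighted at 60%, the blended score lands closer to the board's Left rating than the writers' average alone would suggest.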

AllSides does not rate outlets based on accuracy or factual claims — this is a bias chart, not a credibility chart. It speaks to perspective only.

We don't rate accuracy because we don't assume we know the truth on all things. The left and right often strongly disagree on what is truth and what is fiction. Read more about why AllSides doesn't rate accuracy.

We disagree with the idea that the more left or right an outlet is, the less credibility it has. There’s nothing wrong with having bias or an opinion, but hidden bias misleads and divides us. Just because an outlet is credible doesn’t mean it isn’t biased; likewise, just because an outlet is biased doesn’t mean it isn’t credible.

Learn more about past versions of the chart on our blog:

  • Version 9.2
  • Version 9.1
  • Version 7.2
  • Version 7.1
  • Version 5.1
  • Version 1.1

Visit the AllSides Media Bias Ratings™ page and search for any outlet for a full summation of our research and how we arrived at the rating.

Visit our company FAQ for more information about AllSides.

You can vote on whether or not you agree with media bias ratings, contact us, or sign up to participate in our next Blind Bias Survey.


What do we research and why?


The original idea behind our research was to develop a system that identifies and presents media bias to any news consumer. We collected features and trained classifiers, but soon noticed that looking at the problem from a purely computer-scientific perspective is not enough.

Therefore, we have extended the research to other disciplines besides Computer Science, mainly Psychology, Linguistics, Economics, and Political Science. For all areas, we build data sets covering various aspects of media bias. We research how media bias is perceived, how it is represented, and how it influences our individual and collective decision-making. We also try to understand how such a complex construct can be visualized understandably, and how and where readers are interested in knowing about bias in a news outlet. We have also developed a methodology to build a reliable and detailed data set about media bias, which we are currently applying to create the biggest data set on the topic. Many complex questions arise from this research background, so we are gathering knowledge from expert partners worldwide. In the following, we give an overview of the questions we address. Feel free to contact us, take a look at our team and partners, join us as a partner, student, or Ph.D. candidate, or see our publications, which are naturally always a little behind the current state of the project due to journal and conference reviewing processes. We believe that we will be able to develop a tool that can change the way users consume news.

So far, we cover the following research areas:

  • Perception & Visualization of Media Bias
  • Linguistics: the connection between linguistic features & figures and media bias
  • Computer Science: automated detection and efficient storage of biased language and contents
  • Economics/Political Science: causality of bias in political and economic processes
  • Media bias data set creation: developing various data sets on media bias


From a psychological perspective, we research how bias is perceived, both in real-world and experimental settings. We study how to visualize media bias and test different types of visualizations (such as a left-right bar, in-text annotations, deeper explanations, or inoculation messages) in diverse online surveys and a self-built browser plugin. We also try to identify what role a person's background plays in whether a text is interpreted as biased. Lastly, we collected over 800 questions from the past 50 years of text perception research, reduced them semantically, and are developing a well-tested questionnaire about the perception of media bias and of text in general. Our motivation is that the style of the question asked about media bias plays a significant role in the answer, which existing research has not accounted for. Our first two major publications on these issues are finished and will be published shortly. In case you need resources in advance, contact us.


From a linguistics perspective, we try to understand what role linguistic features and characteristics play in the perception and identification of media bias. How do news consumers react to rhetorical figures, syntactic elements, or specific words? While not all media bias can be explained linguistically, the research area offers huge descriptive power. In some cases, a specific linguistic figure or pattern strongly increases the probability of a biased perception. Of our research areas, linguistics is the youngest, even though much of bias research covers related problems, as shown in one of our journal papers here.


Computer Science is our largest area of interest. We are developing methods to detect media bias reliably and automatically. To achieve that goal, we develop new ways to create data sets (see the last section of this list) for our classifiers. We experiment with classic machine learning as well as deep learning, implement multi-task learning for bias detection, and try to normalize text from different sources (such as Twitter or Wikipedia) to a journalistic style, so that we can distinguish content and style more easily and build more powerful data sets more quickly. We investigate how content can be measured explicitly and implicitly and then compared among outlets, to answer the question of to what extent different meanings and perceptions of the same words in different situations and outlets can be explained. We also built a graph database prototype containing a few hundred million articles from US news companies, to which we apply our classifiers and modern network analysis techniques. To summarize, our main computer-scientific research aims are building powerful classification algorithms to detect bias, developing a database and interface that enable any user to efficiently apply methods to any data set, and identifying the exact meaning of context for each word.
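As a toy illustration of sentence-level detection, lexical cues of biased language can be flagged with a simple word list. Note that this is not our actual classifier, which is a learned model; the tiny lexicon below is a made-up example for demonstration only:

```python
# Toy lexicon of loaded terms -- a hypothetical example, not a resource
# from this project. Real classifiers learn such cues from annotated data.
LOADED_TERMS = {"radical", "disastrous", "outrageous", "regime", "scheme"}

def bias_cues(sentence):
    """Return the loaded terms appearing in a sentence (case-insensitive)."""
    tokens = [t.strip(".,!?;:\"'").lower() for t in sentence.split()]
    return [t for t in tokens if t in LOADED_TERMS]

print(bias_cues("The senator unveiled a disastrous scheme yesterday."))
# ['disastrous', 'scheme']
print(bias_cues("The senator presented a budget proposal yesterday."))
# []
```

A learned sentence classifier replaces the fixed word list with features induced from annotated data, which is why data set creation is central to this research area.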

Economics & Political Science


Over the past decade, economists and political scientists have developed a strong interest in media bias. A core question of this strand of research is whether bias is driven by the demand side of the news market or the supply side. Demand-side explanations posit that audiences often prefer to consume content that is compatible with their beliefs, and profit-maximizing media companies can increase their sales if they pander to these beliefs. Supply-side explanations emphasize the role of journalists, editors, and media owners in causing bias. In other cases, bias is a consequence of the influence of political actors and lobbies. Empirical studies in economics and political science aim to disentangle alternative explanations and shed light on the causal arrows. In addition, researchers investigate the consequences of bias, such as effects on voting or on belief polarization. For instance, this study uses plausibly exogenous variation in unemployment news to estimate the causal effect of left-digit bias on voting in US gubernatorial elections.


We create different data sets for various aspects of media bias. To do so, we built our own mixed annotation and survey platform, TASSY, and developed guidelines that teach bias annotators to understand the concept in more depth. We are also developing a game that teaches players to read the news more critically while simultaneously collecting bias annotations. So far, we have published two data sets, MBIC and BABE. You can find them here.

There are two different types of media bias. One bias, which we refer to as ideology, reflects a news outlet's desire to affect reader opinions in a particular direction. The second bias, which we refer to as spin, reflects the outlet's attempt to simply create a memorable story. We examine competition among media outlets in the presence of these biases. Whereas competition can eliminate the effect of ideological bias, it actually exaggerates the incentive to spin stories.




Mass Media: An Undergraduate Research Guide: Media Bias

  • Newspaper Source Plus: includes 1,520 full-text newspapers, providing more than 28 million full-text articles.
  • Newspaper Research Guide This guide describes sources for current and historical newspapers available in print, electronically, and on microfilm through the UW-Madison Libraries. These sources are categorized by pages: Current, Historical, Local/Madison, Wisconsin, US, Alternative/Ethnic, and International.

Organizations

  • Center for Media and Democracy's PR Watch Madison, WI-based nonprofit organization that focuses on "investigating and exposing the undue influence of corporations and front groups on public policy, including PR campaigns, lobbying, and electioneering"
  • CAMERA The Committee for Accuracy in Middle East Reporting in America describes itself as "a media-monitoring, research and membership organization devoted to promoting accurate and balanced coverage of Israel and the Middle East"
  • Fairness & Accuracy in Reporting (FAIR) "FAIR, the national media watch group, has been offering well-documented criticism of media bias and censorship since 1986"
  • Media Research Center Conservative watch group with a "commitment to neutralizing left-wing bias in the news media and popular culture"

About Media Bias

This guide focuses on bias in mass media coverage of news and current events. It includes concerns of sensationalism, allegations of media bias, and criticism of media's increasingly profit-motivated ethics. It also includes examples of various types of sources coming from particular partisan viewpoints.

Try searching these terms using the resources linked on this page: media bias, sensational* AND (news or media), bias AND media coverage, (liberal or conservative) AND bias, [insert topic] AND media bias, media manipulation, misrepresent* AND media

Overview Resources - Background Information

  • International Encyclopedia of Media Studies This encyclopedia covers the broad field of "media studies," which encompasses print journalism, radio, film, TV, photography, computing, mobile phones, and digital media.
  • Opposing Viewpoints Resource Center (OVRC) provides viewpoint articles, topic overviews, statistics, primary documents, links to websites, and full-text magazine and newspaper articles related to controversial social issues.
  • FactCheck.org A nonpartisan, nonprofit "consumer advocate" for voters that aims to reduce the level of deception and confusion in U.S. politics by monitoring the factual accuracy of what is said by major U.S. political players in the form of TV ads, debates, speeches, interviews and news releases.


Articles - Scholarly and Popular

  • Academic Search Includes scholarly and popular articles on many topics.
  • Communication & Mass Media Complete Includes articles on communication and media topics.
  • ProQuest One Business (formerly ABI Inform) covers a wide range of business topics including accounting, finance, management, marketing and real estate.
  • Project Muse Disciplines covered include art, anthropology, literature, film, theatre, history, ethnic and cultural studies, music, philosophy, religion, psychology, sociology and women's studies.
  • JSTOR: The Scholarly Journal Archive A full-text journal database that provides access to articles on many different topics.

Statistics and Data

  • Data Citation Index The Data Citation Index provides a single point of access to quality research data from repositories across disciplines and around the world. Through linked content and summary information, this data is displayed within the broader context of the scholarly research, enabling users to gain perspective that is lost when data sets or repositories are viewed in isolation.
  • Last Updated: Apr 24, 2024 3:01 PM
  • URL: https://researchguides.library.wisc.edu/massmediaURG

Media Bias/Fact Check

  • April 29, 2024 | Biden and Trump supporters sharply divided by the media they consume
  • April 29, 2024 | The Latest Fact Checks curated by Media Bias Fact Check 04/29/2024
  • April 29, 2024 | Daily Source Bias Check: Arc Digital
  • April 28, 2024 | The Latest Fact Checks curated by Media Bias Fact Check 04/28/2024 (Weekend Edition)
  • April 28, 2024 | Daily Source Bias Check: Curve Magazine

We are the most comprehensive media bias resource on the internet. There are currently 7800+ media sources, journalists, and politicians listed in our database, with more added every day. Don’t be fooled by Questionable sources. Use the search feature above (in the header) to check the bias of any source by name or URL.

MBFC Media and Fact Check News

Media News

Least Biased , Original

Biden and Trump supporters sharply divided by the media they consume

  Supporters of President Joe Biden and former President Donald Trump diverge sharply in their news sources, according to recent data from an NBC News…

Daily Curated Fact Checks by MBFC

Fact Check , Facts Matter , Least Biased , Original

The Latest Fact Checks curated by Media Bias Fact Check 04/29/2024

Media Bias Fact Check selects and publishes fact checks from around the world. We only utilize fact-checkers who are either a signatory of the International…

Bias and Credibility Ratings by Media Bias Fact Check

Bias Check , Original

Daily Source Bias Check: Arc Digital

LEAST BIASED These sources have minimal bias and use very few loaded words (wording that attempts to influence an audience by using an appeal to…

The Latest Fact Checks curated by Media Bias Fact Check 04/28/2024 (Weekend Edition)

Media Bias Fact Check selects and publishes fact checks from around the world. We only utilize fact-checkers that are either a signatory of the International…

Daily Source Bias Check: Curve Magazine

LEFT BIAS These media sources are moderate to strongly biased toward liberal causes through story selection and/or political affiliation.  They may utilize strong loaded words…

Literacy Quiz

MBFC’s Weekly Media Literacy Quiz Covering the Week of Apr 21st – Apr 27th

Welcome to our weekly media literacy quiz. This quiz will test your knowledge of the past week’s events with a focus on facts, misinformation, bias,…

The Latest Fact Checks curated by Media Bias Fact Check 04/27/2024 (Weekend Edition)

Daily Source Bias Check: Crosscut

LEFT-CENTER BIAS These media sources have a slight to moderate liberal bias.  They often publish factual information that utilizes loaded words (wording that attempts to…

Satirical News Site ‘The Onion’ Acquired by Global Tetrahedron

  The Onion, a satirical news site, has been sold to Global Tetrahedron, a Chicago-based firm formed by four digital media veterans who are fans…

The Latest Fact Checks curated by Media Bias Fact Check 04/26/2024

Daily Source Bias Check: Covert Geopolitics

CONSPIRACY-PSEUDOSCIENCE Sources in the Conspiracy-Pseudoscience category may publish unverifiable information that is not always supported by evidence. These sources may be untrustworthy for credible/verifiable information, therefore…

Factual News , Least Biased , Original

Gateway Pundit Declares Bankruptcy Amid Defamation Lawsuits

  Jim Hoft, founder of the conspiracy theory website Gateway Pundit, announced that the company is declaring bankruptcy while battling ongoing defamation lawsuits. The parent…

The Latest Fact Checks curated by Media Bias Fact Check 04/25/2024

Daily Source Bias Check: Sault Star

RIGHT-CENTER BIAS These media sources are slight to moderately conservative in bias. They often publish factual information that utilizes loaded words (wording that attempts to…

The Latest Fact Checks curated by Media Bias Fact Check 04/24/2024

Verified Factual News from NFN (News Facts Network)

A Republican group critical of former President Trump has taken aim at prominent GOP figures,... The post Anti-Trump Republican Group Targets […]

Hunter Biden’s legal team plans to sue Fox News, alleging defamation, unauthorized commercial use of... The post Hunter Biden’s Legal Team Plans […]

Claim: Social media posts have claimed that Washington state passed a bill offering cash incentives... The post Fact vs. Fiction: The Claim of […]

Rep. Marjorie Taylor Greene (R-Ga.) is determined to remove Speaker Mike Johnson (R-La.), asserting that... The post Rep. Marjorie Taylor Greene […]

The Arizona Republican Party has chosen state Sen. Jake Hoffman, recently indicted for his role... The post Arizona GOP taps ‘fake elector’ for […]

Fox News anchor Maria Bartiromo questioned House Judiciary Committee Chair Jim Jordan (R-Ohio) about the... The post Maria Bartiromo questions Jim […]

Senator Bernie Sanders (I-Vt.) renewed his criticism of Israeli Prime Minister Benjamin Netanyahu, labeling Israel’s... The post Bernie Sanders […]

We are used by:


Check out a list of Educational Institutions and Media Outlets that use Media Bias Fact Check as a resource.



Published on 23.4.2024 in Vol 26 (2024)

Electronic Media Use and Sleep Quality: Updated Systematic Review and Meta-Analysis

Authors of this article:


  • Xiaoning Han*, PhD;
  • Enze Zhou*, MA;
  • Dong Liu*, PhD

School of Journalism and Communication, Renmin University of China, Beijing, China

*all authors contributed equally

Corresponding Author:

Dong Liu, PhD

School of Journalism and Communication

Renmin University of China

No. 59 Zhongguancun Street, Haidian District

Beijing, 100872

Phone: 86 13693388506

Email: [email protected]

Background: This paper explores the widely discussed relationship between electronic media use and sleep quality, for which prior research indicates negative effects due to various factors. However, existing meta-analyses on the topic have some limitations.

Objective: The study aims to analyze and compare the impacts of different digital media types, such as smartphones, online games, and social media, on sleep quality.

Methods: Adhering to Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, the study performed a systematic meta-analysis of literature across multiple databases, including Web of Science, MEDLINE, PsycINFO, PubMed, Science Direct, Scopus, and Google Scholar, from January 2018 to October 2023. Two trained coders coded the study characteristics independently. The effect sizes were calculated using the correlation coefficient as a standardized measure of the relationship between electronic media use and sleep quality across studies. The Comprehensive Meta-Analysis software (version 3.0) was used to perform the meta-analysis. Statistical methods such as funnel plots were used to assess the presence of asymmetry and a p -curve test to test the p -hacking problem, which can indicate publication bias.

Results: Following a thorough screening process, the study involved 55 papers (56 items) with 41,716 participants from over 20 countries, classifying electronic media use into “general use” and “problematic use.” The meta-analysis revealed that electronic media use was significantly linked with decreased sleep quality and increased sleep problems with varying effect sizes across subgroups. A significant cultural difference was also observed in these effects. General use was associated with a significant decrease in sleep quality ( P <.001). The pooled effect size was 0.28 (95% CI 0.21-0.35; k =20). Problematic use was associated with a significant increase in sleep problems ( P ≤.001). The pooled effect size was 0.33 (95% CI 0.28-0.38; k =36). The subgroup analysis indicated that the effect of general smartphone use and sleep problems was r =0.33 (95% CI 0.27-0.40), which was the highest among the general group. The effect of problematic internet use and sleep problems was r =0.51 (95% CI 0.43-0.59), which was the highest among the problematic groups. There were significant differences among these subgroups (general: Q between =14.46, P =.001; problematic: Q between =27.37, P <.001). The results of the meta-regression analysis using age, gender, and culture as moderators indicated that only cultural difference in the relationship between Eastern and Western culture was significant ( Q between =6.69; P =.01). All funnel plots and p -curve analyses showed no evidence of publication and selection bias.

Conclusions: Despite some variability, the study overall confirms the correlation between increased electronic media use and poorer sleep outcomes, which is notably more significant in Eastern cultures.

Introduction

Sleep is vital to our health. Research has shown that high sleep quality can lead to improvements in a series of health outcomes, such as an improved immune system, better mood and mental health, enhanced physical performance, lower risk of chronic diseases, and a longer life span [ 1 - 5 ].

Electronic media refers to forms of media or communication that use electronic devices or technology to create, distribute, and display content. This can include various forms of digital media such as smartphones, tablets, instant messaging, phone calls, social media, online games, short video platforms, etc. Electronic media has permeated every aspect of our lives [ 6 ]. Many prefer to use smartphones or tablets before sleep, which can negatively affect sleep in many aspects, including delayed sleep onset, disrupted sleep patterns, shortened sleep duration, and poor sleep quality [ 7 - 10 ]. Furthermore, problematic use occurs when the behavior surpasses a certain limit. In this study, problematic use of electronic media is not solely determined by the amount of time spent on these platforms, but rather by behavioral indicators that suggest an unhealthy or harmful relationship with them.

Smartphone or tablet use can affect sleep quality in many ways. First, the use of these devices may directly displace, delay, or interrupt sleep time, resulting in inadequate sleep quantity [ 11 ]. The sound of notifications and vibrations of these devices may interrupt sleep. Second, the screens of smartphones and tablets emit blue light, which can suppress the production of melatonin, the hormone responsible for regulating sleep-wake cycles [ 12 ]. Third, consuming emotionally charged content, such as news, suspenseful movies, or engaging in online arguments, can increase emotional arousal, making it harder to relax and fall asleep. This emotional arousal can also lead to disrupted sleep and nightmares [ 13 ]. Finally, the use of electronic devices before bedtime can lead to a delay in bedtime and a shortened sleep duration, as individuals may lose track of time while engaging with their devices. This can result in a disrupted sleep routine and decreased sleep quality [ 14 ].

Some studies have conducted meta-analyses on screen media use and sleep outcomes in 2016, 2019, and 2021 [ 15 - 17 ]. However, these studies had their own limitations. First, the number of studies included in their meta-analyses was small (around 10). Second, these studies only focused on 1 aspect of the effect of digital media on sleep quality. For example, Carter et al [ 16 ] focused only on adolescents, and both Alimoradi et al [ 15 ] and Kristensen et al [ 17 ] only reviewed the relationship between problematic use of digital media or devices and sleep quality. Despite the high heterogeneity found in these meta-analyses, none compared the effects of different digital media or devices. This study aims to clarify and compare the effects of these different channels.

Literature Search

The research adhered to Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines ( Multimedia Appendix 1 ) and followed a predetermined protocol [ 18 , 19 ]. As the idea and scope of this study evolved over time, the meta-analysis was not preregistered. However, the methodology was defined a priori and strictly followed to reduce biases, and the possible influence of post hoc decisions was minimized. All relevant studies in English, published from January 1, 2018, to October 9, 2023, were searched. We searched the following databases: Web of Science, MEDLINE, PsycINFO, PubMed, Science Direct, Scopus, and Google Scholar. The abstracts were examined manually. The keywords used to search were the combination of the following words: “sleep” OR “sleep duration” OR “sleep quality” OR “sleep problems” AND “electronic media” OR “smartphone” OR “tablet” OR “social media” OR “Facebook” OR “Twitter” OR “online gaming” OR “internet” OR “addiction” OR “problematic” ( Multimedia Appendix 2 ). Additionally, the reference lists of relevant studies were examined.

Two trained coders independently screened the titles and abstracts of the identified papers for eligibility, followed by a full-text review of the selected studies. Discrepancies between the coders were resolved through discussion until a consensus was reached. The reference lists of the included studies were also manually screened to identify any additional relevant studies. Through this rigorous process, we ensured a comprehensive and replicable literature search that could contribute to the robustness of our meta-analysis findings.

Inclusion or Exclusion Criteria

Titles and abstracts from search results were scrutinized for relevance, with duplicates removed. Full texts of pertinent papers were obtained, and their eligibility for inclusion was evaluated. We mainly included correlational studies that used both continuous measures of time spent using electronic media use and sleep quality. Studies must have been available in English. Four criteria were used to screen studies: (1) only peer-reviewed empirical studies, published in English, were considered for inclusion in the meta-analysis; (2) the studies should report quantitative statistics on electronic media use and sleep quality, including sample size and essential information to calculate the effect size, and review papers, qualitative studies, case studies, and conference abstracts were excluded; (3) studies on both general use and problematic use of electronic media or devices should be included; and (4) only studies that used correlation, regression, or odds ratio were included to ensure consistency.

Study Coding

Two trained coders were used to code the characteristics of the studies independently. Discrepancies were discussed with the first author of the paper to resolve. Sample size and characteristics of participants were coded: country, female ratio, average age, publication year, and electronic types. Effect sizes were either extracted directly from the original publications or manually calculated. If a study reported multiple dependent effects, the effects were merged into one. If a study reported multiple independent effects from different samples, the effects were included separately. Additionally, to evaluate the study quality, the papers were classified into 3 tiers (high, middle, and low) according to Journal Citation Reports 2022 , a ranking of journals based on their impact factor as reported in the Web of Science. The few unindexed papers were rated based on their citation counts as reported in Google Scholar.

Meta-Analysis and Moderator Analyses

The effect sizes were calculated using the correlation coefficient ( r ) as a standardized measure of the relationship between electronic media or device use and sleep quality across studies. When studies reported multiple effect sizes, we selected the one that best represented the overall association between electronic media use and sleep quality. If studies did not provide correlation coefficients, we converted other reported statistics (eg, standardized regression coefficients) into correlation coefficients using established formulas. Once calculated, the correlation coefficients were transformed into Fisher z scores to stabilize the variance and normalize the distribution.
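The transformations described above can be sketched in Python (a minimal illustration; the authors used Comprehensive Meta-Analysis software, and the beta-to-r conversion shown here is the Peterson–Brown approximation, one of several "established formulas" the paper does not name):

```python
import math

def r_to_z(r):
    """Fisher z transformation: stabilizes the variance of r."""
    return 0.5 * math.log((1 + r) / (1 - r))

def z_to_r(z):
    """Back-transform a Fisher z score to the correlation scale."""
    return (math.exp(2 * z) - 1) / (math.exp(2 * z) + 1)

def z_variance(n):
    """Sampling variance of Fisher z depends only on sample size n."""
    return 1 / (n - 3)

def beta_to_r(beta):
    """Peterson-Brown approximation for converting a standardized
    regression coefficient to r (an assumed choice; the paper does
    not specify which formula was used)."""
    return beta + 0.05 * (1 if beta >= 0 else 0)

z = r_to_z(0.28)    # e.g., the pooled r reported for general use
r_back = z_to_r(z)  # recovers the original correlation
```

Because the variance of z depends only on n, studies of different sizes can be weighted consistently before back-transforming the pooled z to r.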

Previous meta-studies have shown high levels of heterogeneity. Hence, the random-effects model was adopted for all analyses. To explore potential factors contributing to the heterogeneity and to further understand the relationship between electronic media use and sleep quality, we conducted moderator analyses. The following categorical and continuous moderators were examined: media types (online gaming, social media, smartphone, or internet), participants’ average age, culture, female ratio, and sleep quality assessment method. For categorical moderators, subgroup analyses were performed, while for continuous moderators, meta-regression analyses were conducted. All analyses were completed in the Comprehensive Meta-Analysis software (version 3.0; Biostat, Inc).
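As a sketch of what random-effects pooling involves, the following implements a simplified DerSimonian–Laird estimator over Fisher z scores (an illustration only; the actual analyses were run in Comprehensive Meta-Analysis, whose implementation may differ):

```python
import math

def pool_random_effects(z_scores, variances):
    """Simplified DerSimonian-Laird random-effects pooling of Fisher z scores."""
    w = [1 / v for v in variances]  # inverse-variance weights
    fixed = sum(wi * zi for wi, zi in zip(w, z_scores)) / sum(w)
    # Cochran's Q quantifies heterogeneity around the fixed-effect mean
    q = sum(wi * (zi - fixed) ** 2 for wi, zi in zip(w, z_scores))
    df = len(z_scores) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)  # between-study variance estimate
    # Re-weight, adding tau2 to every study's within-study variance
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * zi for wi, zi in zip(w_star, z_scores)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, se, tau2

# Three hypothetical studies with heterogeneous effects:
pooled_z, se, tau2 = pool_random_effects([0.1, 0.3, 0.5], [0.01, 0.01, 0.01])
```

When heterogeneity is present (tau2 > 0), the random-effects standard error exceeds its fixed-effect counterpart, which is why the model was preferred here.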

Publication Bias

We used statistical methods such as funnel plots to assess the presence of asymmetry and a p -curve test to test the p -hacking problem, which may indicate publication bias. In case of detected asymmetry, we applied techniques such as the trim-and-fill method to adjust the effect size estimates.
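One common formalization of the funnel-plot asymmetry check is Egger's regression test, sketched below (a hypothetical illustration; the paper reports funnel plots and p-curve analyses rather than this specific test):

```python
def egger_test(effects, std_errors):
    """Egger's regression test for funnel-plot asymmetry.

    Regresses the standardized effect (effect / SE) on precision (1 / SE).
    An intercept far from zero suggests small-study effects, a possible
    sign of publication bias.
    """
    y = [e / se for e, se in zip(effects, std_errors)]  # standardized effects
    x = [1 / se for se in std_errors]                   # precisions
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx                     # estimate of the underlying effect
    intercept = mean_y - slope * mean_x   # ~0 when the funnel is symmetric
    return intercept, slope
```

With a symmetric set of studies sharing one true effect, the intercept is near zero; systematically larger effects in smaller (high-SE) studies push it away from zero.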

By addressing publication bias, we aimed to provide a more accurate and reliable synthesis of the available evidence, enhancing the validity and generalizability of our meta-analytic findings. Nevertheless, it is essential for readers to interpret the results cautiously, considering the potential limitations imposed by publication bias and other methodological concerns.

Search Findings

A total of 98,806 studies were identified from databases, especially Scopus (n=49,643), Google Scholar (n=18,600), Science Direct (n=15,084), and Web of Science (n=11,689). Upon removing duplicate records and excluding studies that did not meet the inclusion criteria, 754 studies remained for the screening phase. After screening titles, abstracts, and full texts, 703 studies were excluded. A total of 4 additional studies were identified from the references of relevant reviews. Finally, 55 studies [ 20 - 74 ] were included in the meta-analysis. The flow diagram of the selection is shown in Figure 1 .


Characteristics of Included Studies

In 20 studies, 21,594 participants were included in the analysis of the general use of electronic media and sleep quality. The average age of the sample ranged from 9.9 to 44 years. The category of general online gaming and sleep quality included 4 studies, with 14,837 participants; the category of general smartphone use and sleep quality included 10 studies, with 5011 participants; and the category of general social media use and sleep quality included 6 studies, with 1746 participants.

These studies came from the following countries or areas: Germany, Serbia, Indonesia, India, China, Italy, Saudi Arabia, New Zealand, the United Kingdom, the United States, Spain, Qatar, Egypt, Argentina, and Portugal. The most frequently used measure of electronic media use was the time spent on it. The most frequently used measure of sleep was the Pittsburgh Sleep Quality Index.

In 35 studies, 20,122 participants were included in the analysis of the problematic use of electronic media and sleep quality. The average age of the sample ranged from 14.76 to 65.62 years. The category of problematic online gaming and sleep quality included 5 studies, with 1874 participants; the category of problematic internet use and sleep quality included 2 studies, with 774 participants; the category of problematic smartphone use and sleep quality included 18 studies, with 12,204 participants; and the category of problematic social media use and sleep quality included 11 studies, with 5270 participants. There was a study that focused on both social media and online gaming, which led to its inclusion in the analysis. These studies came from 14 countries or areas: Turkey, the United States, Indonesia, China, France, Taiwan, India, South Korea, Hong Kong, Iran, Poland, Israel, Hungary, and Saudi Arabia. The most frequently used measures of problematic electronic media use were the Internet Gaming Disorder Scale-Short Form, Smartphone Addiction Scale-Short Form, and Bergen Social Media Addiction Scale.

With respect to study quality, the 56 papers were published in 50 journals, 41 of which were indexed in Journal Citation Reports 2022 , while the remaining 9 journals were rated based on their citation counts as reported in Google Scholar. As a result, of the 56 papers included in the study, 22 papers were assigned a high rating, 18 papers were assigned a middle rating, and 16 papers were assigned a low rating. More information about the included studies is listed in Multimedia Appendix 3 [ 20 - 74 ].

Meta-Analysis

The results of the meta-analysis of the relationship between general electronic media use and sleep quality showed that electronic media use was associated with a significant decrease in sleep quality ( P <.001). The pooled effect size was 0.28 (95% CI 0.21-0.35; k =20), indicating that individuals who used electronic media more frequently were generally associated with more sleeping problems.

The second meta-analysis showed that problematic electronic media use was associated with a significant increase in sleep problems ( P ≤.001). The pooled effect size was 0.33 (95% CI 0.28-0.38; k =36), indicating that participants who used electronic media more frequently were more likely to have more sleep problems.

Moderator Analyses

First, we conducted subgroup analyses for different media or devices. The results are shown in Tables 1 and 2 . The effect of the relationship between general online gaming and sleep problems was r =0.14 (95% CI 0.06-0.22); the effect of the relationship between general smartphone use and sleep problems was r =0.33 (95% CI 0.27-0.40); and the effect of the relationship between general social media use and sleep problems was r =0.28 (95% CI 0.21-0.34). There are significant differences among these groups ( Q between =14.46; P =.001).

The effect of the relationship between problematic gaming and sleep problems was r =0.49, 95% CI 0.23-0.69; the effect of the relationship between problematic internet use and sleep problems was r =0.51 (95% CI 0.43-0.59); the effect of the relationship between problematic smartphone use and sleep problems was r =0.25 (95% CI 0.20-0.30); and the effect of the relationship between problematic social media use and sleep problems was r =0.35 (95% CI 0.29-0.40). There are significant differences among these groups ( Q between =27.37; P <.001).

We also used age, gender, and culture as moderators to conduct meta-regression analyses. The results are shown in Tables 3 and 4 . Only cultural difference in the relationship between Eastern and Western culture was significant ( Q between =6.694; P =.01). All other analyses were not significant.


All funnel plots of the analyses were symmetrical, showing no evidence of publication bias ( Figures 2 - 5 ). We also conducted p -curve analyses to see whether there were any selection biases. The results also showed that there were no biases.


Principal Findings

This study indicated that electronic media use was significantly linked with decreased sleep quality and increased sleep problems with varying effect sizes across subgroups. General use was associated with a significant decrease in sleep quality. Problematic use was associated with a significant increase in sleep problems. A significant cultural difference was also observed by the meta-regression analysis.

First, there is a distinction in the impact on sleep quality between problematic use and general use, with the former exhibiting a higher correlation strength. However, both have a positive correlation, suggesting that the deeper the level of use, the more sleep-related issues are observed. In addressing this research question, the way in which electronic media use is conceptualized and operationalized may have a bearing on the ultimate outcomes. Problematic use is measured through addiction scales, while general use is predominantly assessed by duration of use (time), leading to divergent results stemming from these distinct approaches. The key takeaway is that each measurement possesses unique strengths and weaknesses, and the pathways affecting sleep quality differ. Consequently, the selection of a measurement approach should be tailored to the specific research question at hand. The duration of general use reflects an individual’s comprehensive involvement with electronic media, and its impact on sleep quality is evident in factors such as an extended time to fall asleep and reduced sleep duration. The addiction scale for problematic use illuminates an individual’s preferences, dependencies, and other associations with electronic media. Its impact on sleep quality is evident through physiological and psychological responses, including anxiety, stress, and emotional reactions.

Second, notable variations exist in how different types of electronic media affect sleep quality. In general, the positive predictive effects of smartphone, social media, and online gaming use durations on sleep problems gradually decrease. In the problematic context, the intensity of addiction to the internet and online gaming has the most significant positive impact on sleep problems, followed by social media, while smartphones exert the least influence. On one hand, longitudinal comparisons within the same context reveal that the content and format of electronic media can have varying degrees of negative impact on sleep quality, irrespective of whether it involves general or problematic use. On the other hand, cross-context comparisons suggest that both general and problematic use play a role in moderating the impact of electronic media types on sleep quality. As an illustration, problematic use reinforces the positive impact of online gaming and social media on sleep problems, while mitigating the influence of smartphones. Considering smartphones as electronic media, an extended duration of general use is associated with lower sleep quality. However, during problematic use, smartphones serve as the platform for other electronic media such as games and social media, resulting in a weakened predictive effect on sleep quality. Put differently, in the context of problematic use, the specific type of electronic media an individual consumes on their smartphones becomes increasingly pivotal in shaping sleep quality.

Third, cultural differences were found to be significant moderators of the relationship between electronic media use and sleep problems in both our study and Carter et al [ 16 ]. Kristensen et al [ 17 ], however, did not specifically address the role of cultural differences but revealed a strong and consistent association between bedtime media device use and sleep outcomes across the included studies. Our findings showed that the association between problematic social media use and sleep problems was significantly larger in Eastern cultures. We speculate that the difference may be attributed to cultural differences in social media use patterns, perceptions of social norms and expectations, variations in bedtime routines and habits, and diverse coping mechanisms for stress. These speculations warrant further investigation to better understand the underlying factors contributing to the observed cultural differences in the relationship between social media use and sleep quality.

Fourth, age and gender were not significant moderators of the relationship between electronic media use and sleep quality. The negative effects of electronic media use are not confined to adults, however, and the role of gender differences remains unclear. Recent studies point out that electronic media use among preschoolers may result in a “time-shifting” process, disrupting their sleep patterns [ 75 ]. Similarly, the sleep patterns of children and adolescents have been reported to be adversely affected by electronic media use [ 76 - 78 ]. These findings underscore the necessity of considering age group variations in future research, as electronic media use may impact sleep quality differently across age demographics.

In conclusion, our study, Carter et al [ 16 ], and Kristensen et al [ 17 ] collectively emphasize the importance of understanding and addressing the negative impact of electronic media use, particularly problematic online gaming and smartphone use, on sleep quality and related issues. Further research is warranted to explore the underlying mechanisms and specific factors contributing to the relationship between electronic media use and sleep problems.

Strengths and Limitations

Our study, supplemented with research by Carter et al [16] and Kristensen et al [17], contributes to the growing evidence supporting a connection between electronic media use and sleep quality. We found that both general and problematic use of electronic media correlate with sleep problems; the strength of the correlation varies with the type of electronic media and with cultural factors, whereas no significant relationship was observed with age or gender.

Despite the vast amount of research on the relationship between electronic media use and sleep, several gaps and limitations still exist.

First, the inclusion criteria were restricted to English-language, peer-reviewed empirical studies published between January 2018 and October 2023. This may have led to the exclusion of relevant studies published in other languages or before 2018, potentially limiting the generalizability of our findings. Furthermore, the exclusion of non–peer-reviewed studies and conference abstracts may have introduced publication bias, as significant results are more likely to be published in peer-reviewed journals.

Second, although we used a comprehensive search strategy, the possibility remains that some relevant studies may have been missed. Additionally, the search strategies were not linked with Medical Subject Headings (MeSH) terms and may not have captured all possible electronic media types, resulting in an incomplete representation of the effects of electronic media use on sleep quality.

Third, the studies included in our meta-analysis exhibited considerable heterogeneity in sample characteristics, electronic media types, and measures of sleep quality. This heterogeneity might have contributed to the variability in effect sizes observed across studies. Although we conducted moderator analyses to explore potential sources of heterogeneity, other unexamined factors may still have influenced the relationship between electronic media use and sleep quality.
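The heterogeneity discussed above is conventionally quantified with Cochran's Q and the I² statistic. As an illustrative sketch only (this is a standard textbook computation, not the software used in this review, and the function name is our own), the calculation for a set of study effect sizes and their sampling variances looks like this:

```python
import numpy as np

def i_squared(effect_sizes, variances):
    """Cochran's Q and the I^2 heterogeneity statistic (in %) under a fixed-effect model."""
    w = 1.0 / np.asarray(variances, dtype=float)   # inverse-variance study weights
    es = np.asarray(effect_sizes, dtype=float)
    pooled = np.sum(w * es) / np.sum(w)            # fixed-effect pooled estimate
    q = np.sum(w * (es - pooled) ** 2)             # Cochran's Q
    df = len(es) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2
```

By the common rule of thumb, I² values of roughly 25%, 50%, and 75% indicate low, moderate, and high heterogeneity; identical study effects yield Q = 0 and I² = 0.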

Fourth, our meta-analysis relied on the correlation coefficient (r) as the primary effect size measure, which may not fully capture the complex relationships between electronic media use and sleep quality. Moreover, the conversion of other reported statistics into correlation coefficients could introduce additional sources of error. The correlational nature of the included studies limited our ability to draw causal inferences between electronic media use and sleep quality. Experimental and longitudinal research designs would provide stronger evidence for the directionality of this relationship.
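The conversion mentioned above typically follows the standard formulas for transforming a standardized mean difference (Cohen's d) into a correlation, with Fisher's z transform applied before pooling. The sketch below illustrates these textbook formulas; the helper names are ours, and this is not the code used in this review:

```python
import math

def d_to_r(d, n1, n2):
    """Convert Cohen's d to a point-biserial correlation r.

    The term a corrects for unequal group sizes; with n1 == n2 it equals 4.
    """
    a = (n1 + n2) ** 2 / (n1 * n2)
    return d / math.sqrt(d ** 2 + a)

def fisher_z(r):
    """Fisher's z transform of r, commonly applied before pooling correlations."""
    return 0.5 * math.log((1 + r) / (1 - r))
```

Because each such transformation carries its own approximation error, effect sizes derived this way are less precise than correlations reported directly by the primary studies.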

Given these limitations, future research should aim to include a more diverse range of studies, examine additional potential moderators, and use more robust research designs to better understand the complex relationship between electronic media use and sleep quality.

Conclusions

In conclusion, our updated meta-analysis affirms the consistent negative impact of electronic media use on sleep outcomes, with problematic online gaming and smartphone use being particularly impactful. Notably, the negative effect of problematic social media use on sleep quality appears more pronounced in Eastern cultures. This research emphasizes the need for public health initiatives to increase awareness of these impacts, particularly for adolescents. Further research, including experimental and longitudinal studies, is necessary to delve deeper into the complex relationship between electronic media use and sleep quality, considering potential moderators like cultural differences.

Acknowledgments

This research was supported by the Journalism and Marxism Research Center, Renmin University of China (MXG202215), and by funds for building world-class universities (disciplines) of Renmin University of China (23RXW195).

A statement on the use of ChatGPT in the process of writing this paper can be found in Multimedia Appendix 4.

Data Availability

The data sets analyzed during this study are available from the corresponding author on reasonable request.

Conflicts of Interest

None declared.

Multimedia Appendix 1: PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) 2020 checklist.

Multimedia Appendix 2: Search strategies.

Multimedia Appendix 3: Characteristics of included studies.

Multimedia Appendix 4: Large language model statement.

  • Brink-Kjaer A, Leary EB, Sun H, Westover MB, Stone KL, Peppard PE, et al. Age estimation from sleep studies using deep learning predicts life expectancy. NPJ Digit Med. 2022;5(1):103. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Killgore WDS. Effects of sleep deprivation on cognition. Prog Brain Res. 2010;185:105-129. [ CrossRef ] [ Medline ]
  • Lee S, Mu CX, Wallace ML, Andel R, Almeida DM, Buxton OM, et al. Sleep health composites are associated with the risk of heart disease across sex and race. Sci Rep. 2022;12(1):2023. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Prather AA. Sleep, stress, and immunity. In: Grandner MA, editor. Sleep and Health, 1st Edition. Cambridge. Academic Press; 2019;319-330.
  • Scott AJ, Webb TL, Martyn-St James M, Rowse G, Weich S. Improving sleep quality leads to better mental health: a meta-analysis of randomised controlled trials. Sleep Med Rev. Dec 2021;60:101556. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Guttmann A. Statista. 2023. URL: https://www.statista.com/topics/1536/media-use/#topicOverview [accessed 2023-06-10]
  • Hysing M, Pallesen S, Stormark KM, Jakobsen R, Lundervold AJ, Sivertsen B. Sleep and use of electronic devices in adolescence: results from a large population-based study. BMJ Open. Feb 02, 2015;5(1):e006748. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Lavender RM. Electronic media use and sleep quality. Undergrad J Psychol. 2015;28(1):55-62. [ FREE Full text ]
  • Exelmans L, Van den Bulck J. Bedtime mobile phone use and sleep in adults. Soc Sci Med. 2016;148:93-101. [ CrossRef ] [ Medline ]
  • Twenge JM, Krizan Z, Hisler G. Decreases in self-reported sleep duration among U.S. adolescents 2009-2015 and association with new media screen time. Sleep Med. 2017;39:47-53. [ CrossRef ] [ Medline ]
  • Exelmans L. Electronic media use and sleep: a self-control perspective. Curr Sleep Med Rep. 2019;5:135-140. [ CrossRef ]
  • Jniene A, Errguig L, El Hangouche AJ, Rkain H, Aboudrar S, El Ftouh M, et al. Perception of sleep disturbances due to bedtime use of blue light-emitting devices and its impact on habits and sleep quality among young medical students. Biomed Res Int. 2019;2019:7012350. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Munezawa T, Kaneita Y, Osaki Y, Kanda H, Minowa M, Suzuki K, et al. The association between use of mobile phones after lights out and sleep disturbances among Japanese adolescents: a nationwide cross-sectional survey. Sleep. 2011;34(8):1013-1020. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Smith LJ, Gradisar M, King DL, Short M. Intrinsic and extrinsic predictors of video-gaming behaviour and adolescent bedtimes: the relationship between flow states, self-perceived risk-taking, device accessibility, parental regulation of media and bedtime. Sleep Med. 2017;30:64-70. [ CrossRef ] [ Medline ]
  • Alimoradi Z, Lin CY, Broström A, Bülow PH, Bajalan Z, Griffiths MD, et al. Internet addiction and sleep problems: a systematic review and meta-analysis. Sleep Med Rev. 2019;47:51-61. [ CrossRef ] [ Medline ]
  • Carter B, Rees P, Hale L, Bhattacharjee D, Paradkar MS. Association between portable screen-based media device access or use and sleep outcomes: a systematic review and meta-analysis. JAMA Pediatr. 2016;170(12):1202-1208. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Kristensen JH, Pallesen S, King DL, Hysing M, Erevik EK. Problematic gaming and sleep: a systematic review and meta-analysis. Front Psychiatry. 2021;12:675237. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7):e1000097. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Akçay D, Akçay BD. The effect of computer game playing habits of university students on their sleep states. Perspect Psychiatr Care. 2020;56(4):820-826. [ CrossRef ] [ Medline ]
  • Alahdal WM, Alsaedi AA, Garrni AS, Alharbi FS. The impact of smartphone addiction on sleep quality among high school students in Makkah, Saudi Arabia. Cureus. 2023;15(6):e40759. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Alam A, Alshakhsi S, Al-Thani D, Ali R. The role of objectively recorded smartphone usage and personality traits in sleep quality. PeerJ Comput Sci. 2023;9:e1261. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Almeida F, Marques DR, Gomes AA. A preliminary study on the association between social media at night and sleep quality: the relevance of FOMO, cognitive pre-sleep arousal, and maladaptive cognitive emotion regulation. Scand J Psychol. 2023;64(2):123-132. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Alshobaili FA, AlYousefi NA. The effect of smartphone usage at bedtime on sleep quality among Saudi non-medical staff at King Saud University Medical City. J Family Med Prim Care. 2019;8(6):1953-1957. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Alsulami A, Bakhsh D, Baik M, Merdad M, Aboalfaraj N. Assessment of sleep quality and its relationship to social media use among medical students. Med Sci Educ. 2019;29(1):157-161. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Altintas E, Karaca Y, Hullaert T, Tassi P. Sleep quality and video game playing: effect of intensity of video game playing and mental health. Psychiatry Res. 2019;273:487-492. [ CrossRef ] [ Medline ]
  • Asbee J, Slavish D, Taylor DJ, Dietch JR. Using a frequentist and Bayesian approach to examine video game usage, substance use, and sleep among college students. J Sleep Res. 2023;32(4):e13844. [ CrossRef ] [ Medline ]
  • Bae ES, Kang HS, Lee HN. The mediating effect of sleep quality in the relationship between academic stress and social network service addiction tendency among adolescents. J Korean Acad Community Health Nurs. 2020;31(3):290-299. [ FREE Full text ] [ CrossRef ]
  • Chatterjee S, Kar SK. Smartphone addiction and quality of sleep among Indian medical students. Psychiatry. 2021;84(2):182-191. [ CrossRef ] [ Medline ]
  • Chung JE, Choi SA, Kim KT, Yee J, Kim JH, Seong JW, et al. Smartphone addiction risk and daytime sleepiness in Korean adolescents. J Paediatr Child Health. 2018;54(7):800-806. [ CrossRef ] [ Medline ]
  • Demir YP, Sumer MM. Effects of smartphone overuse on headache, sleep and quality of life in migraine patients. Neurosciences (Riyadh). 2019;24(2):115-121. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Dewi RK, Efendi F, Has EMM, Gunawan J. Adolescents' smartphone use at night, sleep disturbance and depressive symptoms. Int J Adolesc Med Health. 2018;33(2):20180095. [ CrossRef ] [ Medline ]
  • Eden A, Ellithorpe ME, Meshi D, Ulusoy E, Grady SM. All night long: problematic media use is differentially associated with sleep quality and depression by medium. Commun Res Rep. 2021;38(3):143-149. [ CrossRef ]
  • Ellithorpe ME, Meshi D, Tham SM. Problematic video gaming is associated with poor sleep quality, diet quality, and personal hygiene. Psychol Pop Media. 2023;12(2):248-253. [ CrossRef ]
  • Elsheikh AA, Elsharkawy SA, Ahmed DS. Impact of smartphone use at bedtime on sleep quality and academic activities among medical students at Al -Azhar University at Cairo. J Public Health (Berl.). Jun 15, 2023.:1-10. [ FREE Full text ] [ CrossRef ]
  • Gaya AR, Brum R, Brites K, Gaya A, de Borba Schneiders L, Duarte Junior MA, et al. Electronic device and social network use and sleep outcomes among adolescents: the EHDLA study. BMC Public Health. 2023;23(1):919. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Gezgin DM. Understanding patterns for smartphone addiction: age, sleep duration, social network use and fear of missing out. Cypriot J Educ Sci. 2018;13(2):166-177. [ CrossRef ]
  • Graham S, Mason A, Riordan B, Winter T, Scarf D. Taking a break from social media improves wellbeing through sleep quality. Cyberpsychol Behav Soc Netw. 2021;24(6):421-425. [ CrossRef ] [ Medline ]
  • Guerrero MD, Barnes JD, Chaput JP, Tremblay MS. Screen time and problem behaviors in children: exploring the mediating role of sleep duration. Int J Behav Nutr Phys Act. 2019;16(1):105. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Hamvai C, Kiss H, Vörös H, Fitzpatrick KM, Vargha A, Pikó BF. Association between impulsivity and cognitive capacity decrease is mediated by smartphone addiction, academic procrastination, bedtime procrastination, sleep insufficiency and daytime fatigue among medical students: a path analysis. BMC Med Educ. 2023;23(1):537. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Herlache AD, Lang KM, Krizan Z. Withdrawn and wired: problematic internet use accounts for the link of neurotic withdrawal to sleep disturbances. Sleep Sci. 2018;11(2):69-73. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Huang Q, Li Y, Huang S, Qi J, Shao T, Chen X, et al. Smartphone use and sleep quality in chinese college students: a preliminary study. Front Psychiatry. 2020;11:352. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Hussain Z, Griffiths MD. The associations between problematic social networking site use and sleep quality, attention-deficit hyperactivity disorder, depression, anxiety and stress. Int J Ment Health Addict. 2021;19:686-700. [ FREE Full text ] [ CrossRef ]
  • Imani V, Ahorsu DK, Taghizadeh N, Parsapour Z, Nejati B, Chen HP, et al. The mediating roles of anxiety, depression, sleepiness, insomnia, and sleep quality in the association between problematic social media use and quality of life among patients with cancer. Healthcare (Basel). 2022;10(9):1745. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Jeong CY, Seo YS, Cho EH. The effect of SNS addiction tendency on trait-anxiety and quality of sleep in university students'. J Korean Clin Health Sci. 2018;6(2):1147-1155. [ CrossRef ]
  • Karaş H, Küçükparlak İ, Özbek MG, Yılmaz T. Addictive smartphone use in the elderly: relationship with depression, anxiety and sleep quality. Psychogeriatrics. 2023;23(1):116-125. [ CrossRef ] [ Medline ]
  • Kater MJ, Schlarb AA. Smartphone usage in adolescents: motives and link to sleep disturbances, stress and sleep reactivity. Somnologie. 2020;24(4):245-252. [ CrossRef ]
  • Kharisma AC, Fitryasari R, Rahmawati PD. Online games addiction and the decline in sleep quality of college student gamers in the online game communities in Surabaya, Indonesia. Int J Psychosoc Rehabil. 2020;24(7):8987-8993. [ FREE Full text ] [ CrossRef ]
  • Kumar VA, Chandrasekaran V, Brahadeeswari H. Prevalence of smartphone addiction and its effects on sleep quality: a cross-sectional study among medical students. Ind Psychiatry J. 2019;28(1):82-85. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Lee Y, Blebea J, Janssen F, Domoff SE. The impact of smartphone and social media use on adolescent sleep quality and mental health during the COVID-19 pandemic. Hum Behav Emerg Technol. 2023;2023:3277040. [ FREE Full text ] [ CrossRef ]
  • Li L, Griffiths MD, Mei S, Niu Z. Fear of missing out and smartphone addiction mediates the relationship between positive and negative affect and sleep quality among Chinese university students. Front Psychiatry. 2020;11:877. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Li Y, Mu W, Sun C, Kwok SYCL. Surrounded by smartphones: relationship between peer phubbing, psychological distress, problematic smartphone use, daytime sleepiness, and subjective sleep quality. Appl Res Qual Life. 2023;18:1099-1114. [ CrossRef ]
  • Luo X, Hu C. Loneliness and sleep disturbance among first-year college students: the sequential mediating effect of attachment anxiety and mobile social media dependence. Psychol Sch. 2022;59(9):1776-1789. [ CrossRef ]
  • Luqman A, Masood A, Shahzad F, Shahbaz M, Feng Y. Untangling the adverse effects of late-night usage of smartphone-based SNS among university students. Behav Inf Technol. 2021;40(15):1671-1687. [ CrossRef ]
  • Makhfudli, Aulia A, Pratiwi A. Relationship intensity of social media use with quality of sleep, social interaction, and self-esteem in urban adolescents in Surabaya. Sys Rev Pharm. 2020;11(5):783-788. [ CrossRef ]
  • Ozcan B, Acimis NM. Sleep quality in Pamukkale university students and its relationship with smartphone addiction. Pak J Med Sci. 2021;37(1):206-211. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Peltz JS, Bodenlos JS, Kingery JN, Abar C. Psychological processes linking problematic smartphone use to sleep disturbance in young adults. Sleep Health. 2023;9(4):524-531. [ CrossRef ] [ Medline ]
  • Pérez-Chada D, Bioch SA, Schönfeld D, Gozal D, Perez-Lloret S, Sleep in Adolescents Collaborative Study Group. Screen use, sleep duration, daytime somnolence, and academic failure in school-aged adolescents. PLoS One. 2023;18(2):e0281379. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Przepiorka A, Blachnio A. The role of Facebook intrusion, depression, and future time perspective in sleep problems among adolescents. J Res Adolesc. 2020;30(2):559-569. [ CrossRef ] [ Medline ]
  • Rudolf K, Bickmann P, Froböse I, Tholl C, Wechsler K, Grieben C. Demographics and health behavior of video game and eSports players in Germany: the eSports study 2019. Int J Environ Res Public Health. 2020;17(6):1870. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Sami H, Danielle L, Lihi D, Elena S. The effect of sleep disturbances and internet addiction on suicidal ideation among adolescents in the presence of depressive symptoms. Psychiatry Res. 2018;267:327-332. [ CrossRef ] [ Medline ]
  • Scott H, Woods HC. Fear of missing out and sleep: cognitive behavioural factors in adolescents' nighttime social media use. J Adolesc. 2018;68:61-65. [ CrossRef ] [ Medline ]
  • Spagnoli P, Balducci C, Fabbri M, Molinaro D, Barbato G. Workaholism, intensive smartphone use, and the sleep-wake cycle: a multiple mediation analysis. Int J Environ Res Public Health. 2019;16(19):3517. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Stanković M, Nešić M, Čičević S, Shi Z. Association of smartphone use with depression, anxiety, stress, sleep quality, and internet addiction. empirical evidence from a smartphone application. Pers Individ Differ. Jan 2021;168:110342. [ CrossRef ]
  • Tandon A, Kaur P, Dhir A, Mäntymäki M. Sleepless due to social media? investigating problematic sleep due to social media and social media sleep hygiene. Comput Hum Behav. Dec 2020;113:106487. [ FREE Full text ] [ CrossRef ]
  • Wang PY, Chen KL, Yang SY, Lin PH. Relationship of sleep quality, smartphone dependence, and health-related behaviors in female junior college students. PLoS One. 2019;14(4):e0214769. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Wang Q, Zhong Y, Zhao G, Song R, Zeng C. Relationship among content type of smartphone use, technostress, and sleep difficulty: a study of university students in China. Educ Inf Technol. Aug 02, 2022;28(2):1697-1714. [ CrossRef ]
  • Wong HY, Mo HY, Potenza MN, Chan MNM, Lau WM, Chui TK, et al. Relationships between severity of internet gaming disorder, severity of problematic social media use, sleep quality and psychological distress. Int J Environ Res Public Health. 2020;17(6):1879. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Xie X, Dong Y, Wang J. Sleep quality as a mediator of problematic smartphone use and clinical health symptoms. J Behav Addict. 2018;7(2):466-472. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Yang SY, Chen KL, Lin PH, Wang PY. Relationships among health-related behaviors, smartphone dependence, and sleep duration in female junior college students. Soc Health Behav. 2019;2(1):26-31. [ FREE Full text ] [ CrossRef ]
  • Yıldırım M, Öztürk A, Solmaz F. Fear of COVID-19 and sleep problems in Turkish young adults: mediating roles of happiness and problematic social networking sites use. Psihologija. 2023;56(4):497-515. [ FREE Full text ] [ CrossRef ]
  • Zhai X, Ye M, Wang C, Gu Q, Huang T, Wang K, et al. Associations among physical activity and smartphone use with perceived stress and sleep quality of Chinese college students. Mental Health and Physical Activity. Mar 2020;18:100323. [ CrossRef ]
  • Zhang MX, Wu AMS. Effects of smartphone addiction on sleep quality among Chinese university students: the mediating role of self-regulation and bedtime procrastination. Addict Behav. 2020;111:106552. [ CrossRef ] [ Medline ]
  • Zhang MX, Zhou H, Yang HM, Wu AMS. The prospective effect of problematic smartphone use and fear of missing out on sleep among Chinese adolescents. Curr Psychol. May 24, 2021;42(7):5297-5305. [ CrossRef ]
  • Beyens I, Nathanson AI. Electronic media use and sleep among preschoolers: evidence for time-shifted and less consolidated sleep. Health Commun. 2019;34(5):537-544. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Mazurek MO, Engelhardt CR, Hilgard J, Sohl K. Bedtime electronic media use and sleep in children with autism spectrum disorder. J Dev Behav Pediatr. 2016;37(7):525-531. [ CrossRef ] [ Medline ]
  • King DL, Delfabbro PH, Zwaans T, Kaptsis D. Sleep interference effects of pathological electronic media use during adolescence. Int J Ment Health Addict. 2014;12:21-35. [ CrossRef ]
  • Kubiszewski V, Fontaine R, Rusch E, Hazouard E. Association between electronic media use and sleep habits: an eight-day follow-up study. Int J Adolesc Youth. 2013;19(3):395-407. [ FREE Full text ] [ CrossRef ]

Abbreviations

Edited by G Eysenbach, T Leung; submitted 20.04.23; peer-reviewed by M Behzadifar, F Estévez-López, R Prieto-Moreno; comments to author 18.05.23; revised version received 15.06.23; accepted 26.03.24; published 23.04.24.

©Xiaoning Han, Enze Zhou, Dong Liu. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 23.04.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.
