
April 27, 2022

Millions of research papers are published in a year. How do scientists keep up?

by Eva Botkin-Kowacki, Northeastern University

If you want to be a scientist, you're going to have to do a lot of reading.

Science is an endeavor focused on building and sharing knowledge. Researchers publish papers detailing their discoveries, breakthroughs, and innovations in order to share those revelations with colleagues. And millions of scientific papers are published each year.

Keeping up with the latest developments in their field is a challenge for researchers at all points of their careers, but it especially affects early-career scientists, as they also have to read the many papers that represent the foundation of their field.

"It's impossible to read everything. Absolutely impossible," Ajay Satpute, director of the Affective and Brain Science Lab and an assistant professor of psychology at Northeastern. "And if you don't know everything that has happened in the field, there's a real chance of reinventing the wheel over and over and over again." The challenge, he says, is to figure out how to train the next generation of scientists economically, balancing the need to read all the seminal papers with training them as researchers in their own right.

That task is only getting more difficult, says Alessia Iancarelli, a Ph.D. student studying affective and social psychology in Satpute's lab. "The volume of published literature just keeps increasing," she says. "How are scientists able to develop their scholarship in a field given this huge amount of literature?" They have to pick and choose what to read.

But common approaches to that prioritization, Iancarelli says, can incorporate biases and leave out crucial corners of the field. So Iancarelli, Satpute and colleagues developed a machine learning approach to find a better—and less biased—way to make a reading list. Their results, published last week in the journal PLOS ONE, suggest the approach also reduces gender bias in which papers get flagged as influential.

"There really is a problem about how we develop scholarship," Satpute says. Right now, scientists will often use a search tool like Google Scholar on a topic and start from there, he says. "Or, if you're lucky, you'll get a wonderful instructor and have a great syllabus. But that's going to be basically the field through that person's eyes. And so I think that this really fills a niche that might help create balance and cross-disciplinary scholarship without necessarily having access to a wonderful instructor, because not everyone gets that."

The problem with something like Google Scholar, Iancarelli explains, is that it will give you the most popular papers in a field, measured by how many other papers have cited them. If there are subsets of that field that aren't as popular but are still relevant, the important papers on those topics might get missed with such a search.

Take, for example, the topic of aggression (which is the subject the researchers focused on to develop their algorithm). Media and video games are a particularly hot topic in aggression research, Iancarelli says, and therefore there are a lot more papers on that subset of the field than on other topics, such as the role of testosterone or social aggression.

So Iancarelli decided to group papers on the topic of aggression into communities. Using citation network analysis, she identified 15 research communities on aggression. Rather than looking at the raw number of times a paper has been cited in another research paper, the algorithm determines a community of papers that tend to cite each other or the same core set of papers. The largest communities it revealed were media and video games, stress, traits and aggression, rumination and displaced aggression, the role of testosterone, and social aggression. But there were also some surprises, such as a smaller community of research papers focused on aggression and horses.

"If you use community detection, then you get this really rich, granular look at the aggression field," Satpute says. "You have sort of a bird's-eye-view of the entire field rather than [it appearing that] the field of aggression is basically media, video games, and violence."

In addition to diversifying the topics featured by using this community approach, the researchers also found that the percentage of articles with women first authors dubbed influential by the algorithm doubled in comparison to when they focused only on total citation counts. (Iancarelli adds there might be some biases baked into that result, as the team couldn't ask the authors directly about their gender identity and instead had to rely on assumptions based on the author's name, picture, and any pronouns used to refer to them.)

The team has released the code behind this algorithm so that others can use it and replicate their citation network analysis approach in other fields of research.
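That released code is not reproduced here, but the general idea can be sketched in a few lines. The example below is only an illustration of citation-community detection, not the team's actual pipeline: it assumes a hypothetical citations.csv edge list (columns citing_id and cited_id) and uses NetworkX's modularity-based community detection in place of whatever method the authors chose.

```python
# Illustrative sketch of citation-community detection (not the study's code).
# Assumes a hypothetical citations.csv with columns: citing_id,cited_id
import csv

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Papers are nodes; a citation between two papers becomes an (undirected) edge.
G = nx.Graph()
with open("citations.csv", newline="") as f:
    for row in csv.DictReader(f):
        G.add_edge(row["citing_id"], row["cited_id"])

# Group papers into communities: clusters of papers that cite one another
# (or the same core papers) far more than they cite the rest of the graph.
communities = greedy_modularity_communities(G)

# Rather than one global "most cited" ranking, surface the best-connected
# papers within each community, so small subfields are not drowned out.
for i, community in enumerate(sorted(communities, key=len, reverse=True)):
    subgraph = G.subgraph(community)
    top = sorted(subgraph.degree, key=lambda node_deg: node_deg[1], reverse=True)[:5]
    print(f"Community {i} ({len(community)} papers), top papers by within-community links:")
    for paper_id, links in top:
        print(f"  {paper_id}: {links}")
```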

For Iancarelli, there's another motivation: "I would love to use this work to create a syllabus and teach my own course on human aggression. I would really love to base the syllabus on the most relevant papers from each different community to give a true general view of the human aggression field."

Journal information: PLoS ONE

Provided by Northeastern University



Research Article

More journal articles and fewer books: Publication practices in the social sciences in the 2010’s

William E. Savage and Anthony J. Olejniczak

Affiliation: Academic Analytics Research Center (AARC), Columbus, Ohio, United States of America

Published: February 3, 2022, in PLoS ONE. https://doi.org/10.1371/journal.pone.0263410

The number of scholarly journal articles published each year is growing, but little is known about the relationship between journal article growth and other forms of scholarly dissemination (e.g., books and monographs). Journal articles are the de facto currency of evaluation and prestige in STEM fields, but social scientists routinely publish books as well as articles, representing a unique opportunity to study increased article publications in disciplines with other dissemination options. We studied the publishing activity of social science faculty members in 12 disciplines at 290 Ph.D. granting institutions in the United States between 2011 and 2019, asking: 1) have publication practices changed such that more or fewer books and articles are written now than in the recent past?; 2) has the percentage of scholars actively participating in a particular publishing type changed over time?; and 3) do different age cohorts evince different publication strategies? In all disciplines, journal articles per person increased between 3% and 64% between 2011 and 2019, while books per person decreased by at least 31% and as much as 54%. All age cohorts show increased article authorship over the study period, and early career scholars author more articles per person than the other cohorts in eight disciplines. The article-dominated literatures of the social sciences are becoming increasingly similar to those of STEM disciplines.

Citation: Savage WE, Olejniczak AJ (2022) More journal articles and fewer books: Publication practices in the social sciences in the 2010’s. PLoS ONE 17(2): e0263410. https://doi.org/10.1371/journal.pone.0263410

Editor: Joshua L. Rosenbloom, Iowa State University, UNITED STATES

Received: August 27, 2021; Accepted: January 18, 2022; Published: February 3, 2022

Copyright: © 2022 Savage, Olejniczak. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All anonymized data, scripts, and statistical results files are available from the OSF database ( https://osf.io/2x4uf/ , DOI 10.17605/OSF.IO/2X4UF ).

Funding: WES and AJO received data and computing resources from Academic Analytics, LLC ( http://www.academicanalytics.com ). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: AJO and WES are paid employees of Academic Analytics, LLC. None of the authors has an equity interest in Academic Analytics, LLC. The results presented reflect the authors’ opinions, and do not necessarily reflect the opinions or positions of Academic Analytics, LLC. Academic Analytics, LLC management had no oversight or involvement in the project and were not involved in preparation or review of the manuscript. All work was done as part of the respective authors’ research, with no additional or external funding. This does not alter our adherence to PLOS ONE policies on sharing data and materials.

Introduction

The number of scientific and scholarly journal articles published each year has been increasing for some time. Kyvik [ 1 ] estimated there was a 30% increase in scientific and scholarly publishing between 1980 and 2000. In a later study, Kyvik and Aksnes [ 2 ] noted that Web of Science records increased from 500,000 indexed articles in 1981 to 1.5 million indexed articles in 2013. In 2018 Johnson et al. [ 3 ] estimated that the number of scholarly journals grew 5–6% per year over the past decade, and that there were 33,100 peer-reviewed English language journals distributing approximately 3 million articles each year. Much less attention has been given to the scholarly production of books, and the potential relationship between increased journal article publishing and book publishing practices. The social sciences in particular are comprised of several disciplines in which scholars regularly publish both journal articles and books, representing a unique opportunity to explore the growth of journal article publications in a sample of disciplines where journal articles are not the sole preferred mode of knowledge dissemination.

In this study, we examine how disciplinary publishing practices in the social sciences have changed over a recent nine-year period at Ph.D. granting universities in the United States. We begin by reviewing two of several proposed factors that may underlie the increase in journal article publication in the social sciences. We then quantify changes in total and per capita journal article and book publication output of each discipline, as well as the rate of participation in both modes of dissemination (i.e., the percentage of scholars who have participated in journal article and book authorship over time). Both the rate of publication and the rate of participation are compared across academic age groups (early career, mid-career, and senior scholars). Specifically, we address the following questions using a large database of scholarly activity spanning several social sciences disciplines and including tens of thousands of individual scholars:

  • Has the publication strategy of social science disciplines changed such that more or fewer books and journal articles are being written now than in the recent past (in total and in the context of faculty population changes in these disciplines over time)?
  • Has the percentage of scholars actively participating in a particular publishing type changed over time (e.g., are fewer scholars authoring books, or are fewer books being published per scholar, or both)?
  • Do different faculty age groups show different publication strategies?

Changes in the social science research environment and performance-based measures

Within the context of overall growth in scientific and scholarly article publication rates, changes in social science journal article publishing have been studied by several researchers. Warren [ 4 ] observed increased publication rates among newly hired social science scholars, finding that publishing expectations are now twice as great as they were in the early 1990’s for graduates seeking an assistant professor position or assistant professors seeking promotion to associate professor. In a prior study, Bauldry [ 5 ] examined 403 new hires in sociology at 98 research universities and found assistant professors hired in either 2011 or 2012 had a median number of publications two to three times greater than new assistant professors hired in 2007. Increasing rates of co-authorship of social science journal articles have also been studied. In a study of 56 subject categories in the Web of Science: Social Science Citation Index, Henriksen [ 6 ] observed that larger increases in the share of co-authorships occur in disciplines using quantitative research methods, large data sets, and team research, as well as those with greater research specialization.

Among the suggestions that Kyvik and Aksnes [ 2 ] offered as contributing factors to the growth of scholarly publishing was the improvement of research environments and external funding. However, they focused primarily on the impact of external funding, simply noting that research environments had benefitted from the introduction of personal computers, databases, and the internet. In 2009, Lazer et al. [ 7 ] described the emergent field of computational social science “that leverages the capacity to collect and analyze data with an unprecedented breadth and depth and scale.” King [ 8 ] describes the dramatic methodological changes in computational social sciences as “from isolated scholars toiling away on their own to larger scale, collaborative, interdisciplinary, lab-style research teams; and from a purely academic pursuit to having a major impact on the world.” Since those early days, data repositories such as the Inter-university Consortium for Political and Social Research (ICPSR) and the Harvard-MIT Data Center now make large datasets and technical services available to researchers. On-campus resources for social science researchers at American universities have become widespread. For example, every member institution of the Association of American Universities (AAU) now has at least one center, institute, or program exploring computational social science research. The availability of large research grants further demonstrates the growing importance of quantitative social sciences. The U.S. National Science Foundation (NSF) Directorate of Social, Behavioral and Economic Science, for example, funds projects through a program called Human Networks and Data Science [ 9 ]. The emergence and expansion of quantitative social sciences has had a clear impact on the social science research environment. Assembly and analysis of massive databases, data verification, statistical modeling, and visualization require levels of expertise beyond a single researcher. Jones’ [ 10 ] recent analysis of all published articles in economics from 1950–2018 estimates that single-author articles ceased to be the majority of economics papers in 2005, and that co-authored papers now constitute 74% of all articles in the discipline.

Another potential factor in the growth of publishing offered by Kyvik and Aksnes [ 2 ] is the emergence and spread of performance-based research funding systems in which published output has become an important parameter in evaluation of individual scholars, their departments, and their universities. Hermanowicz [ 11 ] observed that both the university and the individual faculty member have become entrepreneurs for whom “research and publication have become the main currency in which prestige is traded.” In short, research evaluation and the conferral of prestige share the same currency: scholarly publication. The UK’s Research Excellence Framework (REF), for example, determines a large proportion of national funding for institutional research in the United Kingdom [ 12 ]. Fry et al. [ 13 ] conducted interviews in December, 2008 aimed at understanding how research assessment may influence scholarly and scientific publication in the UK. They reported a near-universal view among respondents that the publication of peer-reviewed journal articles was a fundamental disciplinary and institutional expectation, and that there was increasing institutional pressure to publish more frequently. Across disciplines, institutional emphasis was placed on peer-reviewed journal articles as the preferred output which would most contribute to their institution’s REF submissions. Additionally, emphasis was placed on collaborative research, suggesting that collaborative team projects were best-suited to REF submissions. In their review of the evolution of UK economics under the REF, Lee et al. [ 14 ] note that over the course of four research assessment exercises, 1992, 1996, 2001, and 2008, the proportion of all journal submissions appearing in Diamond’s [ 15 ] 27 core prestigious economics journals increased from 31% to 44%. Further, the percentage of journal titles in all economics departmental submissions increased from 53% in 1992 to 91% in 2008 [ 14 ].

Another analysis of REF submissions for the 1996, 2001, and 2008 REF cycles found that the volume of articles grew from 62% of submitted publications in 1996 to 75% in 2008 [ 16 ]. The increase in articles came at the expense of other publishing types: engineering submissions included fewer conference proceedings and social sciences submissions included fewer books. Evidence from the REF demonstrates that performance-based evaluation can catalyze more collaborative research and more frequent journal publication to the exclusion of other publication types. The United States does not have a national assessment framework tied to research funding, but widely consulted research evaluation data sources in the US also favor journal articles over books; the most recent National Research Council report on US doctoral programs, for example, did not include books in its tally of social science publications [ 17 ]. Influential public university rankings also omit books (e.g., QS World University Rankings) or minimize books’ weight relative to journal articles (e.g., US News and World Report) [ 18 , 19 ].

Materials and methods

Data source.

We mined the Academic Analytics, LLC (AcA) commercial database for the names, departmental affiliation(s), and year of terminal degree of tenured and tenure-track scholars (Assistant Professor, Associate Professor, and Professor titles) over 9 years (2011–2019) in the following 12 social and behavioral science fields at Ph.D. granting universities in the United States:

  • Anthropology
  • Criminal Justice and Criminology
  • Economics
  • Educational Psychology
  • Geography
  • International Affairs and Development
  • Political Science
  • Psychology
  • Public Administration
  • Public Policy
  • Social Work/Social Welfare
  • Sociology

The AcA database compiles information on faculty members associated with academic departments at 380 Ph.D.-granting universities in the United States. AcA faculty rosters are updated at least annually by manual collection from publicly available resources, supplemented by verification and submission of faculty lists from some institutions. Each academic department is manually assigned to one or more of 170 subject classifications based on the National Center for Education Statistics (NCES) Classification of Instructional Programs (CIP) code classifications [20]. A complete list of the departments included in this study and their subject classifications is publicly available (https://osf.io/2x4uf/). AcA matches scholarly publications to their authors using a semi-automated matching process. All journal articles indexed in CrossRef (https://www.crossref.org/) are ingested into AcA's data warehouse and matched to their author(s); our study includes only peer-reviewed journal articles, while other article types that are also assigned DOIs but do not necessarily represent original research are excluded (e.g., book reviews, obituaries). Harzing [21] found that CrossRef has “similar or better coverage” of publications than Web of Science and Scopus, but is less comprehensive than Google Scholar and Microsoft Academic. A CrossRef API query performed in January 2022 reveals 6,200,221 works of all types are indexed in CrossRef with a publication date in 2019. Of these works, AcA identified 367,883 unique peer-reviewed journal articles (co-)authored by faculty members at the Ph.D. granting universities in their database (i.e., in 2019, about 5.9% of the works indexed in CrossRef represent peer-reviewed journal articles by authors at the institutions covered by AcA). Works indexed by CrossRef that were not matched to scholars in the AcA database are either non-journal article types (e.g., conference proceedings, book chapters, working papers), or they were authored by scholars outside the United States or at non-Ph.D. granting US universities.
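The query itself is not shown in the paper; as a rough sketch (not the authors' exact query), a count like the one above can be approximated against the public CrossRef REST API by filtering on publication date and requesting zero records:

```python
# Sketch: count works indexed in CrossRef with a 2019 publication date.
# The filters below are an assumption; the paper does not state the exact query used.
import requests

response = requests.get(
    "https://api.crossref.org/works",
    params={
        "filter": "from-pub-date:2019-01-01,until-pub-date:2019-12-31",
        "rows": 0,  # only the total count is needed, not the records themselves
    },
    timeout=30,
)
response.raise_for_status()

total = response.json()["message"]["total-results"]
print(f"Works of all types with a 2019 publication date: {total:,}")
```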

AcA also matches academic book publications from Baker & Taylor (https://www.baker-taylor.com/) to their respective author(s), editor(s), and translator(s). Baker & Taylor is one of the most widely used book vendors among public libraries; scholarly books from 5,774 publishers catalogued by Baker & Taylor are matched to faculty members in the Academic Analytics database (the list of academic publishers is available at https://osf.io/2x4uf/). For both publication types, a 5-year window of authored publications was extracted (e.g., for the 2011 database, publications authored between 2007 and 2011 were extracted). All faculty members in departments assigned to one of the 12 social sciences disciplines were included in the sample, including those with zero articles or books in the previous 5 years.

The earliest iteration of the AcA database we extracted (2011) contains 27,447 unique faculty members affiliated with 1,476 social science departments at 267 universities that offered a social science Ph.D. degree in 2011. The most recent database (2019) contains 28,928 unique faculty members affiliated with 1,561 departments at 290 universities that offered a social science Ph.D. degree in 2019. Anonymized raw data, including faculty and department lists for each of the nine years studied, journal titles, book presses, and the crosswalk of university departments to scholarly disciplines are publicly available ( https://osf.io/2x4uf/ ).

Data analysis

All post-extract data handling, computations and statistical tests were performed in R v1.4.0 [ 22 ]. The total publication output of each discipline in each database year was tabulated as the unique number of articles and books published by scholars whose academic departments are classified within that discipline category. Each publication is counted only once per discipline, even if more than one scholar in that discipline shared authorship of that work. For each discipline and each year, we calculated the number of articles per faculty, the number of books per faculty, and the number of books per article. Changes in the number of departments in each discipline over the 9-year period may reflect the creation of new departments at the universities studied, but it may also reflect an increased scope of data collection in the AcA database. We attempted to control for the creation (or dissolution) of departments (and the possibility of increased faculty roster collection efforts) by calculating the same totals and ratios as above for the subset of departments that appear in the AcA database in all nine database years. Likewise, to explore whether changes in article and book publication are related to changing demographics within disciplines or due to changes in the publication practices of individual scholars, we calculated the same totals and ratios as above for the subset of faculty members who appear in all nine database years. The median number of authors on each article was also tabulated for each discipline in each year, to explore the growth of team authorship.
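The actual analysis scripts are available on OSF; the sketch below is only meant to make the ratios concrete. It assumes hypothetical tidy tables (faculty.csv and pubs.csv with made-up column names), not the Academic Analytics schema, and uses Python rather than the R environment named above:

```python
# Sketch of the per-discipline, per-year measures described above (illustrative only).
# Assumed inputs (not the AcA schema):
#   faculty.csv: faculty_id, department, discipline, year
#   pubs.csv:    pub_id, discipline, year, kind   (kind is "article" or "book")
import pandas as pd

faculty = pd.read_csv("faculty.csv")
pubs = pd.read_csv("pubs.csv")

# Count each publication once per discipline-year, even if several scholars in
# the discipline shared authorship of the same work.
pub_counts = (
    pubs.drop_duplicates(subset=["pub_id", "discipline", "year"])
        .groupby(["discipline", "year", "kind"])["pub_id"].count()
        .unstack("kind", fill_value=0)
)
faculty_counts = faculty.groupby(["discipline", "year"])["faculty_id"].nunique()

summary = pub_counts.assign(
    articles_per_faculty=pub_counts["article"] / faculty_counts,
    books_per_faculty=pub_counts["book"] / faculty_counts,
    books_per_article=pub_counts["book"] / pub_counts["article"],
)
print(summary)

# Robustness check described above: departments that appear in all nine database
# years (the ratios would then be recomputed on this subset).
years_per_department = faculty.groupby("department")["year"].nunique()
stable_departments = years_per_department[years_per_department == 9].index
print(f"{len(stable_departments)} departments appear in all nine years")
```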

In each database year, the proportion of each discipline's population actively engaged in a particular publication type was calculated as the percent of scholars who published at least one book in the previous five years, and the percent who published at least one journal article in the previous five years. Differences between 2011 and 2019 in the proportion of scholars who had published at least one book (or journal article) were tested for significance using the Chi-squared test.
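The tests themselves were run in R; purely as an illustration of the same comparison in Python (the counts below are invented, not the study's data), a two-by-two Chi-squared test might look like this:

```python
# Illustrative Chi-squared test of participation rates (made-up counts, not study data).
from scipy.stats import chi2_contingency

book_authors_2011, scholars_2011 = 900, 2400   # scholars with >= 1 book, 2007-2011 window
book_authors_2019, scholars_2019 = 700, 2500   # scholars with >= 1 book, 2015-2019 window

table = [
    [book_authors_2011, scholars_2011 - book_authors_2011],
    [book_authors_2019, scholars_2019 - book_authors_2019],
]
chi2, p_value, dof, _expected = chi2_contingency(table)

print(f"2011 participation: {book_authors_2011 / scholars_2011:.1%}")
print(f"2019 participation: {book_authors_2019 / scholars_2019:.1%}")
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3g}")
```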

The AcA database includes the year of terminal degree for each faculty member (typically the Ph.D., but sometimes MBA, MFA, etc.), from which we defined three academic age cohorts following [ 23 ]: early career researchers (ECR) earned their terminal degree 0–10 years before the year in which the database was compiled; mid-career researchers (MCR) earned their degree between 11 and 30 years before the database compilation year; and senior career researchers (SCR) earned their degree 31 or more years before the database compilation year. For each discipline, year, and age cohort we calculated the number of articles per faculty, the number of books per faculty, and the number of books per article. The proportion of each age cohort participating in both publication types was also calculated, and differences in the percent of the population actively engaged in each publication type was tested using the Chi-squared test. When comparisons are made with disciplines among age cohorts, the unique number of books or articles authored by that age cohort was used.
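As a small sketch of that cohort definition (column names are hypothetical, and the real analysis was done in R):

```python
# Sketch: assign ECR/MCR/SCR cohorts from year of terminal degree (hypothetical columns).
import pandas as pd

def cohort(degree_year: int, database_year: int) -> str:
    years_since_degree = database_year - degree_year
    if years_since_degree <= 10:
        return "ECR"  # early career researcher: degree 0-10 years before database year
    if years_since_degree <= 30:
        return "MCR"  # mid-career researcher: 11-30 years
    return "SCR"      # senior career researcher: 31+ years

faculty = pd.read_csv("faculty.csv")  # assumed columns: faculty_id, degree_year, year
faculty["cohort"] = [
    cohort(degree, db_year) for degree, db_year in zip(faculty["degree_year"], faculty["year"])
]
print(faculty.groupby(["year", "cohort"])["faculty_id"].nunique())
```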

Results

Population, academic department count, and publication count

The number of faculty members and academic departments in each discipline in the 2011 and 2019 database years, as well as the percent change between 2011 and 2019, appears in Table 1 . Data for all years is available at https://osf.io/2x4uf/ . In total, scholars in the 2011 dataset published 158,104 unique journal articles in 8,706 journals between 2007 and 2011. Over the same five-year period, the 2011 scholars published 17,101 unique books. Scholars in the 2019 dataset published 215,540 unique journal articles in 11,480 journals between 2015 and 2019. Over the same five-year period, the 2019 scholars published 13,102 unique books. The counts of unique journal articles and books by authors in each discipline are presented in Table 2 . While the overall number of social science scholars increased 5.4% between 2011 and 2019, the number of journal articles they produced increased at a much faster rate: 36.3%. The increase in articles published is associated with a 31.9% increase in the number of unique journal titles in which these works appear. Conversely, the overall number of books published decreased by 23.4% over the nine-year period. The declining ratio of books per journal article in each discipline over the study period is shown in Fig 1 .

Fig 1. Books per journal article in each discipline over the study period. Both books and journal articles represent five-year sums, such that the 2011 data point represents the total books published between 2007 and 2011 divided by the total articles published between 2007 and 2011, the 2012 data point represents the total books published between 2008 and 2012 divided by the total articles published between 2008 and 2012, and so on. https://doi.org/10.1371/journal.pone.0263410.g001

Table 1. https://doi.org/10.1371/journal.pone.0263410.t001

Table 2. https://doi.org/10.1371/journal.pone.0263410.t002

Growth in the number of social science faculty members over the study period (5.4%) is not uniform across disciplines. Table 1 reveals that one discipline (Educational Psychology) evinced a slight decline in the number of faculty members and the number of academic departments (-1.2%), while large population increases were observed in International Affairs and Development (39.1%), Public Administration (22.3%), and Public Policy (10.1%). The number of academic departments classified as International Affairs and Development and Public Administration also increased substantially between 2011 and 2019 ( Table 1 ). A different pattern is observed among Criminal Justice and Criminology departments. The number of departments in this discipline increased by more than 10% over the study period, while the number of faculty members in this field increased by 5.4% on par with the overall social sciences population increase.

The results presented in Table 2 are not substantively different from those computed on a subset of the data limited to only those departments appearing in all nine years of the dataset. The limited subset of departments differed from the complete set by about 1%, on average, in terms of articles per person, books per person, and books per article. A table showing results for the limited subset of departments is available at https://osf.io/2x4uf/. The similarity between the full sample of departments and the limited subset suggests that growth in the number of departments (or expanded AcA data collection efforts) does not account for the trend towards increased article publication and decreased book publication we observed in each discipline.

Changing publishing practices

In every discipline, the 5-year total of journal article publications increased more rapidly than the population of faculty members, and in all but one discipline the 5-year total of book publications decreased between 2011 and 2019 (in Public Administration, book publishing increased by 7.1%). Book publishing in International Affairs and Development also increased after 2011 but began to decrease again after 2014 and by 2019 the number of books published was identical to the 2011 number.

Table 2 shows the number of journal articles and books in 2011 and 2019 for each discipline, as well as the following ratios: articles per faculty, books per faculty, and books per article. In seven of the 12 disciplines we studied, journal articles per person increased by more than 30% from 2011 to 2019, including a more than 60% increase in both Educational Psychology and Social Work/Social Welfare. The increase in articles per person in Economics is notably lower than the other disciplines, with only a 3% increase over the study period. The next lowest increase is in International Affairs and Development (11%); unlike other fields, the year-by-year population growth in International Affairs and Development largely mirrors growth in the discipline’s journal article output over time.

A decline in books per person over the study period characterizes all twelve disciplines ( Table 2 ). Economics shows the greatest decline in books per person (-46%), while the lowest declines are in Public Administration and Educational Psychology (-12% and -14%, respectively). In all disciplines, the ratio of books per article declined by at least 31% from 2011 to 2019. In Sociology, for example, there was one book published for every 6.3 journal articles in the 2011 dataset; in the 2019 dataset, there was one book published for every 11.1 articles. The largest decline in books per article appears in Geography and Social Work/Social Welfare (-54% in both disciplines). We visualized the ratio “books per journal article” in each discipline throughout the study period ( Fig 1 ); book publications constitute a steadily decreasing portion of the total publication output in each social science discipline over this timeframe.

Table 3 shows 2011 and 2019 articles per person and books per person by age cohort. In eight of twelve disciplines, the largest increase in articles per person from 2011 to 2019 is observed among the youngest age cohort (ECR). ECRs in Economics showed the smallest increase in journal articles per person among the ECR cohorts. Book publications per person declined among SCRs in all disciplines, and books per person also declined among MCRs in all but one discipline (Educational Psychology). Books per person among ECRs decreased slightly in all but one discipline, Geography.

Table 3. https://doi.org/10.1371/journal.pone.0263410.t003

Analysis of a subset of data containing only individual faculty members that appear in all nine years of the study is available at https://osf.io/2x4uf/ . Data show that the number of journal articles authored by faculty who were present throughout the study period did increase in most disciplines, but this increase was much less than that observed for the overall study sample (e.g., faculty present throughout the study timeframe in Anthropology authored 15.4% more articles in 2019 than in 2011, but when the entire sample of faculty members is included, the increase was more than twice as great, 34.1%). This result indicates that while all faculty members are contributing to the increase in journal article production, those who joined the faculty at the research institutions in our study after 2011 (i.e., new hires) contributed disproportionately to the overall increase in journal article authorships over the study period.

Percent of faculty actively engaged in publishing books and articles

We quantified the rate of participation in each publishing type in each of the nine years studied. Results for 2011 and 2019 appear in Table 4 along with significance values from Chi-squared tests for differences in proportions of faculty who participate in a particular publishing type; results for each of the nine years are available at https://osf.io/2x4uf/ . The rate of participation was defined as the percent of all scholars in each discipline within a data year who authored at least one of that type of publication over the previous 5-year period, divided by the total number of scholars in that discipline in that year. In every discipline, the rate of participation in journal article publishing increased, while the rate of book publishing participation decreased. The changes in participation rate between 2011 and 2019 observed in Table 4 are generally less than the changes in books and articles per person ( Table 2 ). For example, although Geography saw a 54% decline in the number of books published per person, the number of faculty who have published at least one book decreased by only 5.9%. Likewise, in Psychology the number of journal articles per person increased 34% over the study period, but the percent of the population engaged in the production of journal articles increased only 2.9%.

Table 4. https://doi.org/10.1371/journal.pone.0263410.t004

Table 5 shows the percent of scholars in 2011 and 2019 who published at least one book or journal article by age cohort. In all but one discipline (Economics) the youngest cohort (ECR) had the greatest rate of participation in journal article publication in both 2011 and 2019. Likewise, in every discipline except Educational Psychology, the oldest cohort (SCR) had the greatest participation in book publication. These findings are broadly consistent with our previous analysis of publishing behavior among age cohorts [ 23 : Fig 6], where senior scholars were observed to publish more books than their younger colleagues. Table 5 also shows that rates of participation in journal article publication increased in all age cohorts in all disciplines, with three exceptions: SCRs in International Affairs and Development, Public Administration, and Public Policy all showed non-significant decreases in journal article publication participation. In six disciplines SCRs showed the greatest increase in journal article publication participation, in four disciplines the greatest increase was among ECRs, and in the remaining two disciplines MCRs showed the greatest increase in journal article publication participation.

Table 5. https://doi.org/10.1371/journal.pone.0263410.t005

Thirty-five of the 36 comparisons of book publication participation by age cohort revealed a decrease in participation rate between 2011 and 2019 (the one exception, an increase among MCRs in Educational Psychology, was not statistically significant; Table 5). In six disciplines MCRs showed the greatest decrease in book publication participation, in four disciplines the greatest decrease was among ECRs, and in the remaining two disciplines the greatest decrease was among SCRs. ECRs universally show the lowest rate of book publication participation. This is most extreme in Economics, where only 3.7% of ECRs published at least one book in the 5-year period leading to 2011, and only 2.0% of Economics ECRs published a book in the 5-year period leading to 2019.

Discussion

Individual scholars are members of various communities: academic departments, colleges, universities, and disciplines, among others. Placing the individual author in this complex social context, Nygaard [24] used an academic literacies framework to analyze research and writing. In this model, research, writing, and publishing are social practices embedded within a community, and communities create expectations for individual behavior. The researcher must decide the genre of the artifact to be produced, whether to involve other researchers in a collaborative effort, the quality of the work, the appropriate audience, and the process of how the scholarship is done. The community (department, university, discipline, etc.) establishes the parameters for those individual decisions. One of the most consequential decisions early career faculty face is deciding the venue for publishing their research. Clemens et al. [25] observe that access to book publishers is usually through cumulative advantage which accrues to senior faculty who have established a record of successful publications. Journal article publication, on the other hand, is more egalitarian, relying more on the author's tenacity to submit their work multiple times until accepted. Thus, as Harley et al. [26], Tenopir et al. [27], and Wakeling et al. [28] suggest, early career faculty members often recognize that the most advantageous strategy is to first establish their research reputations through the publication of journal articles in prestigious journals. With this background to the decisions the publishing researcher makes and the choices available, we suggest that the growing pressure to publish more—and more frequently—amidst the backdrop of community, reputation, and career stage requirements has altered the publication practices of social scientists.

Journal articles are the de facto “currency” of research in many physical, mathematical, biological, biomedical, and engineering fields [e.g., 29], and our data show that the social sciences are becoming more like those STEM disciplines in terms of publication practices. King [8] prefaced his comments on how computational research is restructuring the social sciences by noting “The social sciences are in the midst of an historic change, with large parts moving from the humanities to the sciences in terms of research style, infrastructural needs, data availability, empirical methods, substantive understanding, and the ability to make swift and dramatic progress.” Thus, the research methodologies of large parts of the social sciences are contributing to more collaborative research and an increased emphasis on journal article publication. Our analyses suggest that the emphasis on journal article publication may come at the expense of book publication and may be driven by increasing article publishing expectations on the youngest age cohort. While increased rates of journal article publication are not limited to the ECR cohort, in all but one discipline, the youngest cohort (ECR) had the greatest rate of participation in journal article publication in both 2011 and 2019. Our finding that the increase in articles per faculty member among those who appear in all nine years of the study is less than the overall increase in articles per author may be partially explained by increasing pressure to write more papers among ECRs, perhaps as a corollary to the increasingly competitive job market for professorships.

The influence of performance-based research assessment systems on faculty publishing and research decisions is also likely related to the increase in journal article production and the de-emphasis on book publication. Hicks [ 30 ] notes “….it is the form of social science scholarly publication that is evolving in response to the imposition of national research evaluation… Research evaluation and publishing in the social sciences and humanities are co-evolving.” Our data indicate that this co-evolution in the social sciences likely results in greater emphasis on large research programs conducted by teams and increased frequency of journal article publication. In every discipline we examined, the rate of participation in journal article publishing increased, while the rate of book publishing participation decreased. In general, books take more time to produce than journal articles and their impact on the community is difficult to ascertain in the short term due in part to a dearth of comprehensive book citation databases. We posit that the increased need for rapidly produced research artifacts, the growth of quantitatively focused modes of inquiry in social science disciplines, and the increasingly greater number of journal articles produced by ECRs is likely to continue favoring journal article publication in the social sciences over book publications.

There are several potential ramifications of the decrease in book publications for the social sciences as a whole and for individual social science disciplines. The U.S. market for scholarly monographs has been shrinking for several years [31]. Book publishers used to see successful print runs and sales of 2,000 copies of new books; now, annual sales of 200 copies of a new book are considered successful by some publishers [32]. Some book publishers have responded to this decline in revenues by increasing book prices as much as three- or four-fold [32]. The decline in book publications may provide some relief for acquisition librarians stretching their already depleted funds.

Declining book publications may have detrimental effects for social sciences disciplines most closely related to the humanities. Long-form scholarly publishing provides the place and space to explore a topic in detail, analyzing subjects with greater contextualization than shorter-form journal articles typically allow. Crossik [ 33 ] observes that “Writing a monograph allows the author to weave a complex and reflective narrative, tying together a body of research in a way that is not possible with journal articles or other shorter outputs.” Hill [ 34 ] further suggests that “ways of knowing” and “forms of telling” are entwined; “reducing one may diminish the richness of the other.”

Future directions and study limitations

Collaborative research and publication have become commonplace in most disciplines in the social sciences, and further studies of scholarly collaboration are likely to provide more context and depth to understanding the behaviors involved in this phenomenon. Our study aimed to quantify the disciplinary literature as a whole, rather than the number of authorships attributable to individual scholars. If an article was co-authored by more than one scholar within a discipline, the article was counted only once in the article total for that discipline. Articles per person (as reported, e.g., in Table 2) was calculated as the number of unique articles authored by scholars in that discipline, divided by the number of unique scholars in that discipline. In this way, our study design is not suited to directly address whether increased co-authorship has a causal relationship with increased journal article authorships overall. We did, however, calculate the median number of authors for each article in each calendar year between 2007 and 2019 (a table with these data appears as supplemental information: https://osf.io/2x4uf/). The median number of authors increased by 1.0 in all 12 disciplines we studied (e.g., from 2.0 to 3.0 authors per article in Anthropology, and from 3.0 to 4.0 authors per article in Geography). Increasing numbers of authors per article (see also [6, 10]), in light of our result that the number of unique articles per person is also increasing, suggests a fruitful avenue for future research to explore the relationship between “teaming” and disciplinary article production.

Our study sample was limited to research universities that offer the Ph.D. degree in the United States. Faculty employed by many non-Ph.D. granting universities also routinely publish research results, and are likely also influenced by community practices, prestige, external evaluation, and the increasing use of quantitative research methods. Future research may seek to expand the sample of universities to include those institutions. Further, we did not consider non-traditional forms of scientific and scholarly communication such as blog authorship, zine authorship, newsletters, op-ed pieces, listserv posts, performances, musical compositions, choreography, etc., nor did we consider conference proceedings and book chapters. It is possible that the decline we observed in books published (and the increase in journal articles published) does not completely capture the fullness of the shift in social science research dissemination strategies. Bibliometric data aggregation would benefit from the inclusion of more of the diversity of dissemination strategies now available to scholars.

Acknowledgments

We thank the following individuals for thoughtful comments and suggestions: R. Berdahl, P. Lange, M. Matier, T. Stapleton, G. Walker, R. Wheeler, and C. Whitacre.

References

  • 3. Johnson R, Watkinson A, Mabe M. The STM Report: An Overview of Scientific and Scholarly Publishing, fifth edition. STM: International Association of Scientific, Technical and Medical Publishers; 2018 Oct. Available: https://www.stm-assoc.org/2018_10_04_STM_Report_2018.pdf
  • 9. National Science Foundation. Human Networks and Data Science (HNDS): Program Solicitation NSF 21–514. 2021. Available: https://www.nsf.gov/publications/pub_summ.jsp?WT.z_pims_id=505702&ods_key=nsf21514
  • 17. Ostriker JP, Holland PW, Kuh CV, Voytuk JA, editors. A Guide to the Methodology of the National Research Council Assessment of Doctorate Programs. Washington, D.C.: National Academies Press; 2009. https://doi.org/10.17226/12676
  • 18. Morse R, Castonguay A. How U.S. News Calculated the Best Global Universities Rankings. 25 Oct 2021. Available: https://www.usnews.com/education/best-global-universities/articles/methodology
  • 19. Understanding the Methodology—QS World University Rankings. In: QS Top Universities [Internet]. Jun 2021 [cited 25 Oct 2021]. Available: https://www.topuniversities.com/university-rankings-articles/world-university-rankings/understanding-methodology-qs-world-university-rankings
  • 20. NCES. Introduction to the Classification of Instructional Programs: 2010 Edition (CIP-2010). 2013. Available: https://doi.org/10.4135/9781412957403.n289
  • 22. R Core Team. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing; 2020. Available: https://www.R-project.org/
  • 26. Harley D, Acord SK, Earl-Novell S, Lawrence S, King CJ. Assessing the Future Landscape of Scholarly Communication: An Exploration of Faculty Values and Needs in Seven Disciplines. Berkeley: University of California Press; 2010.
  • 33. Crossik G. Monographs and Open Access: A Report to HEFCE. HEFCE; 2015 Jan. Available: https://dera.ioe.ac.uk/21921/1/2014_monographs.pdf

Academics Write Papers Arguing Over How Many People Read (And Cite) Their Papers

Studies about reading studies go back more than two decades

Rose Eveleth, Contributor

There are a lot of scientific papers out there. One estimate puts the count at 1.8 million articles published each year, in about 28,000 journals. Who actually reads those papers? According to one 2007 study, not many people: half of academic papers are read only by their authors and journal editors, the study's authors write.

But not all academics accept that they have an audience of three. There's a heated dispute around academic readership and citation—enough that there have been studies about reading studies going back for more than two decades.

In the 2007 study, the authors introduce their topic by noting that “as many as 50% of papers are never read by anyone other than their authors, referees and journal editors.” They also claim that 90 percent of papers published are never cited. Some academics are unsurprised by these numbers. “I distinctly remember focusing not so much on the hyper-specific nature of these research topics, but how it must feel as an academic to spend so much time on a topic so far on the periphery of human interest,” writes Aaron Gordon at Pacific Standard. “Academia’s incentive structure is such that it’s better to publish something than nothing,” he explains, even if that something is only read by you and your reviewers.

But not everybody agrees these numbers are fair. The claim that half of papers are never cited comes first from a paper from 1990. “Statistics compiled by the Philadelphia-based Institute for Scientific Information (ISI) indicate that 55% of the papers published between 1981 and 1985 in journals indexed by the institute received no citations at all in the 5 years after they were published,” David P. Hamilton wrote in Science.

In 2008, a team found that the problem is likely getting worse. “As more journal issues came online, the articles referenced tended to be more recent, fewer journals and articles were cited, and more of those citations were to fewer journals and articles.” But some researchers took issue with that study, arguing that using different methods you could get quite different results. “Our own extensive investigations on this phenomenon… show that Evans’ suggestions that researchers tend to concentrate on more recent and more cited papers does not hold at the aggregate level in the biomedical sciences, the natural sciences and engineering, or the social sciences,” the authors write. This group of researchers found that plenty of old papers, for instance, were racking up readers over time.

It seems like this should be an easy question to answer: all you have to do is count the number of citations each paper has. But it’s harder than you might think. There are entire papers dedicated to figuring out how to do this efficiently and accurately. The point of the 2007 paper wasn’t to assert that 50 percent of studies are unread. It was actually about citation analysis and the ways that the internet is letting academics see more accurately who is reading and citing their papers. “Since the turn of the century, dozens of databases such as Scopus and Google Scholar have appeared, which allow the citation patterns of academic papers to be studied with unprecedented speed and ease,” the paper's authors wrote.

Hopefully, someone will figure out how to answer this question definitively, so academics can start arguing about something else. 


PublishingState.com

How Many Journal Articles Have Been Published?

Table of contents:

  • Introduction
  • Academic journal publishing models
  • The number of journal articles published
  • Trends in journal publications
  • High-impact papers
  • High-impact journals
  • Impacts on metrics and evaluation

How many journal articles have been published so far?

It’s a very intriguing question, isn’t it?

The world of academic publishing is vast and ever-expanding, and the global academic landscape is teeming with scientific and academic papers. Every year, more than 5 million research documents are published, encompassing the following materials:

  • Academic books
  • Journal articles
  • Conference papers
  • White papers and technical papers
  • Theses and dissertations
  • Academic reports

A study found that since 1996, more than 64 million academic papers have been published. These numbers continue to rise every year.


The staggering numbers highlight the explosive growth of scientific research and academic publishing over the past few decades. With new journals launching yearly and existing journals publishing more papers than ever, the total output of scholarly literature seems to increase exponentially.

But who reads all these papers? It’s estimated that, on average, slightly more than ten people read a research paper. What about journal papers?

While the published body of scientific knowledge is growing, the question remains whether it is being absorbed and utilized to its full potential.

With millions of research and academic materials published annually across thousands of academic niches, even experts in a field can struggle to keep up with the latest developments. This massive volume of published research raises important questions about the evolving world of journal publishing .

How can scholars and professionals stay current in their areas of expertise? What incentives drive the publication of so many articles? And how can we ensure that meaningful discoveries are findable amidst the sea of published papers? As publishing metrics and numbers continue to rise, it’s worth examining whether this growth in output is matched by growth in knowledge and impact.

How Many Academic Journals Are There Today?

There are more than 30,000 academic journals globally, with the number projected to increase to 33,080-34,050 journals by 2025. These journals span a wide range of fields, including science, engineering, humanities, social sciences, medicine, law, management, information technology, mathematics, business, accounting, education, and psychology.

The United Kingdom and the United States lead the world, with a combined total of over 10,000 academic journals between them, accounting for over 20% of global scholarly journal publishing.

Hundreds of new ranked journals seem to crop up yearly to accommodate the ballooning mass of publishable research.

The world of academic publishing has a variety of models that journals use to publish and disseminate their content. The two most common models are the subscription model and the open access model.

The subscription model is the traditional model of academic publishing. In this model, readers (or, more often, their institutions like universities or libraries) pay a subscription fee to access the journal’s content. The subscription can be for a single journal, a bundle of journals from the same publisher, or even a package of journals across multiple publishers.

The advantage of the subscription model is that it provides a steady stream of revenue for the publisher. However, the downside is that it restricts access to those who can afford the subscription fees, which can be quite high.

In contrast to the subscription model, open access journals allow anyone with internet access to read the articles for free. The authors (or their institutions or funders) cover the publishing costs and pay article processing charges (APCs) once their paper is accepted for publication.

The benefit of the open access model is that it promotes wider dissemination and accessibility of research findings. However, it can place a financial burden on researchers, particularly those from low-income countries or less-funded research areas.

There are also hybrid models, where a journal operates mainly under the subscription model but offers authors the option to make their articles open access for a fee. This model combines elements of both the subscription and open access models.

Moreover, there’s a growing movement towards “diamond” or “platinum” open access, where neither the reader nor the author pays. These journals are usually funded by an organization, institution, or consortium that values the free dissemination of research.

It’s important to note that regardless of the publishing model, all these journals strive to ensure rigorous peer-review processes to maintain the quality and integrity of the academic literature they publish.

Each year, over 2 million new research articles are published in more than 30,000 peer-reviewed journals across all fields of study.

With more than 2 million journal articles appearing each year and new journals constantly emerging, this immense volume of literature can be hard to wrap your head around.

The rate of publication output has also seen an upward trajectory in recent years. The compound annual growth rate of publication output was 5% over 2017 to 2020, a slight uptick from the 4% rate observed over the longer 11-year period from 2010 to 2020.
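For readers who want the arithmetic, a compound annual growth rate can be computed from a start count, an end count, and the number of yearly steps between them. The sketch below uses placeholder publication counts, not values from any cited dataset:

```python
def cagr(start_count: float, end_count: float, periods: int) -> float:
    """Compound annual growth rate over a number of yearly steps."""
    return (end_count / start_count) ** (1 / periods) - 1

# Placeholder counts: roughly 5% yearly growth across the three steps from 2017 to 2020.
print(f"{cagr(2_000_000, 2_315_000, 3):.1%}")  # ~5.0%
```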

Moreover, established journals publish way more papers yearly than they used to. Overall, there’s been a sharp increase in the number of articles published per journal. Many publish thousands of papers annually – a major change from old publishing patterns.

The world of academic literature is expanding rapidly. Making sense of millions of journal articles is no small feat. But staying informed about the research happening in your field is crucial, even in the face of information overload.

The number of academic journals continues to rise at a staggering pace, with hundreds of newly ranked journals entering publication annually. This proliferation of journals provides more opportunities for researchers to disseminate their work, but it also makes it harder to keep up with the latest publications.

In addition to more journals, existing publications are increasing the number of papers they publish each year.

According to data from the Scimago Journal Rank and Microsoft Academic Graph (MAG) datasets, the number of published papers per journal increased sharply, from a mean of 74.2 papers in 1999 to a mean of 99.6 papers in 2016. While a more recent analysis is needed, the growth pattern appears likely to continue for at least another decade.

Of all active academic journals globally, more than 100 journals publish over 1,000 papers annually, and the number continues to increase. Reputable journals that once published hundreds of articles annually are now releasing thousands.

Journals with massive resources, such as PLOS One, publish over 20,000 articles annually. This indicates a shift in publishing norms, with journals taking on higher volumes of articles and researchers producing work at faster rates.

Some experts argue the increasing trend diminishes the value of individual publications, overwhelms readers, and encourages academic publishers to make more money. Academic publishing has been labeled “greedy” for this practice.

The rise in the number of new journals and papers per journal has exponentially increased the scientific literature. While more research is being conducted and shared than ever, the overabundance of publications raises questions about the efficacy of academic metrics and best practices for disseminating quality research.

Changes in Metrics Over Time

The world of academic publishing has undergone significant transformations in recent decades. Two notable trends are the doubling of papers published in top-tier journals and the exponential growth in total papers published annually.

Over the past two decades, the number of papers published and indexed in the top database quartiles (Q1 and Q2) has also increased. This dramatic increase reflects the rise in submissions to top journals as more researchers enter the field.
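For readers unfamiliar with the Q1/Q2 labels, the sketch below shows how quartile labels are typically assigned from a ranking metric such as a journal score. The journals and scores are invented, and real databases assign quartiles within each subject category, which this toy example ignores:

```python
import math

def assign_quartiles(scores: dict[str, float]) -> dict[str, str]:
    """Label journals Q1-Q4 by ranking them on a score (highest score = Q1).

    Toy convention: the top quarter of the ranked list is Q1, the next quarter Q2,
    and so on.
    """
    ranked = sorted(scores, key=scores.get, reverse=True)
    labels = {}
    for rank, journal in enumerate(ranked):
        labels[journal] = f"Q{math.floor(4 * rank / len(ranked)) + 1}"
    return labels

# Hypothetical journals and scores.
print(assign_quartiles({"A": 3.2, "B": 1.1, "C": 0.7, "D": 0.4}))
# {'A': 'Q1', 'B': 'Q2', 'C': 'Q3', 'D': 'Q4'}
```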

What makes a quality, high-impact scholarly paper?

A quality, high-impact scholarly paper possesses several key characteristics. These include:

  • Originality and novelty. High-quality research should contribute something new to the field of study. It could be a new theory, method, data set, or empirical findings that have not been previously published.
  • Rigorous methodology. The research methodology should be sound and appropriate for the research question. This includes the design of the study, the data collection process, and the analysis of the results. The methodology should be detailed so that other researchers can replicate the study.
  • Significance and relevance. The research should be important and relevant to the field. It should address a significant problem or gap in the existing literature.
  • Clear and concise writing. The paper should be well-written and easy to understand. It should clearly state the research question, explain the methodology, present the results, and discuss the implications of the findings.
  • Valid and reliable results. The study’s findings should be valid (i.e., they accurately represent what they are supposed to measure) and reliable (i.e., they would be the same if the study were conducted again under the same conditions).
  • Peer review. High-quality papers typically undergo a rigorous peer-review process, where other experts in the field review the paper for its validity, significance, and originality.
  • Citation potential. High-impact papers are those that other researchers frequently cite. The number of citations a paper receives is often used to indicate its impact on the field.
  • Ethical considerations. Any ethical issues related to the research should be addressed. This includes obtaining informed consent from participants, ensuring confidentiality and anonymity, and avoiding conflicts of interest.
  • Transparency and openness. High-quality papers make their data, code, and other research materials available. This allows other researchers to verify the results and build upon the research.
  • Contribution to knowledge. The paper should significantly contribute to the existing body of knowledge. It should advance our understanding of a particular topic or issue.

It’s important to note that while these are general characteristics of high-quality, high-impact papers, the specific criteria can vary depending on the field of study.

However, this increase also suggests that top-tier journals have become less selective in what they publish, possibly due to commercial pressures. Some experts argue this inflation has diluted the value of appearing in elite journals.

These trends require rethinking how we evaluate academic success. Metrics like publication count and journal impact factor are becoming less meaningful amid the journal publishing boom and the continued rise in journal articles. Citation-based measures are also problematic when citing references is increasingly performative.


More holistic evaluation criteria emphasize research quality and contribution over quantity. This may involve peer assessments, evidence of real-world impact, and valuing datasets/code as research outputs. Reform is critical to ensure scholarship retains meaning and purpose.

We have learned that there are more than 30,000 academic journals globally, estimated to grow to 33,080-34,050 journals by 2025, publishing in various fields and scopes.

More than 60 million academic papers have been published in the past decade.

How many journal articles have been published out of this 60 million?

As of this writing, around 2 million journal articles are published yearly, a truly massive number. Is that good or bad? Well, that depends.

The continued growth in journal numbers and articles published significantly impacts metrics and evaluation in academia. The increase in papers published in top-tier journals suggests that they have become less selective in their publication. This could be due to commercial pressures as more researchers enter the field and submit their work to these prestigious journals. As a result, the value of appearing in elite journals may have been diluted.

Furthermore, the exponential growth in total papers published across thousands of journals has led to an overabundance of publications. This makes it difficult for scholars to keep up with the literature in their field. It has also enabled the rise of predatory journals that publish anything for a fee, undermining quality control in academic publishing.

These trends in academic publishing require reevaluating how academic success is judged. Traditional metrics like publication count and journal impact factor are becoming less meaningful in the face of the publishing boom. Citation-based measures are also problematic, as citing references may be more performative than indicative of quality or impact.

Instead, more holistic evaluation criteria that emphasize research quality and contribution over quantity are needed. This could involve peer assessments, evidence of real-world impact, and valuing datasets and code as research outputs. Such reforms are critical to ensure that scholarship retains its meaning and purpose in the age of information overload.


National Science Foundation

SCIENCE & ENGINEERING INDICATORS

Publications Output: U.S. Trends and International Comparisons


Publication Output by Country, Region, or Economy and Scientific Field

Publication output reached 2.9 million articles in 2020, with over 90% of the total from countries with high-income and upper middle-income economies (Figure PBS-1). [Note: Publication output only includes articles indexed in the Scopus database. The publication output discussion uses fractional counting, which credits coauthored publications according to the collaborating institutions or countries based on the proportion of their participating authors. Country assignments refer to the institutional address of authors, with partial credit given for each international coauthorship. As part of the data analysis, filters are employed on the raw Scopus S&E publication data to remove publications with questionable quality, which appear in what are sometimes called predatory journals (NSB Indicators 2018: Bibliometric Data Filters sidebar).] Since 1996, output has consistently grown for countries with high-income economies, such as the United States, Germany, and the United Kingdom (UK), expanding from a large base number of publications (Table SPBS-2). [Note: This report uses the World Bank (2021) country income classifications accessed in March 2021. The World Bank updates the classifications each year on 1 July; they are assigned using gross national income per capita as measured in current U.S. dollars. More information is available at https://datahelpdesk.worldbank.org/knowledgebase/articles/906519-world-bank-country-and-lending-groups.] Countries with upper-middle-income economies, such as China, Iran, Russia, and Brazil, have had a more rapid pace of growth since 1996, expanding from a relatively smaller base number of publications. Overall, the publication compound annual growth rates of countries with upper middle-income and high-income economies have been 10% and 3%, respectively, for the 25-year period covering 1996–2020 (Figure PBS-1).


S&E articles, by income group: 1996–2020

Article counts refer to publications from a selection of conference proceedings and peer-reviewed journals in S&E fields from Scopus. Articles are classified by their year of publication and are assigned to a region, country, or economy on the basis of the institutional address(es) of the author(s) listed in the article. Articles are credited on a fractional count basis (i.e., for articles produced by authors from different countries, each country receives fractional credit on the basis of the proportion of its participating authors). Data are not directly comparable to Science and Engineering Indicators 2020 ; see the Technical Appendix for information on data filters. Low-income economies are not included in this figure because of their low publication output. Data by country and income groups are available in Table SPBS-2 .

National Center for Science and Engineering Statistics; Science-Metrix; Elsevier, Scopus abstract and citation database, accessed May 2021; World Bank Country and Lending Groups, accessed March 2021.

Science and Engineering Indicators

More recently, the compound annual growth in publication output for the world was 4% from 2010 to 2020 (Table PBS-1). Growth rates vary widely by country. Among the 15 largest publication producers, countries with compound annual growth rates above the world average were Russia (10%), Iran (9%), India (9%), China (8%), and Brazil (5%); those with lower growth rates were Japan (-1%), France (-0.3%), the United States (1%), the UK (1%), and Germany (1%). [Note: It is possible that the growth rates could be influenced by fractional counting. For example, the compound annual growth rate for France using whole counting is 1%. Publication output using whole counting is available in Table SPBS-17.] The countries with low growth rates are those that built their scientific capacity decades ago and continue to maintain their scientific research. The worldwide growth of publication output, from 1.9 million in 2010 to 2.9 million in 2020, was led by four geographically large countries: China (36%), India (9%), Russia (6%), and the United States (5%) together accounted for about half the increase in publications over this time period.

S&E articles in all fields for 15 largest producing regions, countries, or economies: 2010 and 2020

na = not applicable.

The countries or economies are ranked based on the 2020 total. Article counts refer to publications from conference proceedings and peer-reviewed journal articles in S&E and indexed in Scopus (see Technical Appendix for more details). Articles are classified by their year of publication and are assigned to a region, country, or economy on the basis of the institutional address(es) of the author(s) listed in the article. Articles are credited on a fractional count basis (i.e., for articles from multiple countries or economies, each country or economy receives fractional credit on the basis of the proportion of its participating authors). Detail may not add to total because of countries or economies that are not shown. Proportions are based on the world total excluding unclassified addresses (data not presented). Details and other countries are available in Table SPBS-2 .

National Center for Science and Engineering Statistics; Science-Metrix; Elsevier, Scopus abstract and citation database, accessed May 2021.

Collectively, the top 15 countries produced 76% of the world’s publication output of 2.9 million articles in 2020 (Table PBS-1). [Note: The proportion of output attributable to the large producers is consistent whether using fractional counting, as in Figure PBS-2 and Table PBS-1, or whole counting, as in Table SPBS-17. There is a slight difference between the United States and China when looking at the whole counting totals: using whole counting for 2020, the United States had 600,053 articles, while China had 742,431. A whole counting measure allocates one full count to each country with an author contributing to the article; in fractional counting, each country receives a proportion of the count based on the number of authors from that country. For example, if an article had four authors (two from the United States, one from China, and one from Brazil), the fractional scores would be 2/4 for the United States, 1/4 for China, and 1/4 for Brazil. In this example, the difference between whole and fractional counting indicates that the United States had more authors on the example paper, compared to the number of authors from China or Brazil.] The two countries producing the most S&E publications in 2020 were China (669,744, or 23%) and the United States (455,856, or 16%) (Figure PBS-2). With the exception of Iran replacing Taiwan beginning in 2014, the top 15 producers of S&E articles have been the same over the last 10 years (NSB 2016).
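The worked example in the note above can be expressed as a short sketch contrasting whole and fractional counting. The affiliations used are the hypothetical four-author article from the note, not real data:

```python
from collections import Counter

def count_credit(articles: list[list[str]]) -> tuple[Counter, Counter]:
    """Return (whole, fractional) per-country credit for a list of articles.

    Each article is represented by the list of its authors' country affiliations.
    Whole counting: every country appearing on an article gets one full count.
    Fractional counting: each country gets credit proportional to its authors.
    """
    whole, fractional = Counter(), Counter()
    for authors in articles:
        for country in set(authors):
            whole[country] += 1
        for country, n in Counter(authors).items():
            fractional[country] += n / len(authors)
    return whole, fractional

# The hypothetical four-author article: two US authors, one from China, one from Brazil.
whole, fractional = count_credit([["United States", "United States", "China", "Brazil"]])
print(whole)       # each listed country receives one full count
print(fractional)  # United States: 0.5, China: 0.25, Brazil: 0.25
```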

S&E articles, by selected region, country, or economy and rest of world: 1996–2020

Article counts refer to publications from a selection of conference proceedings and peer-reviewed journals in S&E fields from Scopus. Articles are classified by their year of publication and are assigned to a region, country, or economy on the basis of the institutional address(es) of the author(s) listed in the article. Articles are credited on a fractional count basis (i.e., for articles produced by authors from different countries, each country receives fractional credit on the basis of the proportion of its participating authors). Data for all regions, countries, and economies are available in Table SPBS-2 .

The U.S. trend of moderate but increasing publication output varies by state. The National Science Board’s (NSB’s) State Indicators data tool provides state-level data based on each state’s doctorate population and R&D funding, including academic S&E article output per 1,000 science, engineering, and health doctorate holders in academia (NSB 2021a) and academic S&E article output per $1 million of academic S&E R&D (NSB 2021b).

The U.S. trend of publication output varies across race or ethnicity and sex, which impacts R&D careers (see sidebar Publication Output by Underrepresented Groups and Impact on R&D Careers and Indicators 2022 report “ The STEM Labor Force of Today: Scientists, Engineers, and Skilled Technical Workers ”).

Publication Output by Underrepresented Groups and Impact on R&D Careers

The National Science Board stated in its Vision 2030 report that “women and underrepresented minorities remain inadequately represented in S&E relative to their proportions in the U.S. population” (NSB 2020). These disparities have also been found in the publication of peer-reviewed articles (Hopkins et al. 2013). The National Center for Science and Engineering Statistics (NCSES) has undertaken research to examine linkages between publication output and careers in research (Chang, White, and Sugimoto forthcoming).

Matching publication output data to demographic survey data provides a key to understanding publication output in conjunction with authors’ demographic, training, and career information. Prior researchers have attempted to add author demographics using various methods, such as sex and race disambiguation algorithms (e.g., NamSor, Ginni, Ethnicolr, OriginsInfo), that estimate the probability of race or sex from given names (or, in the case of Face ++ , from images). The accuracy of these matches varies dramatically by country and field; sex disambiguation algorithms perform better for western countries and poorly for other countries, specifically in Asia and South America (Karimi et al. 2016). In addition, some scientific fields, such as astronomy and astrophysics, generally use initials rather than given names. Despite these limitations, researchers have observed sex and race disparities in publication output (Hopkins et al. 2013; Larivière et al. 2013; Marschke et al. 2018; and NSB Indicators 2018 : S&E Publication Patterns, by Gender ).

The limitations associated with the earlier approaches can be overcome using data directly collected from the authors. One such source is the NCSES Survey of Doctorate Recipients (SDR), * which provides demographic, education, and career history information from a sample of individuals with a U.S. research doctoral degree in a science, engineering, or health field (NCSES 2021). Clarivate, the architect of Web of Science (WoS), † matched SDR respondents to publication records in the WoS publication output database. The results provide demographic information, such as sex and race or ethnicity of publication authors.

These data shed light on publication output differences between groups defined by race or ethnicity and sex, by discipline, and by impacts to R&D career paths (Chang, White, and Sugimoto forthcoming). ‡ The point estimates in Figure PBS-A show the odds of pre-doctorate student publishing by ethnic group or sex relative to White students (or men, for the sex comparison) while the error bars show the confidence around that point estimate (95% confidence interval). The confidence interval is closely linked to the size of the sample. In the SDR-WoS data, the number of minorities and women receiving degrees in the population influences the sample size—and, consequently, the ability to measure odds ratios. For example, there are 3,750 women who received mathematics or statistics PhDs compared to 10,450 men ( Table SPBS-32 ). A similar issue arises for mathematics or statistics PhDs by race or ethnicity ( Table SPBS-33 ). Overall, compared to White graduates, Asian, Black, or Hispanic graduates are less likely to publish before their doctorate in biological, agricultural, and other life sciences; engineering; health sciences; and social sciences.
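For reference, the plotted quantity is an odds ratio with a 95% confidence interval. The sketch below shows only the basic calculation from a 2x2 table with a Wald interval; the counts are invented for illustration, and the report's estimates additionally come from fitted models that control for other factors:

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int) -> tuple[float, float, float]:
    """Odds ratio and Wald 95% confidence interval from a 2x2 table.

    a = group 1 published, b = group 1 did not publish,
    c = group 2 published, d = group 2 did not publish.
    """
    odds_ratio = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(odds_ratio) - 1.96 * se_log)
    upper = math.exp(math.log(odds_ratio) + 1.96 * se_log)
    return odds_ratio, lower, upper

# Invented counts only: group 1 is less likely to have a pre-doctorate publication.
print(odds_ratio_ci(120, 380, 200, 300))  # odds ratio below 1, with its 95% CI
```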

S&E pre-doctorate publishing odds ratio, by sex and selected race or ethnicity: 1995–2006

S&E doctorates include science, engineering, and health PhD candidates at U.S. research doctorate institutions. Computer sciences is not included in the figure because the odds ratio and confidence interval show no conclusive results for any demographic group or sex. Table shows the estimated odds ratios of publishing at least one article or conference proceeding during the five years before receiving a doctorate in the combined Web of Science and Survey of Doctorate Recipients database. For more detail, see Table SPBS-32 and Table SPBS-33 .

National Center for Science and Engineering Statistics, Survey of Doctorate Recipients; Clarivate, Web of Science.

Compared to men, women are less likely to publish before graduation in the biological sciences, agriculture, engineering, health sciences, physics, and social sciences. Pre-doctorate publications appear to factor into obtaining a job in which research is the primary activity. § For those with at least one pre-PhD publication, 56% reported that their first job has research as its primary activity compared to 37% of those without a publication (Chang, White, and Sugimoto forthcoming).

* A machine learning approach matches the SDR respondents to the authors of publications indexed by the Web of Science (WoS). The matching algorithm incorporates name commonality, research field, education, employment affiliations, coauthorship network, and self-citations to predict matches from the SDR respondents to the WoS.

† WoS is a bibliometric database of conference proceedings and peer-reviewed literature with English-language titles and abstracts.

‡ To predict pre-doctorate publishing propensity, separate models were fitted for each doctoral field, and the following factors from the NCSES’s Survey of Earned Doctorates were controlled: doctorate award year, type of PhD-awarding institution, source of primary support, community college experience, U.S. citizenship status at the time of degree award, level of parental education, marital status, dependents under 18 years old, disability, graduate debt, and name commonality.

§ The model controls for critical factors, such as the PhD institutions ranking as a high research institution, year of graduation, citizenship, parental degree, and student debt. The model does not measure article submissions or rejections.

Distribution of publications by field of science and region, country, or economy can indicate research priorities and capabilities. Health sciences is the largest field of science globally (25% of publications in 2020) (Table SPBS-2 and Table SPBS-10). Likely due to COVID-19, health sciences publications grew 16%, and biological and biomedical sciences publications grew 15%, from 2019 to 2020, far surpassing their previous 2009–19 compound annual growth rates of 3% for each (Table SPBS-5 and Table SPBS-10). In the United States, the European Union (EU-27), the UK, and Japan, health sciences publication output far exceeds that of any other field (Figure PBS-3). [Note: There is little difference between whole or fractional counting of publications for the large producing countries. Whole counting shows a difference for small countries with high collaboration rates because fractional counting gives them only a fraction of a point for each article, while whole counting awards them a full point (Table SPBS-17 through Table SPBS-31).] The United States, the UK, and the EU-27 have the highest proportions of articles in the social sciences of the six countries and regions shown. In China, the largest research area is engineering (24%), followed by health sciences (15%) and computer and information sciences (12%). The largest scientific field for publication output in India is computer sciences (18%). Japan has a portfolio with health sciences (32%) at the top, followed by biological and biomedical sciences (13%) and engineering (13%).

S&E research portfolios, by eight largest fields of science and by selected region, country, or economy: 2020

EU = European Union.

Articles refer to publications from a selection of conference proceedings and peer-reviewed journals in S&E fields from Scopus. Articles are classified by their year of publication and are assigned to a region, country, or economy on the basis of the institutional address(es) of the author(s) listed in the article. Articles are credited on a fractional count basis (i.e., for articles from multiple countries, each country receives fractional credit on the basis of the proportion of its participating authors). See Table SPBS-1 for countries included in the EU; beginning in 2020, the United Kingdom was no longer a member of the EU. See Table SPBS-2 for all fields of science. See Table SPBS-2 through Table SPBS-16 for data on all regions, countries, and economies and all fields of science.

There is increasing interest in measuring publication output that crosses or combines the standard scientific fields for solving boundary-defying issues, such as climate change or poverty reduction (NRC 2014, NASEM 2021). While publication output provides a potential avenue for measuring cross-disciplinary research output, there are challenges for national-level measures. (See sidebar Measuring Cross-Disciplinarity Using Publication Output .)

Measuring Cross-Disciplinarity Using Publication Output

This sidebar uses cross-disciplinarity as an envelope term that includes convergent, multidisciplinary, and interdisciplinary research because the measurement techniques for examining publication output are similar. Cross-disciplinary research includes the following:

  • Convergent research that is driven by a specific and compelling problem requiring deep integration across disciplines (NSF 2019). Convergent science is a team-based approach to problem solving cutting across fields of inquiry and institutional frontiers to integrate areas of knowledge from multiple fields to address specific scientific and societal challenges.
  • Multidisciplinary research (MDR) that “juxtaposes two or more disciplines focused on a question … [where] the existing structure of knowledge is not questioned” (NRC 2014:44).
  • Interdisciplinary research (IDR) that “integrates information, data, methods, tools, concepts, and/or theories from two or more disciplines focused on a complex question, problem, topic, or theme” (NRC 2014:44).

Efforts to use publication output to measure cross-disciplinary research yield results that are not suitable for comparison at the country level (Wagner et al. 2011; Wang and Schneider 2020). This finding is consistent with sidebars in previous Indicators reports (NSB 2010; NSB 2016; Wagner et al. 2009). This sidebar explains the ongoing methodological issues with measuring convergence, MDR, and IDR at the country level and provides potential directions for future research.

For measurement at the country level, researchers have analyzed cross-disciplinary research using various bibliometric measures. Some have used article citations (Campbell et al. 2015; Porter and Chubin 1985), coauthor fields of specialization (Porter et al. 2007), text mining of abstracts or keywords listed on each article (Del Rio et al. 2001), or network analysis (Leydesdorff and Rafols 2011). An analysis of various approaches for measuring interdisciplinarity revealed a lack of consistent measurement outcomes across scientific fields, over time, and for countries or economies (Digital Science 2016).

Measuring cross-disciplinarity is challenging because indicators that are valid in one scientific area (e.g., those based on citation counts) are not stable in another. For example, looking within the broad field of health sciences, health economics uses fewer citations, while biomedicine uses many more. When attempting to measure cross-disciplinarity for health sciences, the differences between health economics and biomedicine are, at least in part, related to different citation habits and not necessarily to differences in the cross-disciplinarity of the research.

Although research has not uncovered robust cross-disciplinary measures for countries, there are insights into the growth and influence of convergence, IDR, and MDR. Measured broadly, researchers find growth in cross-disciplinarity: “from about the mid-1980s, both natural sciences and engineering (NSE) and medical fields (MED) raised their level of interdisciplinarity at the expense of a focus on specialties” (Larivière and Gingras 2014:197). The team also found that the social sciences, as well as the arts and humanities, were the most open to collaborating with other disciplines. While cross-disciplinarity has grown, citation lags are associated with cross-disciplinary research papers. Specifically, they garner fewer than the normal number of citations for the first 3 years but pick up more citations than normal over 13 years (Wang, Thijs, and Glänzel 2015).

Recently, Digital Science prepared a report for the Research Councils of the United Kingdom (RCUK) that scanned the current literature and measurement approaches (Digital Science 2016). RCUK concluded that “no single indicator of interdisciplinarity (either MDR or IDR) analysed here should, used alone, satisfy any stakeholder. They show diverse inconsistency—in terms of change over time, difference between disciplines and trajectory for countries—that raises doubts as to their specific relevance” (Digital Science 2016:8). The RCUK report suggested that combining bibliometric IDR measures with other data, such as award information, could create a framework for expert analysis of IDR. Among the recommendations were continued exploration of text analysis and the inclusion of departmental affiliations in award information.

Similarly, the 2021 National Academies of Sciences, Engineering, and Medicine workshop on Measuring Convergence in Science and Engineering found that “using a single or even a few atomistic indicators to measure complex research activities capable of addressing societal problems is misguided” (NASEM 2021:49). Workshop participant Ismael Rafols suggested shifting from an atomistic to a portfolio approach, investigating the entire landscape that makes convergence possible.


Scopus 1900–2020: Growth in articles, abstracts, countries, fields, and journals


Handling Editor: Ludo Waltman


Mike Thelwall , Pardeep Sud; Scopus 1900–2020: Growth in articles, abstracts, countries, fields, and journals. Quantitative Science Studies 2022; 3 (1): 37–50. doi: https://doi.org/10.1162/qss_a_00177


Scientometric research often relies on large-scale bibliometric databases of academic journal articles. Long-term and longitudinal research can be affected if the composition of a database varies over time, and text processing research can be affected if the percentage of articles with abstracts changes. This article therefore assesses changes in the magnitude of the coverage of a major citation index, Scopus, over 121 years from 1900. The results show sustained exponential growth from 1900, except for dips during both world wars, and with increased growth after 2004. Over the same period, the percentage of articles with 500+ character abstracts increased from 1% to 95%. The number of different journals in Scopus also increased exponentially, but slowing down from 2010, with the number of articles per journal being approximately constant until 1980, then tripling due to megajournals and online-only publishing. The breadth of Scopus, in terms of the number of narrow fields with substantial numbers of articles, simultaneously increased from one field having 1,000 articles in 1945 to 308 fields in 2020. Scopus’s international character also radically changed from 68% of first authors from Germany and the United States in 1900 to just 17% in 2020, with China dominating (25%).


Science is not static, with the number of active journals increasing at a rate of 3.3%–4.7% per year between 1900 and 1996 ( Gu & Blackmore, 2016 ; Mabe & Amin, 2001 ). Bibliometric studies covering a substantial period need to choose a start year and be aware of changes and any anomalies during the time covered. Citations over a long period are needed in bibliometric studies of the evolution of journal ( Jayaratne & Zwahlen, 2015 ), field ( Pilkington & Meredith, 2009 ), author ( Maflahi & Thelwall, 2021 ), or national ( Fu & Ho, 2013 ; Luna-Morales, Collazo-Reyes et al., 2009 ) research impact over time. Unless constrained by a research question, the logical start year for bibliometric studies covering many years might be either the most recent date when there was a change in the character of the bibliometric database used, or the earliest year when sufficient articles were indexed according to some criteria. It is therefore useful to assess the temporal characteristics of bibliometric databases to aid decisions by researchers about when to start, particularly as some facets, including narrow fields, average citations and the presence of abstracts, are not currently straightforward to obtain from the web interfaces of citation indexes. This article focuses on one of the major citation indexes, Scopus.

Little is known about the historical coverage of the major citation indexes, other than the information reported by their owners. This typically gives overall totals rather than yearly breakdowns (e.g., Clarivate, 2021 ; Dimensions, 2021 ; Elsevier, 2021 ). Scopus currently has wider coverage of the academic literature than the Web of Science (WoS) and CrossRef open DOI-to-DOI citations, similar coverage to Dimensions, but much lower coverage than Google Scholar and Microsoft Academic ( Martín-Martín, Thelwall et al., 2021 ; Singh, Singh et al., 2021 ; Thelwall, 2018 ). Lower coverage than Google Scholar and Microsoft Academic is a logical outcome of the standards that journals must meet to be indexed by Scopus (e.g., Baas, Schotten et al., 2020 ; Gasparyan & Kitas, 2021 ; Pranckutė, 2021 ; Schotten, Meester et al., 2017 ) and WoS ( Birkle, Pendlebury et al., 2020 ). Nevertheless, non-English journals seem to be underrepresented in both Scopus and WoS ( Mongeon & Paul-Hus, 2016 ). One source of difference between WoS and Scopus is that WoS aims to generate a balanced set of journals to support the quality of citation data used for impact evaluations ( Birkle et al., 2020 ). While a larger set of journals would be better for information retrieval, a more balanced set would help citation data that is field normalized or norm-referenced within its field (e.g., adding many rarely cited journals to a single field would push existing journals into higher journal impact factor quartiles and increase the field normalized citation score of cited articles in the existing journals). Even if two databases cover the same journals they can index different numbers of articles from them, due to errors or different rules for categorizing a document as an article ( Liu, Huang, & Wang, 2021 ). Overall, while Dimensions provides the most free support for researchers ( Herzog, Hook, & Konkiel, 2020 ), Scopus seems to be the largest quality-controlled citation index and also covers substantially more years than Dimensions or the WoS Core Collection: It is therefore a logical choice for long-term investigations. No study seems to have analyzed the historical coverage of any citation index, however, with the partial exception of the WoS Century of Science specialist offering ( Wallace, Larivière, & Gingras, 2009 ).

Some date-specific information is known about Scopus. It was developed by Elsevier from 2002, released in 2004 ( Schotten et al., 2017 ), and has since incorporated many articles from before its start date. In the absence of systematic evidence of Scopus coverage changes over time, Scopus-based studies needing long-term data have often chosen 1996 as a starting point in the originally correct belief ( Li, Burnham et al., 2010 ) that there was a change in Scopus in this year (e.g., Budimir, Rahimeh et al., 2021 ; Subbotin & Aref, 2021 ; many Thelwall papers). In 2015, Scopus recognized 1996 as a watershed year for coverage and added 4 million earlier articles and associated references into the system ( Beatty, 2015 ). Because of this update, 1996 may no longer be a critical year. The current article explores whether 1996 or any other year represents a shift in Scopus coverage and reports a selection of more fine-grained information to help researchers using Scopus for historical data, by allowing them to pick a starting year with sufficient data for their study.

The indexing of abstracts is also important. Abstracts in academic articles typically summarise the parts of an article, usually reusing sentences from the main body ( Atanassova, Bertin, & Larivière, 2016 ). Some journals require a structured format, ensuring that background, methods, results, and implications are all covered in a simple format ( Nakayama, Hirai et al., 2005 ). They are needed for studies that attempt to predict future citation counts ( Stegehuis, Litvak, & Waltman, 2015 ), or to map the development of fields or their evolution based on the terms in article titles, abstracts, and keywords (e.g., Anwar, Bibi, & Ahmad, 2021 ; Blatt, 2009 ; Kallens & Dale, 2018 ; Porturas & Taylor, 2021 ). The proportion of articles with abstracts is also relevant for the scope of keyword-based literature searches that cover many decades (e.g., Sweileh, Al-Jabi et al., 2019 ), since the searches will be less effective for articles without abstracts, if these are more common in some years. Abstracts have been mainly studied for their informational role (e.g., Jimenez, Avila et al., 2020 ; Jin, Duan et al., 2021 ) or writing style ( Abdollahpour & Gholami, 2018 ; Kim & Lee, 2020 ).

Abstracts are known to have changed in format over time and individual journal policies have evolved. For instance, although Scopus has indexed Landscape History since 1979, the first abstract from this journal was in 1989 for the article, “Cairns and ‘cairn fields’; evidence of early agriculture on Cefn Bryn, Gower, West Glamorgan,” although this seemed to be an author innovation, starting their article with a short section entitled “Summary” rather than a journal-required or optional abstract. From browsing the journal, 1997 seems to be the year when abstracts were first mandatory, representing a policy change. In some fields, abstracts were published separately to articles in dedicated abstracting periodicals (e.g., Biological Abstracts ) so that potential readers would have a single paper source to help them quickly scan the contents of multiple journals ( Manzer, 1977 ). For example, early mathematics papers tended not to have abstracts, but very short summaries were instead posted by independent reviewers in publications such as Zentralblatt MATH ( Teschke, Wegner, & Werner, 2011 ) and Mathematical Reviews ( Price, 2017 ). Some journals also had sections dedicated to summarizing abstracts of other journals’ contents (e.g., Hollander, 1954 ). Despite the value and different uses of abstracts, no study seems to have assessed the historical prevalence or length of abstracts associated with articles in any major database.

The coverage of bibliometric databases is a separate issue from their citations, although the two are connected. Nothing is known about trends in average citation counts for Scopus, but a study of references in the WoS Century of Science 1900–2006 found an increasing number of citations per document, from less than 1 in 1900 to an arithmetic mean of 8 (Social Sciences), 10 (Natural Sciences and Engineering), and 22 (Medicine) in 2006, based on a 10-year citation window ( Wallace et al., 2009 ). Changes over time in the types of journals cited by articles in the WoS have also been investigated, showing reduced concentration ( Larivière, Gingras, & Archambault, 2009 ).

Driven by the above issues, the goal of the current paper is to present a descriptive analysis of Scopus 1900–2020 in terms of the annual numbers of articles published as well as its field coverage, citation counts, and abstracts.

Journal article records were retrieved from the Scopus API with one query per narrow field, of the form:

SUBJMAIN(1213) AND DOCTYPE(ar) AND SRCTYPE(j)

Article records for 1900–1995 were downloaded in September 2021.

Article records for 1996–2013 were downloaded in November–December 2018.

Article records for 2014–2020 were downloaded in January 2021.

The data was checked for consistency by generating time series for the number of articles per year, per narrow field. Some gaps were identified due to software errors, and these were filled by redownloading the missing data for the narrow field and year within 2 months of the original download date.
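A minimal sketch of that kind of consistency check, assuming the downloaded records have already been aggregated into per-field, per-year counts (the field label and counts below are placeholders):

```python
def find_gaps(counts: dict[str, dict[int, int]], years: range) -> list[tuple[str, int]]:
    """Flag (field, year) cells with zero or missing downloaded articles.

    counts maps a narrow-field label to {year: number_of_downloaded_records}.
    Flagged cells would be redownloaded and compared against the original pull.
    """
    return [(field, year)
            for field, by_year in counts.items()
            for year in years
            if by_year.get(year, 0) == 0]

# Placeholder counts for one example narrow field; 2019 is missing and gets flagged.
counts = {"example narrow field": {2018: 5120, 2019: 0, 2020: 5480}}
print(find_gaps(counts, range(2018, 2021)))  # [('example narrow field', 2019)]
```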

2.2. Analysis

Purely descriptive data is presented, matching the purpose of this article. Since some bibliometric studies use article abstracts, statistics are reported for articles containing abstracts as well as for all articles.

It is not straightforward to identify whether an article has an abstract, so a rule was generated to estimate this. Some articles in Scopus have abstracts indexed as part of their record, although they may not always be called “abstract” in the published article (e.g., “Summary”). These abstracts typically include copyright statements and sometimes only a copyright statement is present in the abstract field. As in previous papers from the authors’ research group (e.g., Fairclough & Thelwall, 2021 ), a heuristically chosen 500 character minimum (about 80 words) was set as indicative of a reasonably substantial abstract that is unlikely to be purely a copyright statement.
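A minimal sketch of that rule, assuming the indexed abstract text is available as a string from the Scopus record:

```python
MIN_ABSTRACT_CHARS = 500  # heuristic threshold from the paper, roughly 80 words

def has_substantial_abstract(abstract: str | None) -> bool:
    """True when the abstract field is long enough to be more than a copyright line."""
    return abstract is not None and len(abstract.strip()) >= MIN_ABSTRACT_CHARS

print(has_substantial_abstract("© 2019 Example Publisher. All rights reserved."))  # False
```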

As an indicator of field breadth, data is reported for the number of narrow fields containing a given number of articles. Since a field may be large due to a single journal, data is also reported for the number of fields containing a given number of journals, as a rough indicator of diversity of content (although individual megajournals can also have diverse content: Siler, Larivière, & Sugimoto, 2020 ).

Average citations per year are reported with both the traditional arithmetic mean and the more precise geometric mean for typical highly skewed citation count data ( Fairclough & Thelwall, 2015 ; Thelwall, 2016 ). The citation count data is not symmetrical (e.g., equally distributed on either side of the mean) but is highly skewed: While most articles have zero or few citations, so that their citation counts are slightly less than the mean, the citation counts of a small number of highly cited articles are far greater than the mean ( Price, 1976 ; Seglen, 1992 ). For example, the skewness is enormous at 107 for the 2004 citation counts and even larger for recent years (387 in 2020), whereas the skewness of the normal distribution is 0.
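A sketch of the two averages and the skewness measure, assuming the common convention for citation data of taking the geometric mean of counts offset by one so that zero-cited articles can be included; the citation counts used are invented:

```python
import math
import statistics

def citation_averages(citations: list[int]) -> tuple[float, float]:
    """Arithmetic mean and offset geometric mean, exp(mean(ln(1 + c))) - 1."""
    arithmetic = statistics.fmean(citations)
    geometric = math.exp(statistics.fmean(math.log(1 + c) for c in citations)) - 1
    return arithmetic, geometric

def skewness(values: list[int]) -> float:
    """Moment-based skewness: mean of cubed standardized deviations (0 for a normal distribution)."""
    mean, sd = statistics.fmean(values), statistics.pstdev(values)
    return statistics.fmean(((v - mean) / sd) ** 3 for v in values)

# Invented counts: one highly cited article pulls the arithmetic mean far above the typical article.
cites = [0, 0, 1, 2, 3, 150]
print(citation_averages(cites))  # arithmetic mean 26.0, geometric mean about 2.9
print(skewness(cites))           # strongly positive
```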

The main results are introduced and discussed below. Additional graphs and brief discussion are in the online supplement and the full data behind all graphs is also online, both on FigShare at https://doi.org/10.6084/m9.figshare.16834198 .

3.1. Total Number of Articles

The number of articles in Scopus shows exponential growth from 1900 to at least 2020 ( Figure 1 ). The extent to which the trend reflects the technical limitations of Scopus and its indexing policy rather than the amount of scholarly publishing is unclear because not all journals qualify for indexing (e.g., Mabe & Amin, 2001 ). The kink in the logarithmic line in the year that Scopus launched, 2004, suggests that its expansion accelerated more quickly after then. More specifically, the initial release in 2004 and subsequent backfilling projects were surpassed by subsequent expansions of additional journals. The graph from 1970 can be compared to the equivalent WoS volume of coverage in response to the DT=(Article) query. WoS does not have a kink in 2004, suggesting that this is a Scopus phenomenon (WoS has a similar exponentially increasing shape, with sudden increases in 1996, 2015, and 2019: See the online supplement for a graph).

Figure 1. Number of documents of type journal article indexed in Scopus, as retrieved by its API. There are separate lines for all articles, all articles in each field (i.e., counting an article n times if it is in n narrow fields), and the number of articles with 500+ character abstracts.

Both world wars resulted in decreases in coverage, presumably due to many scientists and journal staff switching to unpublished military research or service (e.g., Hyland, 2017 ). Conditions were described as “extremely difficult” for journal publishing in the second world war ( Anonymous, 1944 ) and there would also have been problems with international transport for printed journals. For example, the number of Nature articles per year indexed in Scopus decreased temporarily during both world wars. At the same time, war created the need for new types of research, leading to the emergence of new fields, such as occupational medicine ( Smith, 2009 ) and operational research ( Molinero, 1992 ), but this did not immediately translate into expanded academic publishing overall.

3.2. The Proportion of Articles with Abstracts

An example of an indexed abstract that begins with a copyright statement is the following: “© 2019 Brill Academic Publishers. All rights reserved. This paper presents the new and actually the first diplomatic publication of the unique 16th-century copy of the Church Slavonic Song of Songs translated from a Jewish original, most likely not the proper Masoretic Text but apparently its Old Yiddish translation. This Slavonic translation is extremely important for Judaic-Slavic relations in the context of literature and language contacts between Jews and Slavs in medieval Slavia Orthodoxa.” (Grishchenko, 2019).

Figure 2. The percentage of Scopus journal articles with abstracts of at least 500, 1,000, or 2,000 characters.

A 1,000-character abstract has about 160 words and these longer abstracts have become increasingly common. In contrast, long abstracts with at least 2,000 characters and about 320 words are still rare, accounting for only 10% of articles in 2020.

The increasing percentage of articles with nontrivial abstracts presumably reflects their increasing necessity in scientific research due to their role in attracting readers (and hence citations for the publishing journal). The trend found here may also partly reflect Scopus ingesting early sources that omitted abstracts, although no evidence was found for this as a cause. In contrast, some of the few early abstracts indexed by Scopus were not part of the original article. For example, some early psychology articles (e.g., Pressey, 1917 ) had abstracts attached to them in Scopus that apparently originated from APA Psycnet (e.g., https://doi.org/10.1037/h0070284 ) and may have been extracted by PsycInfo from early psychology abstracting journals (e.g., Psychological Abstracts ). Thus, the early results may partly reflect retrospective attempts to add abstracts. One early journal with genuine abstracts was the Journal of the American Chemical Society , which allowed articles to have a separate section at the end entitled Summary . While this could be interpreted as part of the article, it has a different heading format and could reasonably be classed as an abstract. At least one author conceived the summary as being separate from the article, stating, “The foregoing article may be summarized as follows:” ( Clark, 1918 ).

In 2020 the median abstract length was 1,367 characters or 200 words. This median is presumably partly due to some journals having a 200-word abstract length limit in their guidelines for authors (e.g., Quantitative Science Studies , Nature Scientific Reports , most Royal Society journals, many Wiley journals).

3.3. Narrow Field Coverage

Scopus has over 100 narrow fields with some articles for 1900, with the number of narrow fields increasing over time (Figure 3). The increasing shapes of the lines reflect Scopus narrow fields having uneven sizes, with most growing as the database grows overall. The number of narrow fields in Scopus is relevant for studies that attempt to present a broad picture of science. It is not clear, however, whether the increasing number of substantial narrow fields reflects the greater coverage of Scopus or increased specialization in science, which makes long-term cross-science trends particularly difficult to analyze.

Figure 3. Number of Scopus narrow fields with specified minimum numbers of articles.

Almost all Scopus narrow fields included few journals (<10) until after the Second World War, when the number of narrow fields with at least 10 different journals began to rise from 25 (Figure 4). By 2020, most narrow fields included at least 100 different journals.

Figure 4. Number of Scopus narrow fields with specified minimum numbers of different journals.

3.4. Number of Journals and Average Journal Size

Scopus indexed few journals in 1900, with growth starting after the Second World War (or, if only articles with 500+ character abstracts are included, at the end of the 1960s) (Figure 5). Surprisingly, the growth in the number of journals slowed and then stopped by 2020, perhaps because the increasing number of general or somewhat general megajournals (Siler et al., 2020) adequately fills spaces that new niche journals might previously have occupied. The journal count for 2020 may also increase as back issues of new journals are added in 2021 and afterwards.

Figure 5. Number of different journals in Scopus by year.

The number of articles per journal fluctuated considerably between 1900 and 1980, with apparently thinner journals during both world wars (Figure 6). From 1980, journals seemed to grow in average size, perhaps aided by online-only journals without print limits on the annual number of articles. The apparent accelerated growth after 2010 is presumably due to increases in the number and size of online-only megajournals, starting in 2006 with PLOS ONE (Domnina, 2016), which had 230,518 articles in Scopus by 2020. The 10 largest journals in Scopus in 2020 were all arguably megajournals (Scientific Reports, IEEE Access, PLOS ONE, Sustainability, International Journal of Environmental Research and Public Health, Applied Sciences, International Journal of Molecular Sciences, Science of the Total Environment, Sensors, Energies), with only Science of the Total Environment existing before PLOS ONE. Megajournals have also expanded into more specialist roles, impinging on multiple fields (Siler et al., 2020). These combined factors seem likely to explain the tripling of the average number of articles per journal between 1980 and 2020.

Figure 6. The average (mean) number of articles per journal in Scopus.

3.5. International Coverage (Authorship)

The national character of Scopus has changed dramatically over the 121 years covered (Figure 7). Initially, over two-thirds of first authors with known country affiliations were from the United States and Germany (Figure 8), but by 2020 China had substantially more articles than these two combined, and India had the third most articles (Figure 8). The number of articles with country affiliations dropped substantially during the Second World War, although the cause is unknown (e.g., Scopus indexing discrepancies, journal policy changes, or scientists omitting affiliations). Germany's contribution to the international literature dropped dramatically during both world wars, presumably because the wars cut it off from the publishing houses of the United Kingdom and United States. Germany's decline in the 1930s may also have been partly due to the anti-Semitic policies of the Nazi party disrupting scholarship and causing a mass exodus of skilled researchers (e.g., in maths: Siegmund-Schultze, 1994).

Figure 7. The percentage of Scopus articles with first author from the 12 countries with the most articles. Articles in multiple narrow fields are counted once for each narrow field.

Figure 8. The percentage of Scopus articles with first author from the 12 countries with the most articles, excluding articles where the first author country is unknown (i.e., changing only the denominator from the previous graph). Articles in multiple narrow fields are counted once for each narrow field.

3.6. Average Citation Counts

Average citation counts are much lower for older articles (Figure 9), presumably because of a combination of factors:

  • shorter reference lists in older papers;
  • a tendency to cite newer research in the digital age due to electronic searching, online first, and preprint archives;
  • fewer references in older papers mentioning journal articles; and
  • greater technical difficulty in matching citations to articles in older journals.

Figure 9. Average (arithmetic and geometric mean) citation counts for Scopus journal articles, by year. Citation data for 1900–1995 is from September 2021, for 1996–2013 is from December 2018, and for 2013–2020 is from January 2021.

The results are limited by the dates of the searches conducted and will be changed by any Scopus retrospective coverage increase. There is a small discrepancy between the total number of journal articles analyzed here (56,029,494) and the 56,391,519 reported by the Scopus web interface for the corresponding query, DOCTYPE(ar) AND SRCTYPE(j) AND PUBYEAR>1899 AND PUBYEAR<2021. The missing 362,025 journal articles seem too few (0.6%) to influence the analysis. The difference may derive partly from minor expansions of Scopus 1996–2013 after 2018, such as by adding the back catalogues of journals first indexed after 2018, especially megajournals, or by fixing indexing inconsistencies, such as reclassifying some documents as journal articles. There may also be technical issues with the API availability or processing that the consistency checks did not find.

An interpretation limitation for the analysis of abstracts is that it is not clear whether Scopus indexes article abstracts comprehensively when they exist. No tests were performed to check whether articles without abstracts in Scopus had abstracts elsewhere, so this is unknown. One case of the opposite was found by accident: an abstract in Scopus that appeared to have been written after the article itself and attached to its record by a service (PsycInfo) that presumably supplied the information to Scopus. Other sources of abstracts that could be compared with Scopus to check for this include Crossref (only publisher-supplied information, not always including abstracts; Waltman, Kramer et al., 2020), PubMed (biomedical science; e.g., Frandsen, Eriksen et al., 2019), and Microsoft Academic (soon to be discontinued; Tay, Martín-Martín, & Hug, 2021).

The following recommendations may help researchers designing long-term bibliometric analyses with Scopus:

  • Choose a starting year that is a watershed for the field(s) investigated, if relevant, and report any anomalies identified above during the period that might influence the results. All the data behind the graphs is online at https://doi.org/10.6084/m9.figshare.16834198 .
  • Set thresholds for the minimum number of articles, articles with abstracts, or average citation counts for the purposes of the study and use the graphs above to select the earliest year above the thresholds.
  • If conducting a science-wide or international study, set thresholds for internationality or national field coverage and use the graphs above to select the earliest year above the thresholds. Also carefully consider the implications of the increasingly wide coverage of Scopus for more recent years.
  • Explicitly acknowledge that the nature of the journal literature has changed during the years of the study in ways that cannot fully be accounted for, such as the constantly expanding number and (for most periods) size of journals, and the changing international composition of authors.
  • If using citation counts from before 2004, acknowledge that long-term trends will be influenced by lower average citations for earlier years, whether using a fixed citation window or counting citations to date. Lower level biases may also influence other years, however, as the publishing process evolves (e.g., speed, indexing).

Mike Thelwall: Methodology, Writing—Original draft, Writing—Review & editing. Pardeep Sud: Writing—Review & editing.

The authors have no competing interests.

This research was not funded.

The counts underlying the graphs are in the Supplementary material: https://doi.org/10.6084/m9.figshare.16834198 .


The growth of scientific publications in 2020: a bibliometric analysis based on the number of publications, keywords, and citations in orthopaedic surgery

  • Published: 01 August 2021
  • International Orthopaedics, Volume 45, pages 1905–1910 (2021)


  • Jing Sun
  • Andreas F. Mavrogenis
  • Marius M. Scarlat


Introduction

Science has grown since the mid-1600s. Three essential growth phases in the development of science have been identified: annual growth of less than 1% up to the middle of the eighteenth century, 2% to 3% up to the period between the two world wars, and 8% to 9% through to 2010 [1]. Growth in science is driven by the publication of novel ideas and experiments, usually in peer-reviewed journals. Currently, the number of papers published in journals, social media, and mass media is increasing exponentially, with growth rates rising every decade.

Surgeons perform operations, complete hospital paperwork, and additionally do research to improve clinical practice and the well-being of their patients, as well as to promote their own career, personal reputation, income, and institutional or university position. Publication activity is therefore time consuming, can cause considerable anxiety, and may be seen as a burden by young doctors who would rather spend that time operating. In orthopaedics, surgeons need to refocus some of their time and energy towards communication and constructive research.

The pandemic was a special period in which medical administrations, governments, and health-care payers were overwhelmed by public health priorities, and "unnecessary" surgery and medical care were therefore postponed.

By observing the flow of manuscripts within medical journals in 2020, we realised that the number of submissions had increased dramatically. The media played a key role in promoting public health and influencing debate on health issues. Mass media coverage of the COVID-19 pandemic was exceptional, with more than 180,000 articles published each day in 70 languages from March 8 to April 8, 2020. One may well wonder whether such massive media attention has ever happened in the past and whether it has proven to be beneficial, or even appropriate [2].

Before 2020, International Orthopaedics received fewer than 3000 papers per year for consideration, of which approximately 400 were published. The number of submissions rose to 3600 in 2020. A large number of papers analysed the new sanitary situation as perceived in orthopaedic surgery and traumatology. Other papers were retrospective clinical studies based on registry data or radiological evidence, studies that did not require the physical presence of patients.

This unusual rise in the volume of submissions encouraged us to perform this study measuring the dynamics and growth of the orthopaedic literature in 2020, based on the published papers, their keywords, and their citations.

Material and methods

Production analysis of orthopaedic literature during the pandemic.

A database literature search was performed on June 7, 2021 using the PubMed and Embase search engines. Only journal articles were included. Publications from the most recent four years were retrieved from both databases, and the metadata were pooled and de-duplicated using the software "Endnote 20" (Camelot UK Bidco Limited—Clarivate, UK). The results were sorted by publication year, and the number of papers per year was counted for analysis.
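A minimal sketch (an assumed workflow in Python, not the authors' EndNote procedure) of how pooled PubMed and Embase records could be de-duplicated before counting papers per year; the record fields `doi`, `title`, and `year` are hypothetical.

```python
def normalise_title(title: str) -> str:
    """Lower-case and keep only alphanumerics so near-identical titles match."""
    return "".join(ch for ch in title.lower() if ch.isalnum())

def deduplicate(records):
    """records: list of dicts with hypothetical keys 'doi', 'title', 'year'.
    Prefer the DOI as the duplicate key; fall back to a normalised title."""
    seen, unique = set(), []
    for rec in records:
        key = (rec.get("doi") or "").lower() or normalise_title(rec.get("title", ""))
        if key in seen:
            continue  # duplicate of a record already kept
        if key:
            seen.add(key)
        unique.append(rec)
    return unique

def counts_per_year(records):
    """Count the de-duplicated papers per publication year."""
    counts = {}
    for rec in records:
        counts[rec["year"]] = counts.get(rec["year"], 0) + 1
    return dict(sorted(counts.items()))
```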

Characteristics and thematic analysis of orthopaedic literature during the pandemic

The Web of Science (WoS; Clarivate Analytics, Philadelphia, USA) platform (database: SCI-Expanded) was used for a literature search on June 7, 2021. Eighty-two (82) journal titles in the categories "orthopedics" and "orthopaedics" were selected from the 2019 Journal Citation Reports (JCR) [3] and used as search terms, limited to the publication name. The journal titles, combined with the "OR" operator, were entered into the platform's search window with the "Publication Name" index selected, and all articles from the 82 journals were retrieved.
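As an illustration of this kind of query construction (a sketch only; the journal names shown are placeholders rather than the full list of 82 JCR titles), the Python fragment below combines journal titles with the OR operator into a single publication-name search string.

```python
# Placeholder titles; the study used the 82 JCR "orthopedics" journals.
journals = [
    "International Orthopaedics",
    "BMC Musculoskeletal Disorders",
    "Journal of Bone and Joint Surgery",
]

def build_publication_name_query(titles):
    """Join quoted journal titles with OR for a publication-name search."""
    return " OR ".join(f'"{t}"' for t in titles)

print(build_publication_name_query(journals))
# "International Orthopaedics" OR "BMC Musculoskeletal Disorders" OR "Journal of Bone and Joint Surgery"
```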

The papers were included if (i) they were published in the 82 orthopaedic journals mentioned above and (ii) they were published from 2020 to date. Editorials, meeting abstracts, letters, corrections, proceedings, biographical productions, book reviews, news, retraction announcements, and reprints were excluded from the present analysis.

After literature retrieval, the metadata were downloaded and analysed using "biblioshiny", an application that provides a web interface to the R package bibliometrix (version 3.1, University of Naples Federico II, Via Cintia, I-80126, Naples, Italy) and supports science mapping analysis with the main functions of the package. The data were imported into the software and converted to a frame collection, and the converted metadata were then analysed in terms of documents, sources, and conceptual structure to reveal topic trends. The keywords used were selected from the MeSH thesaurus; MeSH (Medical Subject Headings) is the National Library of Medicine controlled vocabulary used for indexing articles for PubMed. Subgroup analyses on "pandemics", "sports and arthroscopy", "arthritis", "shoulder and elbow", and "spine" were performed by the same method.

Rise in production of orthopaedic literature

A total of 68,311 orthopaedic papers were retrieved in PubMed for the years 2017 (15,528 papers), 2018 (16,159 papers), 2019 (17,371 papers), and 2020 (19,253 papers). A total of 133,765 orthopaedic papers were retrieved in Embase for the years 2017 (29,001 papers), 2018 (30,167 papers), 2019 (33,401 papers), and 2020 (41,196 papers). The data from the two databases were merged by removing duplicates (n = 39,757); this returned 35,846 papers related to orthopaedics in 2017, 36,983 papers in 2018, 40,234 papers in 2019, and 49,256 papers in 2020. The growth rate was 3.1% for 2018, 8.8% for 2019, and 22.4% for 2020, representing a marked rise in orthopaedic publications in 2020 (Fig. 1).
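For clarity, the year-on-year growth rates quoted above follow directly from the merged counts; the short Python sketch below reproduces the calculation from the figures reported in this paragraph (the 2018 value rounds to 3.2%, marginally different from the 3.1% quoted).

```python
# Merged, de-duplicated paper counts per year, as reported above.
merged_counts = {2017: 35_846, 2018: 36_983, 2019: 40_234, 2020: 49_256}

years = sorted(merged_counts)
for previous, current in zip(years, years[1:]):
    # Year-on-year growth relative to the preceding year's count.
    growth = (merged_counts[current] - merged_counts[previous]) / merged_counts[previous] * 100
    print(f"{current}: {growth:.1f}% growth over {previous}")
```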

Figure 1. Number of papers and growth rate of orthopaedic publications from 2017 to 2020.

Characteristics of the orthopaedic publications from 2020 to date

A total of 22,399 articles were retrieved in WoS from 2020 to date, including 19,008 original articles and 2391 reviews. The average number of citations per document was 0.9894. These publications cited 354,775 references and contained 32,316 keywords as defined by the authors.

Global citations measure the number of citations a document has received from documents in the entire database (all disciplines). The most globally cited document, with 129 citations, was the paper entitled "Physiotherapy management for COVID-19 in the acute hospital setting: clinical practice recommendations", published in the Journal of Physiotherapy; the top ten most globally cited documents ranged from 129 to 43 citations (Table 1).

Local citations measure the number of citations a document has received from papers in the analysed collection (same discipline). The most locally cited document, with 31 citations, was the paper entitled "Lateral extra-articular tenodesis reduces failure of hamstring tendon autograft anterior cruciate ligament reconstruction: two year outcomes from the STABILITY study randomized clinical trial", published in the American Journal of Sports Medicine; the top ten most locally cited documents ranged from 31 to 16 citations (Table 2).

Among the 82 journals, the one contributing most to the orthopaedic literature was BMC Musculoskeletal Disorders; the number of publications for the top 20 most relevant journals ranged from 1230 to 388 (Table 3). The most locally cited source was the Journal of Bone and Joint Surgery; the local citations of the top 20 journals ranged from 34,669 to 5081 (Table 4).

Thematic trend of orthopaedic publications from 2020 to date

A tree map was used to analyse the main topics according to paper counts. The most discussed topics were total knee arthroplasty (n = 926 papers, 9%), osteoarthritis (n = 745 papers, 7%), and knee (n = 693 papers, 7%) (Fig. 2).

Figure 2. Tree map of 30 prominent themes with orthopaedic paper counts and percentages.
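A minimal sketch (not the bibliometrix/biblioshiny implementation) of the counting step underlying such a tree map: tallying author keywords across records and expressing the leading themes as a share of all keyword occurrences. The record structure is hypothetical.

```python
from collections import Counter

def keyword_shares(records, top_n=30):
    """records: iterable of dicts with a hypothetical 'keywords' list per paper.
    Returns the top_n keywords with their counts and their percentage of all
    keyword occurrences; each triple corresponds to one tile of the tree map."""
    counts = Counter(kw.lower() for rec in records for kw in rec.get("keywords", []))
    total = sum(counts.values()) or 1
    return [(kw, n, round(100 * n / total)) for kw, n in counts.most_common(top_n)]
```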

To detect the thematic trend of orthopaedic publications, we applied a thematic map to position the importance and development of the research themes based on density and centrality. The themes "Covid-19", "hip arthroscopy", and "femoroacetabular impingement" were relatively new themes that are expected to either emerge or decline (Fig. 2). The themes "spine", "low back pain", "osteoarthritis", "knee", and "MRI" were hot and essential. The themes "shoulder", "arthroscopy", "osteoporosis", "hip fracture", and "total knee/hip arthroplasty" were basic and transversal themes, signifying that many papers on these topics are currently published. Last, the themes "infection" and "anterior cruciate ligament" were highly developed but possibly isolated (Fig. 3).

Figure 3. Thematic map of the trends in orthopaedic publications. Centrality measures importance and density measures development. The four zones represent different trends: the upper left zone contains topics with high density but low centrality, i.e., themes that may be highly developed but isolated; the upper right zone, with high density and high centrality, contains developed and essential (motor) themes; the lower left zone, with low density and low centrality, contains emerging or declining themes; and the lower right zone, with low density but high centrality, contains basic and transversal themes.

For the subgroup analyses, the top three keywords for "pandemic" (n = 382 papers) were "covid-19" (28%), "pandemic" (8%), and "coronavirus" (7%); within this topic, "telemedicine" (3%) attracted increasing attention during the pandemic. For "sports and arthroscopy" (n = 1082 papers), the top three keywords were "knee" (6%), "anterior cruciate ligament" (5%), and "sports" (4%). For "arthritis" (n = 1071 papers), the top three keywords were "osteoarthritis" (11%), "rheumatoid arthritis" (7%), and "total knee arthroplasty" (6%). For "spine" (n = 2210 papers), the top three keywords were "spine" (11%), "spine surgery" (7%), and "osteoporosis" (5%). For "shoulder and elbow" (n = 2490 papers), the top three keywords were "shoulder" (14%), "elbow" (5%), and "rotator cuff" (5%).

Keyword-based analysis reveals the keywords that have generated the most traffic in a specific publication market. This information can be used to build keyword groups, to find trending topics, and to identify specific fields of interest. The growth of the overall volume of publications is an objective fact that cannot be ignored. The published papers basically discuss the same topics observed in the previous two years. New terms of interest, such as viral infection or COVID, were observed, but they did not account for the impressive rise in the number of publications in 2020. The research topics in orthopaedics were essentially the same as in the recent past; however, the volume of papers published for the same MeSH terms grew significantly. Unfortunately, there is no tool to control for the quality of the published papers; only the number of citations can be used to evaluate the utility of a publication, and this will have to be assessed in the coming years.

The present study does not provide a definitive explanation for the substantial growth of orthopaedic publications in 2020. We also cannot predict whether this growth is sustainable or only temporary, or whether it was generated by, or related to, the decrease in scheduled surgical operations during the pandemic. We can presume that the increased number of published papers is explained by surgeons being away from the operating theatre for long periods, as the number of scheduled operations fell sharply during the pandemic, while the pressure for academic advancement, prestige, and promotion remained constant, with doctors still working to achieve academic status and to progress in their careers. A surgeon's main activity is to perform surgery and provide care; however, a large number of publications in the years 2017 to 2020 were related to alternative methods of managing orthopaedic conditions, medical treatments, infiltrations, physical therapy, patient education, diet, and many others [4, 5, 6].

Many of the papers published in 2020 describe a decrease in the surgical management of various bone and joint conditions during the pandemic, eventually resulting in a loss of quality and volume of care in different services. This could eventually lead to a change in the overall number of papers published in each journal in the future. Because research begins and ends with the patients, we hope, although we are not fully confident, that this growth in publications will eventually lead to a change in clinical practice.

Bornmann L, Mutz R (2015) Growth rates of modern science: a bibliometric analysis based on the number of publications and cited references. JASIST 66:2215–2222. https://doi.org/10.1002/asi.23329

Romanò CL, Drago L, Del Sel H, Johari A, Lob G, Mavrogenis AF, Benzakour T (2020) World Association against Infection in Orthopedics and Trauma (WAIOT) Study Group On Bone And Joint Infection Definitions. Loud and silent epidemics in the third millennium: tuning-up the volume. Int Orthop 44(6):1019–1022. https://doi.org/10.1007/s00264-020-04608-8

Fang D, Fan M, Jia Z (2016) Fifty top-cited fracture articles from China: a systematic review and bibliometric analysis. J Orthop Surg Res 11(1):1–8

Bezuglov EN, Tikhonova AA, Chubarovskiy PV, Repetyuk AD, Khaitin VY, Lazarev AM, Usmanova EM (2020) Conservative treatment of Osgood-Schlatter disease among young professional soccer players. Int Orthop 44(9):1737–1743. https://doi.org/10.1007/s00264-020-04572-3

Shanmugasundaram S, Vaish A, Chavada V, Murrell WD, Vaishya R (2021) Assessment of safety and efficacy of intra-articular injection of stromal vascular fraction for the treatment of knee osteoarthritis—a systematic review. Int Orthop 45(3):615–625. https://doi.org/10.1007/s00264-020-04926-x

Gou PG, Zhao ZH, Zhou JM, Ren LH, Wang XY, Mu YF, Wang YG, Chang F, Xue Y (2021) Vertebral collapse prevented following teriparatide treatment in postmenopausal Kummell's disease patients with severe osteoporosis. Orthop Surg 13(2):506–516. https://doi.org/10.1111/os.12959


Author information

Authors and affiliations

Jing Sun: Orthopaedic Surgery, No. 406, Jie Fang Nan Road, Hexi District, Tianjin, 300050, People's Republic of China

Andreas F. Mavrogenis: First Department of Orthopaedics, School of Medicine, National and Kapodistrian University of Athens, Athens, Greece

Marius M. Scarlat: Groupe ELSAN, Clinique St. Michel, Av. Orient, 83100, Toulon, France

Corresponding author: Marius M. Scarlat.


Sun, J., Mavrogenis, A.F. & Scarlat, M.M. The growth of scientific publications in 2020: a bibliometric analysis based on the number of publications, keywords, and citations in orthopaedic surgery. International Orthopaedics (SICOT) 45, 1905–1910 (2021). https://doi.org/10.1007/s00264-021-05171-6


Pharmacy Practice (Granada), vol. 18(1), Jan–Mar 2020

How many manuscripts should I peer review per year?

Fernando Fernandez-Llimos

Institute for Medicines Research (iMed.ULisboa), Department of Social Pharmacy, Faculty of Pharmacy, Universidade de Lisboa, Lisbon, Portugal.

Teresa M. Salgado

Center for Pharmacy Practice Innovation, School of Pharmacy, Virginia Commonwealth University, Richmond, VA, United States.

Fernanda S. Tonin

Department of Pharmacy, Federal University of Parana, Curitiba, Brazil.

Peer review provides the foundation for the scholarly publishing system. The conventional peer review system relies on the authors of articles acting as reviewers of other colleagues' manuscripts on a collaborative basis. However, authors complain about a seemingly overwhelming number of invitations to peer review: they feel they are invited to review many more manuscripts than their own participation in the scholarly publishing system would warrant. The high number of scientific journals and the existence of predatory journals have been suggested as potential causes of this excessive number of reviews required. In this editorial, we demonstrate that the number of reviewers required per article published depends exclusively on the journals' rejection rate and the number of reviewers assigned per manuscript. Several initiatives to overcome the peer review crisis are suggested.

Peer review provides the foundation for the scholarly publishing system. Despite the pessimistic conclusion in Jefferson et al.’s abstract – “At present, little empirical evidence is available to support the use of editorial peer review as a mechanism to ensure quality of biomedical research” –, the two studies included in their systematic review, which aimed to assess “the effects of peer review on study report quality,” clearly demonstrate the positive effects of peer review on the methodological quality and the value of the articles reviewed. 1 , 2 , 3

Alternative methods for peer review have been studied, even utilizing randomized controlled trial designs, but testing their impact on the quality of the articles in a real-life environment “would be costly, time-consuming and sometimes not feasible”. 4 At the end of the day, the conventional peer review system was reported to be one of the most efficient systems in Kovanis et al.’s analysis. 4 In fact, an experience of post-publication review already exists and has exposed the risks associated with the system: Social media is a perfect example of a non-reviewed publishing system, which incontrovertibly has led to a high prevalence of fake news. Facebook’s adoption of fact-checking programs – nothing more than a post-publication review system – demonstrated the limitations of any post-publication peer review. 5 This is a lesson we should learn before introducing post-publication review as a common practice in scientific publishing in substitution of traditional pre-publication peer review. 6 , 7

So, if peer review seems to be a good system to improve article quality, why is the system permanently under criticism? Let’s be honest: We are in a rush to publish our papers. Sometimes because they are part of a master’s or doctoral dissertation, other times because we need to add a line to our CVs. Scientific articles live forever and should not follow the popular saying concerning newspapers: “Today’s News, Tomorrow’s Fish Wrap”.

When authors complain about publication delay and the tardiness of the peer review process, we would rather provide figures, as we usually do in science. Many studies evaluated the publication process times in different biomedical areas and geographic regions, reporting acceptance lag (i.e., time from submission date to acceptance date) of usually over 100 days. 8 , 9 , 10 , 11 , 12 , 13 , 14 Pharmacy Practice reported a first response time after peer review comments of 92 days (SE=5.7) in 2018. 15 We are happy to announce that Pharmacy Practice first response time for original research articles accepted decreased to 80 days (SE=3.8) in 2019, with an acceptance lag of 124 days (SE=5.0).

As editors of a scientific journal, we have to ask authors who complain about the long publication process times: Do you think we intentionally extend the article’s processing time? Don’t you think that we would prefer to quickly make a decision as to whether to accept or reject the hundreds/thousands of articles we receive? To accept an article, the editor of a peer reviewed journal needs a number of peer reviewer comments supporting the quality of the manuscript. However, to reject a paper, two options exist: desk rejection or rejection supported by peer reviewers’ comments. A desk rejection is the negative decision made exclusively by the editor or the editorial board prior to any external peer review process. Considering the principles of a peer reviewed journal, desk rejection should only apply when the manuscript received is outside of the scope of the journal or the study suffers from methodological flaws beyond any possible repair. Although commonly used, desk rejection subverts the concept of a peer review system. 16

Interestingly, authors also complain about the excessive number of manuscripts they are invited to review. Some of them write ironic commentaries about why they decline invitations to review based on personal events. 17 Pharmacy Practice has started an in-depth analysis of its peer review selection process, with the aim of identifying differential characteristics of the accepters and decliners. Apart from the “I’m buried in reviews” argument and individuals who simply do not respond to the invitation email, other explanations for declining to serve as peer reviewers were as follows:

  • I’m at the end of the semester
  • I’m about to go on vacation
  • I’m on vacation
  • I’ve just returned from vacation
  • I’m at the beginning of the semester

So, if in the six-month period of a semester we exclude these four or five vacation-related periods, not much availability to review remains, especially once we add leaves of absence, sabbaticals, and attendance at conferences abroad.

As scientists, and before killing the traditional (a.k.a. conventional) peer review system, let us make some calculations to explore the real burden the system should place on authors invited to review others' manuscripts. That is to say, let us calculate the number of reviewers required per article published, using the conventional peer review system (following Kovanis et al.'s terminology) and assuming that a rejected manuscript is submitted to a different journal with the same rejection rate. The first journal receives A articles and assigns R reviewers to each article, resulting in A*R reviewers assigned. With a rejection rate T, that first journal will publish A*(1-T) articles. The remaining A*T articles will be submitted to a second journal that again assigns R reviewers to each article, using R*A*T reviewers, publishing (A*T)*(1-T) articles, and rejecting A*T*T articles that will be submitted to a third journal, and so on. So, the total number of reviewers assigned to the initial A articles after a series of N journals will be:

\( \displaystyle \sum_{i=1}^{N} R\,A\,T^{\,i-1} \;=\; R\,A\,\frac{1-T^{N}}{1-T} \)

And the number of articles published will be:

\( \displaystyle \sum_{i=1}^{N} A\,T^{\,i-1}(1-T) \;=\; A\,(1-T^{N}) \)

So, the total number of required peer reviewers per published article will be:

\( \displaystyle \frac{R\,A\,(1-T^{N})/(1-T)}{A\,(1-T^{N})} \;=\; \frac{R}{1-T} \)

In fact, the number of reviewers per article published depends only on two variables: the number of peer reviewers assigned per manuscript and the journal’s rejection rate. The latter is expected to have an inverse (negative) correlation with the “climbing upwards” number of existing journals alleged by Rohn. 18 Thus, with a commonly used number of three reviewers assigned to each manuscript received, a journal with an 80% rejection rate will need 15 reviewers to complete the task in order to publish one article. 18 Figure 1 provides the shape of the series with two to five reviewers assigned per manuscript received.

Figure 1. Number of peer reviewers required per article published; colored lines represent the number of reviewers assigned per manuscript received.
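The relationship just derived is easy to reproduce; the following Python sketch (an illustration, not part of the editorial) computes the reviewers required per published article, R/(1 − T), and the resulting yearly review burden per author under the assumptions stated in the text (three reviewers per manuscript, an 80% rejection rate, and five co-authors sharing the load).

```python
def reviewers_per_published_article(reviewers_per_manuscript: float, rejection_rate: float) -> float:
    """R / (1 - T): total reviews needed for every article that is finally published."""
    return reviewers_per_manuscript / (1 - rejection_rate)

def reviews_per_author_per_year(articles_per_year: int, authors_per_article: int,
                                reviewers_per_manuscript: float = 3,
                                rejection_rate: float = 0.8) -> float:
    """Yearly review burden per author if the load is shared equally among co-authors."""
    per_article = reviewers_per_published_article(reviewers_per_manuscript, rejection_rate)
    return per_article * articles_per_year / authors_per_article

print(reviewers_per_published_article(3, 0.8))  # 15.0, as stated in the text
print(reviews_per_author_per_year(5, 5))        # 15.0 reviews per author per year (75 shared by 5 authors)
```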

In plain language, to keep the scholarly peer-reviewed publishing wheel spinning, the authors of each article published in a journal with an 80% rejection rate should collectively review 15 manuscripts; and if the same research team published five articles in a given year, they should have reviewed 75 manuscripts. Considering an average of five authors per article, each author, in theory, should review three manuscripts for every article that they publish. This does not seem an unreasonable number of manuscripts to review, but it is more than many researchers actually do. As a rule of thumb, in the case of a journal with an 80% rejection rate and three reviewers assigned per manuscript, the number of manuscripts each researcher should review per year is:

\( \displaystyle \text{manuscripts to review per year} \;=\; \frac{R}{1-T}\times\frac{\text{articles published per year}}{\text{number of co-authors}} \)

So, what makes authors perceive that they are overwhelmed with the number of invitations they receive to act as peer reviewers? The answer is quite obvious: to maintain the quality of the peer review system and avoid the overwhelming feeling, every author has to serve as a peer reviewer. When one author declines an invitation to review, another author will be invited, and so on. Reviewing three manuscripts per article published is not a hard job, but reviewing 15 manuscripts per article published, which could result in 75 reviews a year if you publish five articles, may be overwhelming. However, this is not a system problem, but a neglect of duty from the other four co-authors who should be sharing the task.

In 2019, Pharmacy Practice sent out 891 invitations to act as a peer reviewer, with 36 returned as undeliverable emails. From the remaining 855 invitations, 13 (1.5%) colleagues declared that the topic of the manuscript was outside of their expertise, 4 (0.5%) declared that they had a conflict of interest, 209 (24.4%) declined because they were busy, and 411 (48.1%) ignored the invitation altogether and did not reply to the email. Additionally, 7 individuals who had accepted the review never completed the task (12 reviews were ‘in progress’ at the time this editorial was written).

Can we solve this peer review crisis? Yes, we can. Before killing the system, we can try some of the many possible solutions. First and foremost, an educational effort should be made to raise awareness among authors of scientific articles that all of them should act as peer reviewers, not only the lead or the corresponding authors. Then, as a practicality that some journals are already implementing, the email addresses of all authors should be made available; at the end of the day, per authorship requirements, all authors are responsible for the entire content of the published article. A second potential solution is to compensate reviewers for their time. The job of peer reviewers has traditionally been associated with generosity and collegiality, or even seen as a moral obligation, and compensating the review effort is still an unsolved issue. 17 , 19 Third, we should accept that peer reviewers, when they perform a good review, contribute more to the final version of the article than many of the individuals listed in the acknowledgements section. Unfortunately, journals, indexers, academic institutions, and funding bodies do not consider these contributions as curricular merits. Three years ago, Pharmacy Practice started a new practice of including all peer reviewers of the past year as part of a collective author in the first editorial of the new year; thus, their names are searchable in PubMed using the [IR] field descriptor. 15 , 20 Finally, a more complete and fair method of recognizing the contribution of a reviewer to the final version of the article would be to list them in the article, which would require open peer review. Journals and indexers can organize systems to provide public recognition to open reviewers, but more educational efforts are required to change the minds of those defending the old-fashioned blind and double-blind peer review processes. 21 , 22 More drastic solutions may exist, but hopefully they will not be necessary.

Three reviews:

Margarida Castel-Branco, University of Coimbra, Portugal

Filipa A. Costa, ISCSEM, Portugal

Derek Stewart, Qatar University, Qatar

Two reviews:

Maria Cordina, University of Malta, Malta

Jack Collins, University of Sydney, Australia

Paul Dillon, Royal College of Surgeons, Ireland, Ireland

Sofia Kälvemark Sporrong, University of Copenhagen, Denmark

Damian Świeczkowski, Medical University of Gdansk, Poland

Van D. Tran, RUDN University, Russia

One review:

Qalab Abbas, Aga Khan University Hospital, Pakistan

Ali A. Al-Jumaili, University of Iowa, United States

Abdelmajid H. Alnatsheh, Parkview Regional Medical Center, United States

Moawia Altabakha, Ajman University, United Arab Emirates

Wasem Alsabbagh, University of Waterloo, Canada

Chioma Amadi, City University of New York, United States

Johanna Aponte-González, Colombia National University, Colombia

Alejandro Arana, RTI Health Solutions, Spain

Ronen Arbel, Sapir College, Israel

Zubin Austin, University of Toronto, Canada

Minyon Avent, University of Queensland, Australia

Asnakew A. Ayele, University of Gondar, Ethiopia

David Balayssac, CHU Clermont-Ferrand, France

Claudio Barbaranelli, Sapienza University of Rome, Italy

Ben J. Basger, University of Sydney, Australia

Charlotte Bekker, Radboud University Medical Center, Netherlands

Durga Bista, Kathmandu University, Nepal

Aline F. Bonetti, Federal University of Parana, Brazil

Helena H. Borba, Federal University of Parana, Brazil

Marcel L. Bouvy, Utrecht University, Netherlands

Cecilia Brata, University of Surabaya, Indonesia

Rachele S. Britt, Beth Israel Deaconess Medical Center, United States

Lea Brühwiler, Patientensicherheit Schweiz, Switzerland

Sarah Brown, Cardiff Metropolitan University, United Kingdom

Josipa Bukic, University of Split, Croatia

Paul W. Bush, Duke University Hospital, United States

Ana C. Cabral, University of Coimbra, Portugal

Barry L. Carter, University of Iowa, United States

Kimberly L. Carter, University of Pennsylvania Health System, United States

Manuel J. Carvajal, Nova Southeastern University, United States

Afonso M. Cavaco, University of Lisbon, Portugal

Huan Keat Chan, Hospital Sultanah Bahiyah, Malaysia

Tyler Chanas, Vidant Medical Center, United States

Timothy F. Chen, University of Sydney, Australia

Bernadette Chevalier, University of Alberta, Canada

Allison M. Chung, Auburn University, United States

Mariann D. Churchwell, University of Toledo, United States

Richard Cooper, University of Sheffield, United Kingdom

Erika Cretton-Scott, Samford University, United States

Petra Czarniak, Curtin University, Australia

Ryan G. D’Angelo, University of the Sciences, United States

Rhian Deslandes, Cardiff University, United Kingdom

Shane P. Desselle, Touro University, United States

Parastou Donyai, University of Reading, United Kingdom

Aaron Drovandi, James Cook University, Australia

Julie Dunne, Dublin Institute of Technology, Ireland

Abubaker Elbur, Imam Abdulrahman Bin Faisal University, Saudi Arabia

Paul Forsyth, NHS Greater Glasgow & Clyde, United Kingdom

Victoria Garcia Cardenas, University of Technology Sydney, Australia

Miguel A. Gastelurrutia, University of Granada, Spain

Maria C. Gaudiano, Italian National Institute of Health, Italy

Natalie Gauld, University of Auckland, New Zealand

Chris M. Gildea, Saint Joseph Health System, United States

Ainhoa Gomez-Lumbreras, University Hospital Vall d’Hebron, Spain

Brian Godman, Karolinska Institute, Sweden

Jason R. Goldsmith, University of Pennsylvania, United States

Diego Gómez-Ceballos, Funiber, Colombia

Jean-Venable R. Goode, Virginia Commonwealth University, United States

Elisabeth Grey, University of Bath, United Kingdom

Olga Grintsova, Pharmacy of Detmold Post, Germany

Gerusa C. Halila, Federal University of Parana, Brazil

Nicola J. Hall, University of Sunderland, United Kingdom

Tora Hammar, Linnaeus University, Sweden

Drayton A. Hammond, Rush University, United States

Furqan K. Hashmi, University of Punjab, Pakistan

Mohamed A. Hassali, University of Science Malaysia, Malaysia

Andi Hermansyah, Airlangga University, Indonesia

Ludwig Höllein, University of Wuerzburg, Germany

Nejc Horvat, University of Ljubljana, Slovenia

Yen-Ming Huang, University of Wisconsin-Madison, United States

Klejda Hudhra, University of Medicine Tirana, Albania

Inas R. Ibrahim, Uruk University, Iraq

Katia Iskandar, Lebanese International University, Lebanon

Sherine Ismail, King Saud Bin Abdulaziz University, Saudi Arabia

Kristin K. Janke, University of Minnesota, United States

Kelsey L. Japs, VA Palo Alto, United States

Jennie B. Jarrett, University of Illinois at Chicago, United States

Jean-Pierre Jourdan, CHU de Caen Normandie, France

Maram G. Katoue, Kuwait University, Kuwait

Margaret Kay, University of Queensland, Australia

Clark D. Kebodeaux, University of Kentucky, United States

Thomas G. Kempen, Uppsala University, Sweden

Jennifer Kirwin, Northeastern University, United States

Nathalie Lahoud, Lebanese University, Lebanon

Anna Laven, Heinrich-Heine-University, Germany

Anandi V. Law, Western University of Health Sciences, United States

Miranda G. Law, Howard University, United States

Sukhyang Lee, Ajou University, South Korea

Leticia Leonart, Federal University of Parana, Brazil

Michelle D. Liedtke, University of Oklahoma, United States

Phei Ching Lim, Hospital Pulau Pinang, Malaysia

Amanda Wei Yin Lim, National Institutes of Health, Malaysia

Chung-Ying Lin, Hong Kong Polytechnic University, China

José Julián López, Universidad Nacional de Colombia, Colombia

Rosa C. Lucchetta, Federal University of Parana, Brazil

Karen Luetsch, University of Queensland, Australia

Elyse A. MacDonald, University of Utah Health Care, United States

Katie MacLure, Robert Gordon University, United Kingdom

Kurt Mahan, Presbyterian Healthcare Services, United States

Mark J. Makowsky, University of Alberta, Canada

Márcia Malfará, University of São Paulo, Brazil

Bejoy P. Maniara, James J. Peters VA Medical Center, United States

Brahm Marjadi, Western Sydney University, Australia

Gary R. Matzke, Virginia Commonwealth University, United States

Christopher McCoy, Beth Israel Deaconess Medical Center, United States

Tressa McNorris, Roseman University of Health Sciences, United States

Angelita C. Melo, Federal University of São João Del-Rei, Brazil

Zahra Mirshafiei Langaria, Shahid Beheshti University of Medical Sciences, Iran

Norazlina Mohamed, University Kebangsaan Malaysia, Malaysia

Jean Moon, University of Minnesota, United States

Michelle Murphy, Cooper University Hospital, United States

Sagir Mustapha, Ahmadu Bello University, Nigeria

Joseph Nathan, CVS Health, United States

Sujin Nitadpakorn, Chulalongkorn University, Thailand

Lucas M. Okumura, Clinical Hospital of Porto Alegre, Brazil

Edmund N. Ossai, Ebonyi State University, Nigeria

Courtney Pagels, Sanford Medical Center Fargo, United States

Subish Palaian, Ajman University, United Arab Emirates

Bridget Paravattil, Qatar University, Qatar

Nilesh Patel, University of Reading, United Kingdom

Guenka Petrova, Medical University Sofia, Bulgaria

Daphne Philbert, University Utrecht, Netherlands

Ann M. Philbrick, University of Minnesota, United States

Jill M. Plevinsky, Rosalind Franklin University, United States

Eng Whui Poh, Southern Australia Health, Australia

Bobby Presley, University of Surabaya, Indonesia

Urszula Religioni, Medical University of Warsaw, Poland

Oleksa G. Rewa, University of Alberta, Canada

Jadranka V. Rodriguez, University of Zagreb, Croatia

Sónia Romano, Centre for Health Evaluation & Research, Portugal

Olaf Rose, impac2t, Germany

Paula Rossignoli, Parana Health Secretariat, Brazil

Janelle F. Ruisinger, University of Kansas, United States

Hala Sacre, Lebanese Pharmacists Association, Lebanon

Wada A. Sadiq, Bayero University, Nigeria

Teresa M. Salgado, Virginia Commonwealth University, United States

Martina Salib, Royal Prince Alfred Hospital, Australia

Shane Scahill, University of Auckland, New Zealand

Terri Schindel, Edmonton Clinic Health Academy, Canada

Hanna Seidling, University of Heidelberg, Germany

Marguerite Sendall, Queensland University of Technology, Australia

Benjamin Seng, Duke-NUS Medical School, Singapore

Ana Seselja Perisin, University of Split, Croatia

Adji P. Setiadi, University of Surabaya, Indonesia

Amy Shaver, University at Buffalo, United States

Olayinka O. Shiyanbola, University of Wisconsin-Madison, United States

Tin Fei Sim, Curtin University, Australia

Bilge Sozen-Sahne, Hacettepe University, Turkey

Sidney Stohs, Creighton University, United States

Ieva Stupans, University of New England, Australia

André-Marie Tchouatieu, Medicines for Malaria Venture, Switzerland

Roberta Teixeira, National Institute of Cardiology, Brazil

Fitsum S. Teni, Addis Ababa University, Ethiopia

Fernanda S. Tonin, Federal University of Parana, Brazil

Jessica S. Triboletti, Butler University, United States

J. W. Foppe van Mil, Van Mil Consultancy, Netherlands

Tineshwaran Velvanathan, National University of Malaysia, Malaysia

Tara B. Vlasimsky, Denver Health Medical Center, United States

Helen Vosper, Robert Gordon University, United Kingdom

Sandy Vrignaud, University Hospital Center of Angers, France

Jennifer Walters, VCU Health, United States

Cheri K. Walker, Southwestern Oklahoma State University, United States

Geoffrey C Wall, Drake University, United States

Jocelyn A. Watkins, University of Warwick, United Kingdom

Mayyada Wazaify, University of Jordan, Jordan

Tommy Westerlund, Malmö University, Sweden

Sara A. Wettergreen, University of North Texas, United States

James S. Wheeler, University of Tennessee, United States

Kyle J. Wilby, University of Otago, New Zealand

Charlene Williams, University of North Carolina, United States

Aris Widayati, University Sanata Dharma, Indonesia

Matthew J. Witry, University of Iowa, United States

Seth E. Wolpin, University of Washington, United States

David Wright, University of East Anglia, United Kingdom

Nancy Yunker, Virginia Commonwealth University, United States

Ismaeel Yunusa, Massachusetts College of Pharmacy and Health Sciences, United States

Contributor Information

Fernando Fernandez-Llimos, Institute for Medicines Research (iMed.ULisboa), Department of Social Pharmacy, Faculty of Pharmacy, Universidade de Lisboa . Lisbon ( Portugal ). tp.pu.ff@somillf .

Teresa M. Salgado, Center for Pharmacy Practice Innovation, School of Pharmacy, Virginia Commonwealth University. Richmond, VA ( United States ). ude.ucv@odaglasmt .

Fernanda S. Tonin, Department of Pharmacy, Federal University of Parana. Curitiba ( Brazil ). moc.liamtoh@ninot_fpmuts_ref .

Margarida Castel-Branco, University of Coimbra, Portugal.

Filipa A. Costa, ISCSEM, Portugal.

Derek Stewart, Qatar University, Qatar.

Maria Cordina, University of Malta, Malta.

Jack Collins, University of Sydney, Australia.

Paul Dillon, Royal College of Surgeons, Ireland.

Sofia K. Sporrong, University of Copenhagen, Denmark.

Damian Świeczkowski, Medical University of Gdansk, Poland.

Van D. Tran, RUDN University, Russia.

Qalab Abbas, Aga Khan University Hospital, Pakistan.

Ali A. Al-Jumaili, University of Iowa, United States.

Abdelmajid H. Alnatsheh, Parkview Regional Medical Center, United States.

Moawia Altabakha, Ajman University, United Arab Emirates.

Wasem Alsabbagh, University of Waterloo, Canada.

Chioma Amadi, City University of New York, United States.

Johanna Aponte-González, Colombia National University, Colombia.

Alejandro Arana, RTI Health Solutions, Spain.

Ronen Arbel, Sapir College, Israel.

Zubin Austin, University of Toronto, Canada.

Minyon Avent, University of Queensland, Australia.

Asnakew A. Ayele, University of Gondar, Ethiopia.

David Balayssac, CHU Clermont-Ferrand, France.

Claudio Barbaranelli, Sapienza University of Rome, Italy.

Ben J. Basger, University of Sydney, Australia.

Charlotte Bekker, Radboud University Medical Center, Netherlands.

Durga Bista, Kathmandu University, Nepal.

Aline F. Bonetti, Federal University of Parana, Brazil.

Helena H. Borba, Federal University of Parana, Brazil.

Marcel L. Bouvy, Utrecht University, Netherlands.

Cecilia Brata, University of Surabaya, Indonesia.

Rachele S. Britt, Beth Israel Deaconess Medical Center, United States.

Lea Brühwiler, Patientensicherheit Schweiz, Switzerland.

Sarah Brown, Cardiff Metropolitan University, United Kingdom.

Josipa Bukic, University of Split, Croatia.

Paul W. Bush, Duke University Hospital, United States.

Ana C. Cabral, University of Coimbra, Portugal.

Barry L. Carter, University of Iowa, United States.

Kimberly L. Carter, University of Pennsylvania Health System, United States.

Manuel J. Carvajal, Nova Southeastern University, United States.

Afonso M. Cavaco, University of Lisbon, Portugal.

Huan Keat Chan, Hospital Sultanah Bahiyah, Malaysia.

Tyler Chanas, Vidant Medical Center, United States.

Timothy F. Chen, University of Sydney, Australia.

Bernadette Chevalier, University of Alberta, Canada.

Allison M. Chung, Auburn University, United States.

Mariann D. Churchwell, University of Toledo, United States.

Richard Cooper, University of Sheffield, United Kingdom.

Erika Cretton-Scott, Samford University, United States.

Petra Czarniak, Curtin University, Australia.

Ryan G. D’Angelo, University of the Sciences, United States.

Rhian Deslandes, Cardiff University, United Kingdom.

Shane P. Desselle, Touro University, United States.

Parastou Donyai, University of Reading, United Kingdom.

Aaron Drovandi, James Cook University, Australia.

Julie Dunne, Dublin Institute of Technology, Ireland.

Abubaker Elbur, Imam Abdulrahman Bin Faisal University, Saudi Arabia.

Paul Forsyth, NHS Greater Glasgow & Clyde, United Kingdom.

Victoria Garcia Cardenas, University of Technology Sydney, Australia.

Miguel A. Gastelurrutia, University of Granada, Spain.

Maria C. Gaudiano, Italian National Institute of Health, Italy.

Natalie Gauld, University of Auckland, New Zealand.

Chris M. Gildea, Saint Joseph Health System, United States.

Ainhoa Gomez-Lumbreras, University Hospital Vall d'Hebron, Spain.

Brian Godman, Karolinska Institute, Sweden.

Jason R. Goldsmith, University of Pennsylvania, United States.

Diego Gómez‐Ceballos, Funiber, Colombia.

Jean-Venable R. Goode, Virginia Commonwealth University, United States.

Elisabeth Grey, University of Bath, United Kingdom.

Olga Grintsova, Pharmacy of Detmold Post, Germany.

Gerusa C. Halila, Federal University of Parana, Brazil.

Nicola J. Hall, University of Sunderland, United Kingdom.

Tora Hammar, Linnaeus University, Sweden.

Drayton A. Hammond, Rush University, United States.

Furqan K. Hashmi, University of Punjab, Pakistan.

Mohamed A. Hassali, University of Science Malaysia, Malaysia.

Andi Hermansyah, Airlangga University, Indonesia.

Ludwig Höllein, University of Wuerzburg, Germany.

Nejc Horvat, University of Ljubljana, Slovenia.

Yen-Ming Huang, University of Wisconsin-Madison, United States.

Klejda Hudhra, University of Medicine Tirana, Albania.

Inas R. Ibrahim, Uruk University, Iraq.

Katia Iskandar, Lebanese International University, Lebanon.

Sherine Ismail, King Saud Bin Abdulaziz University, Saudi Arabia.

Kristin K. Janke, University of Minnesota, United States.

Kelsey L. Japs, VA Palo Alto, United States.

Jennie B. Jarrett, University of Illinois at Chicago, United States.

Jean-Pierre Jourdan, CHU de Caen Normandie, France.

Maram G. Katoue, Kuwait University, Kuwait.

Margaret Kay, University of Queensland, Australia.

Clark D. Kebodeaux, University of Kentucky, United States.

Thomas G. Kempen, Uppsala University, Sweden.

Jennifer Kirwin, Northeastern University, United States.

Nathalie Lahoud, Lebanese University, Lebanon.

Anna Laven, Heinrich-Heine-University, Germany.

Anandi V. Law, Western University of Health Sciences, United States.

Miranda G. Law, Howard University, United States.

Sukhyang Lee, Ajou University, South Korea.

Leticia Leonart, Federal University of Parana, Brazil.

Michelle D. Liedtke, University of Oklahoma, United States.

Phei Ching Lim, Hospital Pulau Pinang, Malaysia.

Amanda Wei Yin Lim, National Institutes of Health, Malaysia.

Chung-Ying Lin, Hong Kong Polytechnic University, China.

José Julián López, Universidad Nacional de Colombia, Colombia.

Rosa C. Lucchetta, Federal University of Parana, Brazil.

Karen Luetsch, University of Queensland, Australia.

Elyse A. MacDonald, University of Utah Health Care, United States.

Katie MacLure, Robert Gordon University, United Kingdom.

Kurt Mahan, Presbyterian Healthcare Services, United States.

Mark J. Makowsky, University of Alberta, Canada.

Márcia Malfará, University of São Paulo, Brazil.

Bejoy P. Maniara, James J. Peters VA Medical Center, United States.

Brahm Marjadi, Western Sydney University, Australia.

Gary R. Matzke, Virginia Commonwealth University, United States.

Christopher McCoy, Beth Israel Deaconess Medical Center, United States.

Tressa McNorris, Roseman University of Health Sciences, United States.

Angelita C. Melo, Federal University of São João Del-Rei, Brazil.

Zahra Mirshafiei Langaria, Shahid Beheshti University of Medical Sciences, Iran.

Norazlina Mohamed, University Kebangsaan Malaysia, Malaysia.

Jean Moon, University of Minnesota, United States.

Michelle Murphy, Cooper University Hospital, United States.

Sagir Mustapha, Ahmadu Bello University, Nigeria.

Joseph Nathan, CVS Health, United States.

Sujin Nitadpakorn, Chulalongkorn University, Thailand.

Lucas M. Okumura, Clinical Hospital of Porto Alegre, Brazil.

Edmund N. Ossai, Ebonyi State University, Nigeria.

Courtney Pagels, Sanford Medical Center Fargo, United States.

Subish Palaian, Ajman University, United Arab Emirates.

Bridget Paravattil, Qatar University, Qatar.

Nilesh Patel, University of Reading, United Kingdom.

Guenka Petrova, Medical University Sofia, Bulgaria.

Daphne Philbert, University Utrecht, Netherlands.

Ann M. Philbrick, University of Minnesota, United States.

Jill M. Plevinsky, Rosalind Franklin University, United States.

Eng Whui Poh, Southern Australia Health, Australia.

Bobby Presley, University of Surabaya, Indonesia.

Urszula Religioni, Medical University of Warsaw, Poland.

Oleksa G. Rewa, University of Alberta, Canada.

Jadranka V. Rodriguez, University of Zagreb, Croatia.

Sónia Romano, Centre for Health Evaluation & Research, Portugal.

Olaf Rose, impac2t, Germany.

Paula Rossignoli, Parana Health Secretariat, Brazil.

Janelle F. Ruisinger, University of Kansas, United States.

Hala Sacre, Lebanese Pharmacists Association, Lebanon.

Wada A. Sadiq, Bayero University, Nigeria.

Martina Salib, Royal Prince Alfred Hospital, Australia.

Shane Scahill, University of Auckland, New Zealand.

Terri Schindel, Edmonton Clinic Health Academy, Canada.

Hanna Seidling, University of Heidelberg, Germany.

Marguerite Sendall, Queensland University of Technology, Australia.

Benjamin Seng, Duke-NUS Medical School, Singapore.

Ana Seselja Perisin, University of Split, Croatia.

Adji P. Setiadi, University of Surabaya, Indonesia.

Amy Shaver, University at Buffalo, United States.

Olayinka O. Shiyanbola, University of Wisconsin-Madison, United States.

Tin Fei Sim, Curtin University, Australia.

Bilge Sozen-Sahne, Hacettepe University, Turkey.

Sidney Stohs, Creighton University, United States.

Ieva Stupans, University of New England, Australia.

André-Marie Tchouatieu, Medicines for Malaria Venture, Switzerland.

Roberta Teixeira, National Institute of Cardiology, Brazil.

Fitsum S. Teni, Addis Ababa University, Ethiopia.

Jessica S. Triboletti, Butler University, United States.

J. W. Foppe van Mil, Van Mil Consultancy, Netherlands.

Tineshwaran Velvanathan, National University of Malaysia, Malaysia.

Tara B. Vlasimsky, Denver Health Medical Center, United States.

Helen Vosper, Robert Gordon University, United Kingdom.

Sandy Vrignaud, University Hospital Center of Angers, France.

Jennifer Walters, VCU Health, United States.

Cheri K. Walker, Southwestern Oklahoma State University, United States.

Geoffrey C. Wall, Drake University, United States.

Jocelyn A. Watkins, University of Warwick, United Kingdom.

Mayyada Wazaify, University of Jordan, Jordan.

Tommy Westerlund, Malmö University, Sweden.

Sara A. Wettergreen, University of North Texas, United States.

James S. Wheeler, University of Tennessee, United States.

Kyle J. Wilby, University of Otago, New Zealand.

Charlene Williams, University of North Carolina, United States.

Aris Widayati, University Sanata Dharma, Indonesia.

Matthew J. Witry, University of Iowa, United States.

Seth E. Wolpin, University of Washington, United States.

David Wright, University of East Anglia, United Kingdom.

Nancy Yunker, Virginia Commonwealth University, United States.

Ismaeel Yunusa, Massachusetts College of Pharmacy and Health Sciences, United States.

WordsRated

Number of Academic Papers Published Per Year

How many academic articles are published each year?

It is estimated that at least 64 million academic papers have been published since the year 1996, with the growth rate of newly published articles increasing over time.

  • As of 2022, over 5.14 million academic articles are published per year, including short surveys, reviews, and conference proceedings.
  • The number of published articles increased by 2.06% from 2021, when over 5.03 million papers were published.
  • Since 2018, the number of articles published per year has jumped by 22.78%, from 4.18 million.
  • Growth was exceptionally high during 2021, when 7.62% more articles were published than in the previous year.

What country publishes the most academic articles?

  • As of 2022, China is the country with the most academic articles published in a year, and the first to publish over 1 million documents in a single year.
  • Over 19.67% of all academic papers published in a year come from China as of 2022.
  • The United States has lost the leading position, now accounting for 17.04% of all papers published in a year.
  • These two countries, along with India and the United Kingdom, accounted for over 52% of all academic papers over the past year.
  • The top 10 producers of academic papers account for over 87% of all published articles over the past year.

How many academic journals are there?

  • As of 2020, there are 46,736 academic journals publishing papers worldwide, 1.07% more than in 2019.
  • It is also the seventh year in a row with over 40,000 active academic journals.
  • Over the last 10 years, the number of academic journals has grown by 28.7%, an average rate of 2.56% per year (see the sketch below).
  • However, growth has slowed over the past decade, after the number of journals grew by 3.65% annually from 2002 to 2011.
  • In 2020, for the first time, the average academic journal published more than 100 articles in a year.
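
The compound growth quoted above is simple arithmetic to re-derive. The sketch below is purely illustrative: it only recomputes the averages from the totals in the bullets, and the helper function and the implied 2019 figure are my own back-calculation, not WordsRated data.

```python
# Illustrative re-derivation of the journal-growth figures quoted above.

def cagr(total_growth: float, years: int) -> float:
    """Average annual (compound) growth rate implied by total growth over `years`."""
    return (1 + total_growth) ** (1 / years) - 1

# 28.7% total growth over the last 10 years implies ~2.56% per year, as stated.
print(f"{cagr(0.287, 10):.2%}")        # -> 2.56%

# The 46,736 journals counted in 2020, up 1.07% on 2019, imply roughly:
journals_2020 = 46_736
print(round(journals_2020 / 1.0107))   # -> ~46,241 journals in 2019
```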

  • 75.04% of all academic journals are published in English; there were 35,070 English-language journals in 2020.
  • The number of English-language journals has grown by an average of 3.22% per year over the last 10 years.
  • By contrast, the number of non-English journals declined by 0.42% in 2020 compared with the previous year.
  • Non-English-language journals have grown by 2.33% annually over the past 10 years, much slower than English-language journals.

Which country publishes the most academic journals?

  • As of 2020, 5,856 academic journals are published annually in the United Kingdom.
  • The UK is the world’s largest producer of academic journals, accounting for 12.53% of global output.
  • The United States publishes around 5,712 academic journals, accounting for 12.22% of global output.
  • The Netherlands (1,372) and Germany (1,339) are the only other countries that publish over 1,000 academic journals annually.
  • Even though China publishes the most academic papers in the world, it accounts for only 1.36% of the journals published per year.

AI Index: State of AI in 13 Charts

In the new report, foundation models dominate, benchmarks fall, prices skyrocket, and on the global stage, the U.S. overshadows.

This year’s AI Index — a 500-page report tracking 2023’s worldwide trends in AI — is out.

The index is an independent initiative at the Stanford Institute for Human-Centered Artificial Intelligence (HAI), led by the AI Index Steering Committee, an interdisciplinary group of experts from across academia and industry. This year’s report covers the rise of multimodal foundation models, major cash investments into generative AI, new performance benchmarks, shifting global opinions, and new major regulations.

Don’t have an afternoon to pore over the findings? Check out the highlights here.

Pie chart showing 98 models were open-sourced in 2023

A Move Toward Open-Sourced

This past year, organizations released 149 foundation models, more than double the number released in 2022. Of these newly released models, 65.7% were open-source (meaning they can be freely used and modified by anyone), compared with only 44.4% in 2022 and 33.3% in 2021.

bar chart showing that closed models outperformed open models across tasks

But At a Cost of Performance?

Closed-source models still outperform their open-sourced counterparts. On 10 selected benchmarks, closed models achieved a median performance advantage of 24.2%, with differences ranging from as little as 4.0% on mathematical tasks like GSM8K to as much as 317.7% on agentic tasks like AgentBench.

Bar chart showing Google has more foundation models than any other company

Biggest Players

Industry dominates AI, especially in building and releasing foundation models. This past year Google edged out other industry players in releasing the most models, including Gemini and RT-2. In fact, since 2019, Google has led in releasing the most foundation models, with a total of 40, followed by OpenAI with 20. Academia trails industry: This past year, UC Berkeley released three models and Stanford two.

Line chart showing industry far outpaces academia and government in creating foundation models over the decade

Industry Dwarfs All

If you needed more striking evidence that corporate AI is the only player in the room right now, this should do it. In 2023, industry accounted for 72% of all new foundation models.

Chart showing the growing costs of training AI models

Prices Skyrocket

One of the reasons academia and government have been edged out of the AI race: the exponential increase in cost of training these giant models. Google’s Gemini Ultra cost an estimated $191 million worth of compute to train, while OpenAI’s GPT-4 cost an estimated $78 million. In comparison, in 2017, the original Transformer model, which introduced the architecture that underpins virtually every modern LLM, cost around $900.

Bar chart showing the united states produces by far the largest number of foundation models

What AI Race?

At least in terms of notable machine learning models, the United States vastly outpaced other countries, developing a total of 61 notable models in 2023. Since 2019, the U.S. has consistently led in originating the majority of notable models, followed by China and the UK.

Line chart showing that across many intellectual task categories, AI has exceeded human performance

Move Over, Human

As of 2023, AI has hit human-level performance on many significant AI benchmarks, from those testing reading comprehension to visual reasoning. Still, it falls just short on some benchmarks like competition-level math. Because AI has been blasting past so many standard benchmarks, AI scholars have had to create new and more difficult challenges. This year’s index also tracked several of these new benchmarks, including those for tasks in coding, advanced reasoning, and agentic behavior.

Bar chart showing a dip in overall private investment in AI, but a surge in generative AI investment

Private Investment Drops (But We See You, GenAI)

While AI private investment has steadily dropped since 2021, generative AI is gaining steam. In 2023, the sector attracted $25.2 billion, nearly ninefold the investment of 2022 and about 30 times the amount from 2019 (call it the ChatGPT effect). Generative AI accounted for over a quarter of all AI-related private investments in 2023.

Bar chart showing the united states overwhelming dwarfs other countries in private investment in AI

U.S. Wins $$ Race

And again, in 2023 the United States dominates in AI private investment. In 2023, the $67.2 billion invested in the U.S. was roughly 8.7 times greater than the amount invested in the next highest country, China, and 17.8 times the amount invested in the United Kingdom. That lineup looks the same when zooming out: Cumulatively since 2013, the United States leads investments at $335.2 billion, followed by China with $103.7 billion, and the United Kingdom at $22.3 billion.

Infographic showing 26% of businesses use AI for contact-center automation, and 23% use it for personalization

Where is Corporate Adoption?

More companies are implementing AI in some part of their business: In surveys, 55% of organizations said they were using AI in 2023, up from 50% in 2022 and 20% in 2017. Businesses report using AI to automate contact centers, personalize content, and acquire new customers. 

Bar chart showing 57% of people believe AI will change how they do their job in 5 years, and 36% believe AI will replace their jobs.

Younger and Wealthier People Worry About Jobs

Globally, most people expect AI to change their jobs, and more than a third expect AI to replace them. Younger generations — Gen Z and millennials — anticipate more substantial effects from AI compared with older generations like Gen X and baby boomers. Specifically, 66% of Gen Z compared with 46% of boomer respondents believe AI will significantly affect their current jobs. Meanwhile, individuals with higher incomes, more education, and decision-making roles foresee AI having a great impact on their employment.

Bar chart depicting the countries most nervous about AI; Australia at 69%, Great Britain at 65%, and Canada at 63% top the list

While the Commonwealth Worries About AI Products

When asked in a survey whether AI products and services made them nervous, 69% of Australians and 65% of Britons said yes. Japan was the least worried about AI products, at 23%.

Line graph showing uptick in AI regulation in the united states since 2016; 25 policies passed in 2023

Regulation Rallies

More American regulatory agencies are passing regulations to protect citizens and govern the use of AI tools and data. For example, the Copyright Office and the Library of Congress passed copyright registration guidance concerning works that contained material generated by AI, while the Securities and Exchange Commission developed a cybersecurity risk management strategy, governance, and incident disclosure plan. The agencies to pass the most regulation were the Executive Office of the President and the Commerce Department. 

The AI Index was first created to track AI development. The index collaborates with such organizations as LinkedIn, Quid, McKinsey, Studyportals, the Schwartz Reisman Institute, and the International Federation of Robotics to gather the most current research and feature important insights on the AI ecosystem. 

These 10 institutions published the most papers in Nature and Science in 2018

From CRISPR to CLARITY, here are some of the most high-profile studies.

Gemma Conroy, Bec Crew

Gene-editing tech, CRISPR-Cas9, featured in one of Harvard's most widely discussed Nature papers in 2017. Credit: Meletios Verras/Getty Images

3 September 2019

The journals Nature and Science are where some of the highest quality research is showcased to the world.

The institutions listed below were the largest contributors to papers published in Nature and Science in 2018, as tracked by the Nature Index.

View the Nature Index 2019 Annual Tables for Nature and Science.
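
Each entry below pairs a fractional count (FC) with an article count (AC). Under the Nature Index convention, every article carries a total FC of 1 that is shared equally among its authors, and an institution's FC is the sum of its affiliated authors' shares. The short sketch below illustrates that bookkeeping on made-up records; the author labels and institution names are placeholders, not Nature Index data.

```python
from collections import defaultdict

# Toy articles: each is a list of (author, institution) pairs. Placeholder data only.
articles = [
    [("A", "Univ X"), ("B", "Univ X"), ("C", "Univ Y")],   # 3 authors, 1/3 share each
    [("D", "Univ Z"), ("E", "Univ X")],                    # 2 authors, 1/2 share each
]

fc = defaultdict(float)   # fractional count per institution
ac = defaultdict(int)     # article count per institution

for authors in articles:
    share = 1.0 / len(authors)                     # equal share per author
    for _, institution in authors:
        fc[institution] += share                   # FC: sum of affiliated authors' shares
    for institution in dict.fromkeys(inst for _, inst in authors):
        ac[institution] += 1                       # AC: each article counted once per institution

print({inst: round(v, 2) for inst, v in fc.items()})   # {'Univ X': 1.17, 'Univ Y': 0.33, 'Univ Z': 0.5}
print(dict(ac))                                        # {'Univ X': 2, 'Univ Y': 1, 'Univ Z': 1}
```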

1. Harvard University

Fractional count: 70.67 (-15.3%), Article count: 210

No research institute beats Harvard University for papers published in Nature and Science.

As one of the oldest universities in the United States, and one of the world’s leading higher education institutions, Harvard has a storied history of revolutionary discoveries, including the smallpox vaccine, anaesthesia, and oral contraception.

More recently, widely discussed studies by Harvard scientists include a 2017 Nature paper that described a method for storing digital data in bacterial genomes using CRISPR-Cas9 gene-editing technology, and a Science paper this year, in which researchers from Harvard’s Institute for Quantitative Social Science analyzed the effect of fake news on Twitter during the 2016 US presidential election.

2. Stanford University

Fractional count: 39.85 (-24.3%), Article count: 100

Stanford University appears in the upper echelons of many of the Nature Index Annual Tables rankings, including chemistry, life sciences, the physical sciences, and academic institutions. It’s also the second most prolific publisher in Nature and Science.

Top authors at Stanford include bioengineer and neuroscientist Karl Deisseroth, whose 2013 Nature paper on a new technology named CLARITY describes how mammalian brains can be rendered “clear as Jell-O” for access by molecular probes, as The New York Times put it.

Computer scientist Fei-Fei Li, co-director of Stanford's Human-Centered AI Institute and Vision and Learning Lab, is one of the most prolific researchers in the field of AI, and has published on machine learning, deep learning, computer vision and cognitive neuroscience.

3. Massachusetts Institute of Technology

Fractional count: 37.69 (13.3%), Article count: 130

As one of the world’s most prestigious higher education institutions, MIT has been at the frontier of research for more than 150 years, its close ties with industry fostering an emphasis on entrepreneurship and applied science.

MIT has 12,707 faculty and staff on campus and an annual budget of more than US$3.5 billion (2018), and almost 85% of its undergraduates engage in frontline, faculty-led research.

One of MIT’s most talked-about studies in 2018 was a Science paper that analysed the spread of true and false news. Garnering an Altmetrics score of more than 9,600, which includes more than 8,000 tweets and 360 news stories, it lent support to the adage: lies spread faster than the truth.

A 2018 Nature paper by MIT researchers on the ‘Moral Machine experiment’ also drew significant attention from the wider public, posing complex questions about how AI in future will make moral decisions.

4. Max Planck Society

Fractional count: 32.35 (-15%), Article count: 139

From nuclear fission to the theory of relativity, Germany's Max Planck Society has been the birthplace of some of the most important discoveries in science.

Among Max Planck’s Nature and Science findings published last year were a new antiretroviral treatment for HIV, graphene nanoribbons, and a new method for capturing changes in RNA.

One of Max Planck’s most highly cited Nature papers of 2018 presented the most precise measurements of the electron ever made. The findings support the standard model of particle physics, which proposes that electrons maintain a near perfect spherical shape.

Along with an international team, Max Planck researchers published a paper in Science last year that revealed the source of cosmic neutrinos, a type of high-energy particle that travels over vast distances. The study found that neutrinos originate from blazars, a type of galaxy powered by supermassive black holes.

5. University of California, Berkeley

Fractional count: 24.30 (39.4%), Article count: 78

Credited with the discovery or co-discovery of 16 elements, more than any other university, the University of California, Berkeley (UC Berkeley) has a well-established history of scientific advances. You can see its fingerprints on element 97, berkelium (Bk); element 98, californium (Cf); and element 95, americium (Am).

Among its most highly-cited Nature and Science papers of last year were reports on the genetic basis of psychiatric disorders; a possible cause of fast radio bursts; and cobalt-free batteries.

In February 2018, UC Berkeley researchers were involved in the development of a genome-editing tool that can detect human papillomavirus with great sensitivity. The study was the institution’s most highly-cited Science paper of that year.

A paper published in Nature also attracted a lot of attention in 2018, when UC Berkeley researchers, along with an international team, described a microchip that uses light to transmit data. The new device is faster and more efficient than conventional silicon chip technologies.

6. University of California, Los Angeles

Fractional count: 21.11 (82.1%), Article count: 67

Having just celebrated its 100th birthday, the University of California, Los Angeles (UCLA) is one of the youngest universities in Nature Index’s top 10 tables, and yet it receives the most student applications of any university in the US.

Key papers published in Science last year include a study that revealed how certain psychiatric disorders share global gene expression patterns, and one describing how UCLA engineers 3D-printed an AI device that identifies objects at the speed of light, a promising development for medicine, robotics and security.

As vice-chancellor for research, Roger Wakimoto, says, UCLA’s growth and achievements mirror the trajectory of the city it calls home.

“I believe the institution’s success is based on its strong commitment to teaching, recruiting and retaining some of the most talented researchers in the world, and having a close and synergistic relationship with the city of Los Angeles.”

7. Yale University

Fractional count: 19.75 (36.8%), Article count: 57

From finding evidence of time crystals to discovering how gut bacteria cause autoimmune disease, Yale University’s research consistently breaks ground. As one of the oldest universities in the US, it’s also built a strong record of highly-cited papers published in the world’s leading scientific journals.

Last year, Yale astronomers made headlines when they discovered that the galaxy NGC 1052-DF2 contained almost no dark matter, challenging the assumption that it’s essential for the formation of galaxies. The study was one of Yale’s most talked-about Nature papers of 2018.

A Science study involving Yale researchers on how household products contribute to air pollution also sparked conversation last year. The team found that cleaning products, perfumes and shampoos rival transport as major emissions sources.

8. Columbia University in the City of New York

Fractional count: 19.67 (-4.3%), Article count: 65

Established 265 years ago by royal charter of King George II of England, Columbia University is the oldest institution of higher learning in New York City and the fifth oldest in the US.

With more than 200 research centres and institutes, 350 new inventions each year and 84 Nobel laureates, Columbia is at the forefront of scientific discovery.

In 2018, Columbia University produced 65 papers in Nature and Science, placing it eighth in this ranking. The institution’s research performance is particularly strong in the life sciences, which accounts for around half of its overall output.

Last year, researchers from Columbia co-authored a study suggesting that 60% of people with European descent living in the US can be genetically identified using available genetic information, even if they have not undergone genetic testing.

Published in Science, the findings highlight the need for strategies that safeguard genetic privacy.

9. University of Oxford

Fractional count: 19.56 (73.6%), Article count: 73

The University of Oxford is the oldest in the English-speaking world, with teaching dating back as far as 1096. Today, it maintains its prestigious reputation with eighth place among the leading academic institutions in the Nature Index.

The largest share of Oxford’s scientific output is in the life sciences, with papers in the discipline contributing around one-third of its output.

Research funding provided by councils, trusts and industry is Oxford’s largest source of income, accounting for 26% of its funds — the highest research income of all the United Kingdom's universities.

Oxford is also home to the Beecroft Building, a new US$64-million facility dedicated to experimental and theoretical physics with laboratories designed for making precise measurements at the atomic level.

In 2018, Oxford researchers co-authored an influential analysis of how to reduce the environmental impacts of food. The paper, published in Science, triggered widespread debate about food choices, such as avoiding dairy and meat.

10. Swiss Federal Institute of Technology Zurich

Fractional count: 19.23 (29.1%), Article count: 55

Boasting 21 Nobel Prize winners, two Fields medallists and a Turing Award recipient, the Swiss Federal Institute of Technology Zurich (ETH Zurich) has built a reputation for trailblazing research over the past 160 years.

The technical and scientific university is also renowned for its ability to produce highly-cited research, taking tenth place among the top institutions in Nature and Science papers.

Among the most talked-about papers from the university are predictions of increased marine heatwaves under climate change; a technique for transmitting quantum information; and detailed images of proteins involved in opioid signalling.

In January 2018, researchers at ETH Zurich discovered how a particular group of nanocrystals are able to emit a bright light. The study, which was the most highly-cited Nature article from the institution that year, could have a range of applications in materials science, from data transmission to supercomputers.

Correction: Text amended to remove "in the 21 century" from the opening sentence of the Max Planck Society's profile.

What we know about unauthorized immigrants living in the U.S.

The unauthorized immigrant population in the United States reached 10.5 million in 2021, according to new Pew Research Center estimates. That was a modest increase over 2019 but nearly identical to 2017.

A line chart showing that the number of unauthorized immigrants in the U.S. remained mostly stable from 2017 to 2021.

The number of unauthorized immigrants living in the U.S. in 2021 remained below its peak of 12.2 million in 2007. It was about the same size as in 2004 and lower than every year from 2005 to 2015.

The new estimates do not reflect changes that have occurred since apprehensions and expulsions of migrants along the U.S.-Mexico border started increasing in March 2021. Migrant encounters at the border have since reached historic highs.

Pew Research Center undertook this research to understand ongoing changes in the size and characteristics of the unauthorized immigrant population in the United States. The Center has published estimates of the U.S. unauthorized immigrant population for more than two decades. The estimates presented in this research are the Center’s latest, adding new and updated annual estimates for 2017 through 2021.

Center estimates of the unauthorized immigrant population use a “residual method.” It is similar to methods used by the U.S. Department of Homeland Security’s Office of Immigration Statistics and nongovernmental organizations, including the Center for Migration Studies and the Migration Policy Institute. Those organizations’ estimates are generally consistent with ours. Our estimates also align with official U.S. data sources, including birth records, school enrollment figures and tax data, as well as Mexican censuses and surveys.

Our “residual” method for estimating the nation’s unauthorized immigrant population includes these steps:

  • Estimate the total number of immigrants living in the country in a particular year using data from U.S. censuses and government surveys such as the American Community Survey and the Current Population Survey.
  • Estimate the number of immigrants living in the U.S. legally using official counts of immigrant and refugee admissions together with other demographic data (for example, death and out-migration rates).
  • Subtract our estimate of lawful immigrants from our estimate of the total immigrant population. This provides an initial estimate of the unauthorized immigrant population.

Our final estimate of the U.S. unauthorized immigrant population, as well as estimates for lawful immigrants, includes an upward adjustment. We do this because censuses and surveys tend to miss some people. Undercounts for immigrants, especially unauthorized immigrants, tend to be higher than for other groups. (Our 1990 estimate comes from work by Robert Warren and John Robert Warren; details can be found here.)
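
A minimal sketch of that residual arithmetic follows. The totals and the undercount rate below are hypothetical placeholders chosen for illustration, not Pew Research Center inputs.

```python
# Hypothetical illustration of the residual method; numbers are made up.

total_immigrants_surveyed = 45.0e6   # step 1: total foreign-born population from surveys
lawful_immigrants = 35.5e6           # step 2: lawful immigrants from admissions records

# Step 3: the residual is the initial estimate of the unauthorized population.
residual = total_immigrants_surveyed - lawful_immigrants

# Final step: adjust upward because surveys undercount this group.
assumed_undercount = 0.07            # assume 7% of the group is missed by surveys
adjusted_estimate = residual / (1 - assumed_undercount)

print(f"initial residual:  {residual / 1e6:.1f} million")           # 9.5 million
print(f"adjusted estimate: {adjusted_estimate / 1e6:.1f} million")  # ~10.2 million
```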

The term “unauthorized immigrant” reflects standard and customary usage by many academic researchers and policy analysts. The U.S. Department of Homeland Security’s Office of Immigration Statistics also generally uses it. The term means the same thing as undocumented immigrants, illegal immigrants and illegal aliens.

For more details on how we produced our estimates, read the Methodology section of our November 2018 report on unauthorized immigrants.

The unauthorized immigrant population includes any immigrants not in the following groups:

  • Immigrants admitted for lawful residence (i.e., green card admissions)
  • People admitted formally as refugees
  • People granted asylum
  • Former unauthorized immigrants granted legal residence under the 1986 Immigration Reform and Control Act
  • Immigrants admitted under any of categories 1-4 who have become naturalized U.S. citizens
  • Individuals admitted as lawful temporary residents under specific visa categories

Read the Methodology section of our November 2018 report on unauthorized immigrants for more details.

Pew Research Center’s estimate of unauthorized immigrants includes more than 2 million immigrants who have temporary permission to be in the United States. (Some also have permission to work in the country.) These immigrants account for about 20% of our national estimate of 10.5 million unauthorized immigrants for 2021.

Although these immigrants have permission to be in the country, they could be subject to deportation if government policy changes. Other organizations and the federal government also include these immigrants in their estimates of the U.S. unauthorized immigrant population.

Immigrants can receive temporary permission to be in the U.S. through the following ways:

Temporary Protected Status (TPS)

In 2021, there were about 500,000 unauthorized immigrants with Temporary Protected Status. This status provides protection from removal or deportation to individuals who cannot safely return to their country because of civil unrest, violence or natural disaster.

Deferred Enforced Departure (DED) is a similar program that grants protection from removal. The number of immigrants with DED is much smaller than the number with TPS.

Deferred Action for Childhood Arrivals (DACA)

Deferred Action for Childhood Arrivals is a program that offers protection from deportation to individuals who were brought to the U.S. as children before June 15, 2007. As of the end of 2021, there were slightly more than 600,000 DACA beneficiaries, largely immigrants from Mexico.

Asylum applicants

Individuals who have applied for asylum but are awaiting a ruling are not legal residents yet but cannot be deported. There are two types of asylum claims: defensive and affirmative.

Defensive asylum applications are generally filed by individuals facing deportation or removal from the U.S. These are processed by the Department of Justice’s Executive Office for Immigration Review. At the end of 2021, there were almost 600,000 applications pending.

Affirmative asylum claims are made by individuals already in the U.S. who are not in the process of being deported or removed. These claims are handled by the U.S. Department of Homeland Security’s Citizenship and Immigration Services (USCIS). At the end of 2021, more than 400,000 applications for affirmative asylum were pending, some covering more than one applicant.

Here are key findings about how the U.S. unauthorized immigrant population changed from 2017 to 2021:

  • The most common country of birth for unauthorized immigrants is Mexico. However, the population of unauthorized immigrants from Mexico dropped by 900,000 from 2017 to 2021, to 4.1 million.
  • There were increases in unauthorized immigrants from nearly every other region of the world – Central America, the Caribbean, South America, Asia, Europe and sub-Saharan Africa.
  • Among U.S. states, only Florida and Washington saw increases in their unauthorized immigrant populations, while California and Nevada saw decreases. In all other states, unauthorized immigrant populations were unchanged.
  • 4.6% of U.S. workers in 2021 were unauthorized immigrants , virtually identical to the share in 2017.

Trends in the U.S. immigrant population

A pie chart showing that unauthorized immigrants were 22% of the U.S. foreign-born population in 2021.

The U.S. foreign-born population was 14.1% of the nation’s population in 2021. That was very slightly higher than in the last five years but below the record high of 14.8% in 1890.

As of 2021, the nation’s 10.5 million unauthorized immigrants represented about 3% of the total U.S. population and 22% of the foreign-born population. These shares were among the lowest since the 1990s.
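
Those two shares follow directly from the figures above. A quick cross-check, assuming a 2021 U.S. population of roughly 332 million (an outside approximation, not a figure quoted in the article):

```python
# Back-of-the-envelope check of the shares quoted above.
us_population = 332e6                    # assumed 2021 U.S. population (approximate)
foreign_born = 0.141 * us_population     # 14.1% of the population, per the article
unauthorized = 10.5e6                    # Pew's 2021 estimate

print(f"share of total population: {unauthorized / us_population:.1%}")  # -> ~3.2%
print(f"share of foreign-born:     {unauthorized / foreign_born:.1%}")   # -> ~22.4%
```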

Between 2007 and 2021, the unauthorized immigrant population decreased by 1.75 million, or 14%.

Meanwhile, the lawful immigrant population grew by more than 8 million, a 29% increase, and the number of naturalized U.S. citizens grew by 49%. In 2021, naturalized citizens accounted for about half (49%) of all immigrants in the country.

Where unauthorized immigrants come from

Unauthorized immigrants living in the U.S. come from many parts of the world, with Mexico being the most common origin country.

A line chart showing that Mexicans are no longer a majority of unauthorized immigrants living in the U.S.

The origin countries for unauthorized immigrants have changed since the population peaked in 2007, before the Great Recession slowed immigration. Here are some highlights of those changes:

The number of unauthorized immigrants from Mexico living in the U.S. (4.1 million in 2021) was the lowest since the 1990s. Mexico accounted for 39% of the nation’s unauthorized immigrants in 2021, by far the smallest share on record.

The decrease in unauthorized immigrants from Mexico reflects several factors:

  • A broader decline in migration from Mexico to the U.S.
  • Mexican immigrants to the U.S. continuing to return to Mexico
  • Expanded opportunities for lawful immigration from Mexico and other countries, especially for temporary agricultural workers.

The rest of the world

The total number of unauthorized immigrants in the U.S. from countries other than Mexico has grown rapidly. In 2021, this population was 6.4 million, up by 900,000 from 2017.

A bar chart showing that the U.S. unauthorized immigrant populations from most world regions grew from 2017 to 2021.

Almost every region in the world had a notable increase in the number of unauthorized immigrants in the U.S. from 2017 to 2021. The largest increases were from Central America (240,000) and South and East Asia (180,000).

After Mexico, the countries of origin with the largest unauthorized immigrant populations in the U.S. in 2021 were:

  • El Salvador (800,000)
  • India (725,000)
  • Guatemala (700,000)
  • Honduras (525,000)

India, Guatemala and Honduras all saw increases from 2017.

The Northern Triangle

Three Central American countries – El Salvador, Honduras and Guatemala – together represented 2.0 million unauthorized immigrants in the U.S. in 2021, or almost 20% of the total. The unauthorized immigrant population from the Northern Triangle grew by about 250,000 from 2017 and about 700,000 from 2007.

Other origin countries

Venezuela was the country of birth for 190,000 U.S. unauthorized immigrants in 2021. This population saw particularly fast growth, from 130,000 in 2017 and 55,000 in 2007.

Among countries with the largest numbers of U.S. unauthorized immigrants, India, Brazil, Canada and former Soviet Union countries all experienced growth from 2017 to 2021.

Some origin countries with significant unauthorized immigrant populations showed no change, notably China (375,000) and the Dominican Republic (230,000).

Detailed table: Unauthorized immigrant population by region and selected country of birth (and margins of error), 1990-2021 (Excel)

U.S. states of residence of unauthorized immigrants

The unauthorized immigrant population in most U.S. states stayed steady from 2017 to 2021. However, four states saw significant changes:

  • Florida (+80,000)
  • Washington (+60,000)
  • California (-150,000)
  • Nevada (-25,000)

States with the most unauthorized immigrants

U.S. state map showing color-coded range of unauthorized immigrant population by state. Six states had 400,000 or more unauthorized immigrants in 2021: California, Texas, Florida, New York, New Jersey and Illinois.

The six states with the largest unauthorized immigrant populations in 2021 were:

  • California (1.9 million)
  • Texas (1.6 million)
  • Florida (900,000)
  • New York (600,000)
  • New Jersey (450,000)
  • Illinois (400,000)

These states have consistently had the most unauthorized immigrants since 1990 and earlier.

At the same time, the unauthorized immigrant population has become less geographically concentrated. In 2021, these six states were home to 56% of the nation’s unauthorized immigrants, down from 80% in 1990.

Detailed table: Unauthorized immigrant population for states (and margins of error), 1990-2021 (Excel)

Detailed table: Unauthorized immigrants and characteristics for states, 2021 (Excel)

Unauthorized immigrants in the labor force

A line chart showing that the number of unauthorized immigrants in the U.S. workforce has remained mostly steady since 2017.

The share of unauthorized immigrants in the U.S. workforce was slightly less than 5% in 2021, compared with 3% of the total U.S. population.

Demographics help explain the difference: The unauthorized immigrant population includes relatively few children or elderly adults, groups that tend not to be in the labor force.

Overall, about 7.8 million unauthorized immigrants were in the U.S. labor force in 2021. That was up slightly from 2019 but smaller than every year from 2007 through 2015.

Detailed table: Unauthorized immigrants in the labor force for states, 2021 (Excel)

Here are some additional findings about unauthorized immigrants as a share of the workforce nationwide and in certain states:

  • Since 2003, unauthorized immigrants have made up 4.4% to 5.4% of all U.S. workers, a relatively narrow range.
  • Fewer than 1% of workers in Maine, Montana, Vermont and West Virginia in 2021 were unauthorized immigrants.
  • Nevada (9%) and Texas (8%) had the highest shares of unauthorized immigrants in the workforce.

Jeffrey S. Passel is a senior demographer at Pew Research Center

Jens Manuel Krogstad is a senior writer and editor at Pew Research Center

Only 1 in 3 US adults think Trump acted illegally in New York hush money case, AP-NORC poll shows

FILE - Former President Donald Trump sits in Manhattan criminal court with his legal team in New York, April 15, 2024. (Jabin Botsford/Pool Photo via AP)

WASHINGTON (AP) — The first criminal trial facing former President Donald Trump is also the one in which Americans are least convinced he committed a crime, a new AP-NORC Center for Public Affairs Research poll finds.

Only about one-third of U.S. adults say Trump did something illegal in the hush money case for which jury selection began Monday, while close to half think he did something illegal in the other three criminal cases pending against him. And they’re fairly skeptical that Trump is getting a fair shake from the prosecutors in the case — or that the judge and jurors can be impartial in cases involving him.

What to know about Trump’s hush money trial:

  • Trump will be first ex-president on criminal trial. Here’s what to know about the hush money case.
  • A jury of his peers: A look at how jury selection will work in Donald Trump’s first criminal trial .
  • Trump is facing four criminal indictments, and a civil lawsuit. You can track all of the cases here.
  • Trump trial day 6 highlights: David Pecker testifies on “catch-and-kill” scheme .

Still, half of Americans would consider Trump unfit to serve as president if he is convicted of falsifying business documents to cover up hush money payments to a woman who said he had a sexual encounter with her.

While a New York jury will decide whether to convict Trump of felony charges, public opinion of the trial proceedings could hurt him politically. The poll suggests a conviction could hurt Trump’s campaign. Trump enters a rematch with President Joe Biden as the first presumptive nominee of a major party — and the first former president — to be under indictment. A verdict is expected in roughly six weeks, well before the Republican National Convention, at which he will accept the GOP nomination.

Trump has made the prosecutions against him a centerpiece of his campaign and argued without evidence that Biden, a Democrat, engineered the cases. That argument helped him consolidate GOP support during the Republican primary, but a conviction might influence how many Americans — including independent voters and people long skeptical of Trump — perceive his candidacy.

FILE - Dr. Kelli Ward, left, chair of the Arizona Republican Party, talks with a supporter of President Donald Trump as they join the crowd at a rally outside the Arizona Capitol, Nov. 7, 2020, in Phoenix. Ward is one of 11 Republicans in Arizona who submitted a document to Congress falsely declaring Donald Trump had beaten Joe Biden in the state during the 2020 presidential election were charged Wednesday, April 24, 2024 with conspiracy, fraud and forgery, marking the fourth state to bring charges against "fake electors." (AP Photo/Ross D. Franklin, File)

“Any conviction should disqualify him,” said Callum Schlumpf, a 31-year-old engineering student and political independent from Clifton, Texas. “It sets a bad example to the rest of the world. I think it misrepresents us, as a country, as to what we believe is important and virtuous.”

Yet, a cloud of doubt hangs over all the proceedings. Only about 3 in 10 Americans feel that any of the prosecutors who have brought charges against Trump are treating the former president fairly. And only about 2 in 10 Americans are extremely or very confident that the judges and jurors in the cases against him can be fair and impartial.

“It’s very obvious political persecution,” said Christopher Ruff, a 46-year-old political independent and museum curator from Sanford, North Carolina. “I’m no fan of Trump in any way, shape or form. Didn’t vote for him, never will. But it’s obviously all political.”

Former President Donald Trump sits in Manhattan criminal court with his legal team in New York, April 15, 2024. (Jabin Botsford/Pool Photo via AP)

Consistent with AP-NORC polls conducted over the past year, the new poll found that about half of Americans say Trump did something illegal regarding the classified documents found at his Florida home , and a similar share think he did something illegal regarding his alleged attempt to interfere in Georgia’s vote count in the 2020 presidential election . The poll also found that nearly half of Americans believe he did something illegal related to his effort to overturn the results of the 2020 election .

Prosecutors in New York will argue that Trump falsified his company’s internal records to hide the true nature of a payment to his former lawyer Michael Cohen. Cohen alleges he was directed by Trump to pay adult film actor Stormy Daniels $130,000 one month before the 2016 election to silence her claims about an extramarital sexual encounter with Trump.

Trump has pleaded not guilty to the 34-count indictment and denied any sexual encounter with Daniels.

The poll found that 35% of Americans say Trump has done something illegal with regard to the hush money allegations. Slightly fewer, about 3 in 10, think he did something unethical without breaking the law. Fourteen percent think he did nothing wrong at all. Those numbers haven’t shifted meaningfully in the year since he was first charged in the case.

Republicans are much less likely than Democrats and independents to say Trump committed a crime in the hush money case.

“He’s done nothing wrong,” said Louie Tsonos, a 43-year-old sales representative and Republican from Carleton, Michigan, a suburb of Detroit. “Because Trump has a lot of money and fame, they want to destroy his reputation. Or at least they are trying to.”

Fewer than one in 10 Republicans say Trump did something illegal in the case, while 4 in 10 Republicans think he did something unethical but did not break the law. About 3 in 10 Republicans, like Tsonos, say he did nothing wrong.

By contrast, about 6 in 10 Democrats and roughly 3 in 10 independents believe he did something illegal.

Monica Brown, a Democrat from Knoxville, Tennessee, thinks Trump did something unethical, though not illegal, in the New York criminal case under way. But a conviction would ruin his credibility to serve as president, she said.

“I don’t believe any president – whether it’s Donald Trump or anyone else – should have a criminal conviction on his record,” said Brown, a 60-year-old veterinary technician and social worker. “Even if it’s related to something like hush money, what respect are they going to get from anyone? Citizens of the country or world leaders, they aren’t going to respect you.”

Nearly 6 in 10 Republicans say they would consider Trump fit to be president even if he were to be convicted of falsifying business documents in the hush money case. About 8 in 10 Democrats say Trump would not be fit to serve in the event of a conviction. About half of independents think he would be unfit to serve, with 22% saying he would be fit and 30% saying they didn’t know enough to say.

“I don’t think any of that stuff has any relevance to his ability to lead this country,” said Jennifer Solich, a Republican from York, Pennsylvania, and retired nuclear engineer who believes Trump would be fit to serve if convicted in the New York case. “There may be some unethical aspects to it. I just think it’s more trivial than what we’re facing as a nation.”

Beaumont reported from Des Moines, Iowa.

The poll of 1,204 adults was conducted April 4-8, 2024, using a sample drawn from NORC’s probability-based AmeriSpeak Panel, which is designed to be representative of the U.S. population. The margin of sampling error for all respondents is plus or minus 3.9 percentage points.

How many research articles are published each year?


SOURCES AND EXCERPTS

  1. Scientific literature: Information overload

    Recent bibliometrics show that the number of published scientific papers has climbed by 8-9% each year over the past several decades. In the biomedical field alone, more than 1 million papers ...

  2. List of countries by number of scientific and technical journal articles

    Scientific and technical journal articles published in 2020, with articles per million people in parentheses:
      1 China: 744,042 (527)
      2 United States: 624,554 (1,875)
      3 United Kingdom: 198,500 (2,959)
      4 India: 191,590 (138)
      5 Germany: 174,524 (2,097)
      6 Italy: 127,502 (2,159)
      7 Japan: 127,408 (1,016)
      8 Canada: 121,111 (3,184)
      9 Russia: 119,195 (819)
      10 France: 112,838 (1,664)
      11 Australia: 106,614 (4,109)
      12 Spain: 104,353 (2,202)
      13 South Korea: 91,030

  3. Publications Output: U.S. Trends and International Comparisons

    This report utilizes data from the Scopus database of global S&E publications and finds that worldwide S&E publication output continues to grow on average at nearly 4% per year; from 2008 to 2018, output grew from 1.8 million to 2.6 million articles. In 2018, China (with a share of 21%) and the United States (with a share of 17%) were the ... (The implied growth arithmetic is worked through in the short sketch after this list.)

  4. Some scientists publish more than 70 papers a year. Here's how ...

    Like Stephen Kings of academia, some researchers are unusually prolific publishers, appearing as an author on as many as 72 scientific papers a year—or about every 5 days. John Ioannidis, a statistician at Stanford University in Palo Alto, California, wondered whether some of them were gaming the system. So he and colleagues dove into the ...

  5. Millions of research papers are published in a year. How do scientists

    And there are millions of scientific papers each year. ... Citation: Millions of research papers are published in a year. How do scientists keep up? (2022, April 27) retrieved 19 April 2024 from ...

  6. The number of papers over time. The total number of papers has surged

    By reviewing the scientific literature published within the period 2006-2022, we observed a significant increase in the number of published articles on Speleomantes behavior, overall obtaining ...

  7. Thousands of scientists publish a paper every five days

    When we excluded conference papers, almost two-thirds belonged to medical and life sciences (86/131). Among the 265, 154 authors produced more than the equivalent of one paper every 5 days for 2 ...

  8. Annual articles published in scientific and technical journals per

    Number of R&D researchers per million people; Ocean science and research funding; R&D researchers per million people vs. GDP per capita; Research & development spending as a share of GDP; Share of clinical trials that report results within a year, by country; Share of government expenditure going to interest payments

  9. More journal articles and fewer books: Publication practices in the

    The number of scientific and scholarly journal articles published each year has been increasing for some time. Kyvik estimated there was a 30% increase in scientific and scholarly publishing between 1980 and 2000. In a later study, Kyvik and Aksnes noted that Web of Science records increased from 500,000 indexed articles in 1981 to 1.5 million indexed articles in 2013.

  10. How many authors are (too) many? A retrospective ...

    In total, 17,015,001 PubMed articles published between 2000 and 2020 were included. Across all 13 publication types, the number of articles per year steadily increased almost threefold, from about 490,000 in 2000 to over 1.3 million in 2020. In the same period, the mean number of authors per publication significantly increased ...

  11. Academics Write Papers Arguing Over How Many People Read (And Cite

    There are a lot of scientific papers out there. One estimate puts the count at 1.8 million articles published each year, in about 28,000 journals. Who actually reads those papers? According to one ...

  12. How Many Journal Articles Have Been Published?

    The Number of Journal Articles Published. Each year, over 2 million new research articles are published in more than 30,000 peer-reviewed journals across all fields of study. With more than 2 million journal articles, the number of academic papers published yearly is staggering.

  13. Measures of Impact for Journals, Articles, and Authors

    5-Year Impact Factor is the average number of times articles published in the previous 5 years were cited in the indexed year. It gives information on the sustained influence of journal publications. JGIM's 2020 score was 6.070, meaning that articles published in 2014-2019 were cited an average of 6 times in 2020. (A small numeric illustration of this definition appears after this list.)

  14. Number of articles published per year from 2001 to 2019 by journal. on

    Journal metadata was analysed using summary descriptive statistics. 58,952 articles published by 40 journals between 1972 and 2021 were found. 62.4% (n = 36,806) were original articles with 66.4% ...

  15. Publications Output: U.S. Trends and International Comparisons

    Collectively, the top 15 countries produced 76% of the world's publication output of 2.9 million articles in 2020 ( Table PBS-1 ). The two countries producing the most S&E publications in 2020 were China (669,744, or 23%) and the United States (455,856, or 16%) ( Figure PBS-2 ). With the exception of Iran replacing Taiwan beginning in 2014 ...

  16. COVID-19 research update: How many pandemic papers have been published

    23 May 2020: Roughly 40% of articles on COVID-19 are preprints, according to an analysis of more than 16,000 papers related to the pandemic. On the preprint servers bioRxiv and medRxiv, COVID-19 ...

  17. Scopus 1900-2020: Growth in articles, abstracts ...

    Abstract. Scientometric research often relies on large-scale bibliometric databases of academic journal articles. Long-term and longitudinal research can be affected if the composition of a database varies over time, and text processing research can be affected if the percentage of articles with abstracts changes. This article therefore assesses changes in the magnitude of the coverage of a ...

  18. The growth of scientific publications in 2020: a ...

    Mass media coverage of COVID-19 pandemic has been exceptional with more than 180,000 articles published each day in 70 languages from March 8 to April 8, 2020. ... Before 2020, International Orthopaedics was receiving less than 3000 papers per year for consideration; approximately 400 were published. ... Because the research begins and ends to ...

  19. How to obtain number of publications by year on a given subject?

    But in mathematics, you could at least easily check how many publications there are each year in a given AMS subject classification. For instance, this search result shows that 23 papers were published on Enumeration in graph theory in 2005. Note: you may need a subscription to access that link. (A freely accessible alternative using the Crossref API is sketched after this list.)

  20. Nearly 80 systematic reviews were published each day

    If only the year of search was given, we considered the date as being 'not reported'. First, we estimated the total number of published SRs per year with 95% confidence intervals (95% CI) by using the selection probability as sampling weights.

  21. How many manuscripts should I peer review per year?

    In plain language, to keep the scholarly peer-review publishing wheel spinning, the authors of each article published in a journal with an 80% rejection rate should review 15 manuscripts (an 80% rejection rate implies roughly 1/(1 - 0.8) = 5 submissions per accepted paper, at about three reviews per submission); and if the same research team published five articles in a given year, they should have reviewed 75 manuscripts. Considering an average of five authors per ...

  22. Number of Academic Papers Published Per Year

    As of 2022, over 5.14 million academic articles are published per year, including short surveys, reviews, and conference proceedings. The number of published articles increased by 2.06% since 2021, when over 5.03 million papers were published. Since 2018, the number of articles published per year jumped by 22.78%, starting from 4.18 million.

  23. AI Index: State of AI in 13 Charts

    This year's AI Index, a 500-page report tracking 2023's worldwide trends in AI, is out. The index is an independent initiative at the Stanford Institute for Human-Centered Artificial Intelligence (HAI), led by the AI Index Steering Committee, an interdisciplinary group of experts from across academia and industry. This year's report covers the rise of multimodal foundation models ...

  24. These 10 institutions published the most papers in Nature and Science

    2. Stanford University. Fractional Count: 39.85 (-24.3%); Article Count: 100. Stanford University appears in the upper echelons of many of the Nature Index Annual Tables rankings, including ...
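
The growth figures quoted in excerpts 1 and 3 above translate into a quick back-of-the-envelope check. The Python sketch below is only illustrative arithmetic on the numbers quoted there (8-9% annual growth, and 1.8 million articles in 2008 rising to 2.6 million in 2018); it assumes nothing beyond compound growth.

    import math

    # Annual growth rate quoted in the "Information overload" excerpt: 8-9% per year.
    for rate in (0.08, 0.09):
        doubling_years = math.log(2) / math.log(1 + rate)
        print(f"At {rate:.0%} annual growth, output doubles roughly every {doubling_years:.1f} years")

    # Scopus-based totals quoted in the publications-output excerpt:
    # 1.8 million articles in 2008, 2.6 million in 2018.
    cagr = (2.6e6 / 1.8e6) ** (1 / 10) - 1
    print(f"Implied compound annual growth, 2008-2018: {cagr:.1%}")  # about 3.7%, i.e. "nearly 4%"

At 8-9% a year the literature doubles roughly every eight to nine years; the slower Scopus-based series (about 3.7% a year) would take roughly 19 years to double, a reminder that the headline growth rate depends heavily on which database and which article types are being counted.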

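Excerpt 19 describes counting publications per year within an AMS subject classification, which generally requires database access. A freely accessible, if rougher, alternative is to ask the public Crossref REST API how many journal articles match a free-text query within a publication-date window. The sketch below is a minimal example of that approach; the query string "graph enumeration" is only a placeholder, and the resulting counts are a proxy that depends on how Crossref matches the query, not a curated subject classification.

    import json
    import urllib.parse
    import urllib.request

    def crossref_count(query: str, year: int) -> int:
        """Count journal articles matching a free-text query in a given year.

        Uses the public Crossref REST API; rows=0 asks for no records,
        only the total-results figure in the response envelope.
        """
        params = urllib.parse.urlencode({
            "query": query,
            "filter": (
                f"type:journal-article,"
                f"from-pub-date:{year}-01-01,until-pub-date:{year}-12-31"
            ),
            "rows": 0,
        })
        with urllib.request.urlopen(f"https://api.crossref.org/works?{params}") as resp:
            return json.load(resp)["message"]["total-results"]

    if __name__ == "__main__":
        # Placeholder subject; swap in whatever topic you want to track.
        for year in range(2003, 2008):
            print(year, crossref_count("graph enumeration", year))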
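
Excerpt 13's definition of the 5-year Impact Factor can also be written out as a small calculation: citations received in the index year by articles published in the previous five years, divided by the number of articles published in those years. The counts below are entirely made up for illustration; they are not JGIM's actual figures.

    # Hypothetical citation and article counts for an index year of 2020.
    citations_in_2020 = {2015: 900, 2016: 1100, 2017: 1300, 2018: 1500, 2019: 1200}  # made-up
    articles_published = {2015: 180, 2016: 190, 2017: 200, 2018: 210, 2019: 220}      # made-up

    five_year_if = sum(citations_in_2020.values()) / sum(articles_published.values())
    print(f"5-year impact factor (2020): {five_year_if:.3f}")  # 6.000 with these invented counts

With these invented counts the score comes out to 6.0, in the same ballpark as the 6.070 quoted for JGIM, purely by construction.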