How to Write a Science Fair Project Report

Lab Reports and Research Essays


Writing a science fair project report may seem like a challenging task, but it is not as difficult as it first appears. This is a format that you may use to write a science project report. If your project included animals, humans, hazardous materials, or regulated substances, you can attach an appendix that describes any special activities your project required. Also, some reports may benefit from additional sections, such as abstracts and bibliographies. You may find it helpful to fill out the science fair lab report template to prepare your report.

Important: Some science fairs have guidelines put forth by the science fair committee or an instructor. If your science fair has these guidelines, be sure to follow them.

  • Title:  For a science fair, you probably want a catchy, clever title. Otherwise, try to make it an accurate description of the project. For example, I could entitle a project, "Determining Minimum NaCl Concentration That Can Be Tasted in Water." Avoid unnecessary words, while covering the essential purpose of the project. Whatever title you come up with, get it critiqued by friends, family, or teachers.
  • Introduction and Purpose:  Sometimes this section is called "background." Whatever its name, this section introduces the topic of the project, notes any information already available, explains why you are interested in the project, and states the purpose of the project. If you are going to state references in your report, this is where most of the citations are likely to be, with the actual references listed at the end of the entire report in the form of a bibliography or reference section.
  • The Hypothesis or Question:  Explicitly state your hypothesis or question.
  • Materials and Methods:  List the materials you used in your project and describe the procedure that you used to perform the project. If you have a photo or diagram of your project, this is a good place to include it.
  • Data and Results:  Data and results are not the same thing. Some reports will require that they be in separate sections, so make sure you understand the difference between the concepts. Data refers to the actual numbers or other information you obtained in your project. Data can be presented in tables or charts, if appropriate. The results section is where the data is manipulated or the hypothesis is tested. Sometimes this analysis will yield tables, graphs, or charts, too. For example, a table listing the minimum concentration of salt that I can taste in water, with each line in the table being a separate test or trial, would be data. If I average the data or perform a statistical test of a null hypothesis, that information would be the results of the project.
  • Conclusion:  The conclusion focuses on the hypothesis or question as it compares to the data and results. What was the answer to the question? Was the hypothesis supported (keep in mind a hypothesis cannot be proved, only disproved)? What did you find out from the experiment? Answer these questions first. Then, depending on your answers, you may wish to explain the ways in which the project might be improved or introduce new questions that have come up as a result of the project. This section is judged not only by what you were able to conclude but also by your recognition of areas where you could not draw valid conclusions based on your data.
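
The data-versus-results distinction above can be sketched in a few lines of code. This is only an illustration: the trial values are made up, and the salt-tasting project is the hypothetical example used earlier in this article.

```python
from statistics import mean, stdev

# Data: the raw numbers recorded in each trial (made-up values for illustration).
# Minimum NaCl concentration (g/L) that could be tasted, one value per trial.
trials = [1.8, 2.1, 1.9, 2.4, 2.0]

# Results: what you compute from the data when analyzing it or testing the hypothesis.
avg = mean(trials)       # average detection threshold across trials
spread = stdev(trials)   # how much the trials varied from each other

print(f"Mean threshold: {avg:.2f} g/L over {len(trials)} trials (sd {spread:.2f})")
```

The list of raw trial values belongs in the data section; the mean and standard deviation computed from them belong in the results section.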

Appearances Matter

Neatness counts, spelling counts, grammar counts. Take the time to make the report look nice. Pay attention to margins, avoid fonts that are difficult to read or are too small or too large, use clean paper, and print the report cleanly on the best printer or copier you can.



Writing a Science Report

With science fair season coming up, as well as many end-of-the-year projects, students are often required to write a research paper or a report on their project. Use this guide to help you through the process, from finding a topic to revising and editing your final paper.

Brainstorming Topics

Sometimes one of the largest barriers to writing a research paper is trying to figure out what to write about. Many times the topic is supplied by the teacher, or the curriculum dictates what the student should research and write about. However, this is not always the case. Sometimes the student is given a very broad concept to write a research paper on, for example, water. Within the category of water, there are many topics and subtopics that would be appropriate. Topics about water can include anything from the three states of water, different water sources, minerals found in water, how water is used by living organisms, the water cycle, or how to find water in the desert. The point is that “water” is a very large topic and would be too broad to be adequately covered in a typical 3-5 page research paper.

When given a broad category to write about, it is important to narrow it down to a topic that is much more manageable. Sometimes research needs to be done in order to find the best topic to write about. (Look for searching tips in “Finding and Gathering Information.”) Listed below are some tips and guidelines for picking a suitable research topic:

  • Pick a topic within the category that you find interesting. It makes it that much easier to research and write about a topic if it interests you.
  • You may find while researching a topic that the details of the topic are very boring to you. If this is the case, and you have the option to do this, change your topic.
  • Pick a topic that you are already familiar with and research further into that area to build on your current knowledge.
  • When researching topics to do your paper on, look at how much information you are finding. If you are finding very little information on your topic or you are finding an overwhelming amount, you may need to rethink your topic.
  • If permissible, always leave yourself open to changing your topic. While researching for topics, you may come across one that you find really interesting and can use just as well as the previous topics you were searching for.
  • Most importantly, does your research topic fit the guidelines set forth by your teacher or curriculum?

Finding and Gathering Information

There are numerous resources out there to help you find information on the topic selected for your research paper. One of the first places to begin research is at your local library. Use the Dewey Decimal System or ask the librarian to help you find books related to your topic. There are also a variety of reference materials, such as encyclopedias, available at the library.

A relatively new reference resource has become available with the power of technology – the Internet. While the Internet allows the user to access a wealth of information that is often more up-to-date than printed materials such as books and encyclopedias, there are certainly drawbacks to using it. It can be hard to tell whether a site contains factual information or just someone’s opinion. A site can also be dangerous or inappropriate for students to use.

You may find that certain science concepts and science terminology are not easy to find in regular dictionaries and encyclopedias. A science dictionary or science encyclopedia can help you find more in-depth and relevant information for your science report. If your topic is very technical or specific, reference materials such as medical dictionaries and chemistry encyclopedias may also be good resources to use.

If you are writing a report for your science fair project, not only will you be finding information from published sources, you will also be generating your own data, results, and conclusions. Keep a journal that tracks and records your experiments and results. When writing your report, you can either write out your findings from your experiments or display them using graphs or charts.

As you are gathering information, keep a working bibliography of where you found your sources. (Look under “Citing Sources” for more information.) This will save you a lot of time in the long run!

Organizing Information

Most people find it hard to take all the information they have gathered from their research and simply write it out in paper form. It is hard to find a starting point and go from the beginning to the end. You probably have several ideas you know you want to put in your paper, but you may be having trouble deciding where these ideas should go. Organizing your information so that new thoughts can be added to a subtopic at any time is a great way to manage what you have learned about your topic. Here are two of the more popular ways to organize information so it can be used in a research paper:

  • Graphic organizers, such as a web or mind map. A mind map states the main topic of your paper, then branches off into as many subtopics as possible about that topic. Enchanted Learning has a list of several different types of mind maps as well as information on how to use them and what topics fit best for each type of mind map and graphic organizer.
  • Outlines. An outline states the main topic, then nests subtopics and supporting details beneath it. For example, part of an outline on the topic of ice might look like this:
      • Subtopic: Glaciers – large masses of ice moving slowly over land
          • Sub-Subtopic: Low temperatures and adequate amounts of snow are needed to form glaciers.
          • Sub-Subtopic: Glaciers move large amounts of earth and debris.
          • Sub-Subtopic: Two basic types of glaciers: valley and continental.
      • Subtopic: Icebergs – large masses of ice floating on liquid water

Different Formats For Your Paper

Depending on your topic and your writing preference, the layout of your paper can greatly enhance how well the information on your topic is displayed.

1. Process. This method is used to explain how something is done or how it works by listing the steps of the process. For most science fair projects and science experiments, this is the best format. Reports for science fairs need the entire project written out from start to finish. Your report should include a title page, statement of purpose, hypothesis, materials and procedures, results and conclusions, discussion, and credits and bibliography. If applicable, graphs, tables, or charts should be included with the results portion of your report.

2. Cause and effect. This is another common science experiment research paper format. The basic premise is that because event X happened, event Y happened.

3. Specific to general. This method works best when trying to draw conclusions about how smaller topics and details are connected to support one main topic or idea.

4. Climactic order. Similar to the “specific to general” category, here details are listed in order from least important to most important.

5. General to specific. This works in a similar fashion to a mind map or outline: the main topic or subtopic is stated first, followed by supporting details that give more information about the topic.

6. Compare and contrast. This method works best when you wish to show the similarities and/or differences between two or more topics. A block pattern is used when you first write about one topic and all its details and then write about the second topic and all its details. An alternating pattern can be used to describe a detail about the first topic and then compare that to the related detail of the second topic. The block pattern and alternating pattern can also be combined to make a format that better fits your research paper.

Citing Sources

When writing a research paper, you must cite your sources! Otherwise you are plagiarizing (claiming someone else’s ideas as your own), which can carry severe penalties, from failing the research paper assignment in primary and secondary grades to failing the entire course (most colleges and universities have this policy). To help you avoid plagiarism, follow these simple steps:

  • Find out what format for citing your paper your teacher or curriculum wishes you to use. One of the most widely used and widely accepted citation formats by scholars and schools is the Modern Language Association (MLA) format. We recommend that you do an Internet search for the most recent format of the citation style you will be using in your paper.
  • Keep a working bibliography when researching your topic. Have a document in your computer files or a page in your notebook where you write down every source that you found and may use in your paper. (You probably will not use every resource you find, but it is much easier to delete unused sources later rather than try to find them four weeks down the road.) To make this process even easier, write the source down in the citation format that will be used in your paper. No matter what citation format you use, you should always write down title, author, publisher, published date, page numbers used, and if applicable, the volume and issue number.
  • When collecting ideas and information from your sources, write the author’s last name at the end of the idea. When revising and formatting your paper, keep the author’s last name attached to the end of the idea, no matter where you move that idea. This way, you won’t have to go back and try to remember where the ideas in your paper came from.
  • There are two ways to use the information in your paper: paraphrasing and quotes. The majority of your paper will be paraphrasing the information you found. Paraphrasing is basically restating the idea being used in your own words. As a general rule of thumb, no more than two of the original words should be used in sequence when paraphrasing information, and synonyms should be used for as many of the words as possible in the original passage without changing the meaning of the main point. Sometimes, you may find something stated so well by the original author that it would be best to use the author’s original words in your paper. When using the author’s original words, use quotation marks only around the words being directly quoted and work the quote into the body of your paper so that it makes sense grammatically. Search the Internet for more rules on paraphrasing and quoting information.
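
The working-bibliography habit described above can be sketched as a small script. This is only an illustration of the record-as-you-go idea: the function names, the fields tracked, and the output shape are made up for this sketch, not an official MLA template or a real citation library.

```python
# A tiny working-bibliography sketch: record every source as you find it,
# then print all entries in one consistent shape at the end.
sources = []

def add_source(author, title, publisher, year, pages):
    """Append one source record to the working bibliography."""
    sources.append({"author": author, "title": title,
                    "publisher": publisher, "year": year, "pages": pages})

def format_entry(s):
    """Render one source as a single, roughly MLA-shaped line."""
    return (f'{s["author"]}. "{s["title"]}." '
            f'{s["publisher"]}, {s["year"]}, pp. {s["pages"]}.')

# Record sources the moment you find them (hypothetical example source):
add_source("Doe, Jane", "Tasting Salt in Water", "Example Press", 2020, "12-15")

# Print the bibliography alphabetized by author, as bibliographies usually are:
for s in sorted(sources, key=lambda s: s["author"]):
    print(format_entry(s))
```

Deleting an unused source later is just removing one record, which is far easier than hunting down a forgotten citation weeks afterward.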

Revising and Editing Your Paper

Revising your paper means improving its content and clarity, refining both what you wrote and how you say it. After you have written the rough draft of your paper, read through it again to make sure the ideas in your paper flow and are cohesive. You may need to add information, delete extra information, use a thesaurus to find a better word to express a concept, reword a sentence, or just make sure your ideas are stated in a logical and progressive order.

After revising your paper, go back and edit it, correcting the capitalization, punctuation, and spelling errors – the mechanics of writing. If you are not 100% positive a word is spelled correctly, look it up in a dictionary. Ask a parent or teacher for help on the proper usage of commas, hyphens, capitalization, and numbers. You may also be able to find the answers to these questions by doing an Internet search on writing mechanics or by checking your local library for a book on writing mechanics.

It is also always a good idea to have someone else read your paper. Because this person did not write the paper and is not familiar with the topic, he or she is more likely to catch mistakes or ideas that do not quite make sense. This person can also give you insights or suggestions on how to reword or format your paper to make it flow better or convey your ideas better.

More Information:

  • Quick Science Fair Guide
  • Science Fair Project Ideas



Science Fair Wizard

  • Pick a topic
  • Determine a problem
  • Investigate your problem
  • Formulate a hypothesis
  • Design an experiment
  • Test your hypothesis
  • Compile your data
  • Write your research paper
  • Construct your exhibit
  • Prepare your presentation
  • Show Time! Pre-science fair checklist
  • Submit your paperwork

Step 8: Write your research paper

Writing your research paper should be a snap! With every step of the process, you have been collecting information for and writing parts of your research paper. As you are composing your research paper, be sure to save your work frequently and in more than one place!

The research paper should include the following sections in this order:

  • Safety sheet
  • Endorsements
  • Table of contents
  • Acknowledgements
  • Purpose & Hypothesis
  • Review of literature
  • Materials and methods of procedure
  • Conclusions
  • Reference list

Keep these points in mind when reviewing your paper.

  • Paper should include a table of contents, abstract, and references.
  • Title page should be in the correct format with signatures.
  • Header information should be in the top left corner with your last name and the title of the project.
  • Paper should be double-spaced, single-sided, with one inch margins on all sides, and in a standard font such as Times New Roman 10 pt. or 12 pt.
  • All pages should be numbered.

Important: Check out the Science Fair Handbook for detailed instructions regarding the content of the research paper. The handbook also includes examples of the title page, abstract, and references.



How to Write a Convincing Science Fair Research Proposal

For students interested in the STEM fields, there are many extracurriculars to choose from. You might join the Math or Science Olympiad team, you could join the Computer Science Club, or you could even volunteer as a naturalist at a local conservation area.

If you are interested in scientific research, you might pursue the opportunity to secure a research assistant position or shadow various scientific researchers. But if you truly want to take the helm and guide your own research, your path may lead you to participating in the science fair.

The science fair is a traditional component of many high school science programs, with participation ranging widely from school to school and science fair to science fair. At some schools, the science fair might be a rite of passage expected of every student. At others, it attracts a handful of dedicated science die-hards.

Regardless, most science fairs feature presentations by students who have completed experiments, demonstrated scientific principles, or undertaken an engineering challenge. Participants are judged by a panel of experts who score each presentation according to a rubric. Traditionally, awards are presented for the top-scoring projects. 

There are many science fairs beyond school-sponsored fairs, too. Regional, state, national, and even international fairs are open to students who qualify through their schools and work their way up through the science fair circuit. Others, like the Regeneron Science Talent Search, are open through an intensive application process.

If you are considering entering a project in the science fair, you will need to think carefully about your subject matter, your experimental design, and the relevance of your work before committing to a project. Many science fairs will even require that you complete a formal research proposal to demonstrate the level of thinking you’ve put into your experiment before beginning it.

In this post, we will outline the purpose of a research proposal for the science fair, the common elements of such a proposal, and how you can go about writing a comprehensive research proposal that is sure to impress.

What is the Purpose of a Research Proposal?

A research proposal has three primary purposes. The first purpose is to explain what you intend to do. This is essentially what you will do in your experiment or project, summarized into a basic overview.

The second function of a research proposal is to explain how you intend to accomplish this. You will give a brief summary of the methods and techniques that you intend to employ, and list the materials that you will need to do so.

The final purpose of a research proposal is to explain why this project should be done. Here, you will discuss the importance or relevance of the study. Basically, in this portion of your proposal you’ll answer the question, “so what?”

Now that you know the aim of a research proposal, you can begin to prepare to write one.

Step-By-Step Guide to Creating a Research Proposal

1. Narrow Down the Subject Area

Before you go into your project in any sort of depth, you’ll need a fairly good idea of what your project’s focus will be. In order to narrow this down, you should consider a few different angles.

First, ask yourself what you’re interested in. You will be more likely to feel engaged and passionate about a project that is genuinely interesting to you, so take some time to carefully consider the areas of science that you find the most fascinating. Even if they don’t seem particularly well-suited to a science fair project at first, you never know what you might be able to come up with through some collaboration with mentors or through some background research. Keep a running list of areas of science that sincerely fascinate you.

Next, consider any specialized labs or equipment to which you might have access. Does your best friend’s mother work in a lab with highly specialized tools? Does your school have a state-of-the-art wind tunnel or fully equipped greenhouse? These are all possible resources you can utilize if you want your project to truly stand out. Of course, it’s completely possible to choose a project that shines on its own without any specialized equipment, but if you’re looking for every boost you might get, having access to specialized technology can be a great advantage to make your project truly unique.

Finally, consider if you know a teacher or other professional who might be willing to mentor you. You can also seek out a mentor specifically if you can’t think of anyone obvious. Having a mentor in your field will provide you with invaluable insight into practice and past research in the field.

In the ideal world, you would find a project that maximizes all of your resources, including your interests, access to equipment, and an enthusiastic mentor. Don’t worry if you can’t secure all three, though. Plenty of science fair participants go on to do quite well relying on only their own dogged determination and commitment to their subject matter.

2. Decide How Your Experiment Will Be Done

If you have a mentor, teacher, or adviser willing to consult with you, schedule a time to sit down with them and discuss what you’d like to do. If you can’t find someone more experienced than you, even discussing your ideas with a trusted classmate, parent, or older sibling is a good idea. Sometimes the outside perspective will help to fine-tune your design or identify areas for improvement.

You should also begin some research at this stage to learn how similar projects have been conducted in the past. Use the results and limitations from these experiments to help guide your own experimental design.

As you do so, keep in mind any limiting factors. Remember to consider what equipment you have at your disposal, the time commitment you’re able to make, and the materials that you’ll need to acquire.

In addition, be sure to check the rules of the specific science fairs you’ll be attending. Some have strict regulations designed to keep you safe, like limiting the ways in which potentially hazardous chemicals can be used. Other rules are designed to keep the environment safe, like placing restrictions on how you dispose of foreign substances or non-native species. There are also ethical rules that govern the use of human participants or vertebrate animals in your studies. Make sure to check which rules govern the fair in which you’re participating and how they might impact your ideas before you put any more thought into your project.


3. Background Research

Your background research should be fairly comprehensive at this point and will be the single largest component of your research proposal. You should focus your research on relevant past studies that inform your work, either by identifying areas for future research or by identifying limiting factors in their own research. You should also research past experiments that support or attempt to disprove your working theory.

Finally, your research should clearly show why the project is relevant. What is important about it? What does it add to the field? Why should we care? Make sure that you can communicate the scientific value of the project you’re proposing.

4. Write Your Proposal

Once you’ve chosen a project, decided how you’ll undertake it, and done the relevant background research, you are finally able to begin drafting your research proposal. Check with your school or science fair to see if there is a specific format or form that you’re required to adhere to. If not, and you are producing a general research proposal, follow this format:

Abstract:

This should be a one-paragraph description of the project, your hypothesis, and the goals of your experiment. Here, you provide a brief overview of your project for anyone who is skimming your work.

Introduction/Literature Review:

This is the bulk of your proposal. In your literature review, you present what is currently known about your project’s focus and summarize relevant research that has been done in the field. You will discuss previous discoveries in your field, including how they were made and what they lend to your current work.

You will also show what is interesting and ground-breaking about your research idea. In this section you will need to summarize why your project is relevant, what makes it important, and how the field or current base of knowledge could change or be improved due to your project’s results.

As you write your literature review, you’ll need to be sure that you’re using high-quality, accurate sources. It’s best to rely on scholarly journal articles or reference books. Be wary of using the Internet, as many sources are unverified. If you are using online resources, be sure to verify their source. Published, peer-reviewed scholarly articles are best.

It’s also important to include proper citations for every source cited. You’ll need to list all your sources in the appropriate format in your bibliography along with citing them in the text of your proposal when you quote directly or reference specific data. If you aren’t sure how to cite properly, check out the Scientific Style and Format page.


Hypothesis:

This is the working theory that you are testing and what you expect the results will be, based on what you have learned through your background research.

Materials and Methods:

In this section you’ll provide a precise, in-depth description of how you plan to test your hypothesis and what tools or materials you’ll need to do so. Summarize your experimental design, specifically referring to how you will control and replicate the experiment. Also list the equipment and materials that you will need to undertake your experiment.


Significance:

Here, you will reiterate how your proposed research will advance knowledge in the scientific field and outline any potential long-term impact that your work could have on theory or practice within the field.


Bibliography:

List all sources used in the appropriate format. Refer to the Scientific Style and Format page if you aren’t sure how to do so.

What Happens After I Submit a Research Proposal?

After you submit the research proposal, it will be reviewed by your teacher or a science fair administrator or adviser. It will be approved, rejected, or returned for revisions based on its feasibility, value to the scientific field, and adherence to the science fair rules and regulations.

While larger, more selective science fairs will have to select only a limited number of candidates based on the merits of their research proposals, it is fairly uncommon for a science fair research proposal to get completely denied at the school level. Usually, in these cases, your proposal will be returned to you with requests for edits or further clarification. You have most likely consulted with your teacher or adviser throughout the process of developing your proposal, so nothing should come as a complete surprise when you receive feedback.

If your proposal is rejected and you don’t receive constructive feedback, don’t be shy about respectfully requesting some feedback to help you shape a better, more effective proposal in the future.

If your proposal is returned for revisions, you should feel encouraged. While you still have some work to do, this is generally a sign that with a few tweaks, your proposal will be accepted. Meet with a teacher, mentor, or adviser to review the revisions requested and address each thoroughly before returning the proposal for another round of review.

If your proposal is accepted, congratulations! It’s time to get to work. While your proposal itself was probably a time-consuming endeavor, your research will ultimately be easier for having taken the time and care to craft a precise proposal. Your research will be more focused and likely a smoother process due to all your careful planning, and you will be able to use large chunks of your written work in your final scientific report.

Don’t be intimidated if you’re getting ready to write a science fair research proposal. It can be a long process to fine-tune your project and focus your proposed research, but the work that you put in now ultimately makes your job easier in the long run.

Looking for help navigating the road to college as a high school student? Download our  free guide for 9th graders  and our  free guide for 10th graders . Our guides go in-depth about subjects ranging from  academics ,  choosing courses ,  standardized tests ,  extracurricular activities ,  and much more !

For more information about the science fair and opportunities for students interested in the STEM fields, see these valuable CollegeVine posts:

  • How to Spend Your Summer As a Prospective Math Major (And Why Math is a Great Career Path)
  • A Guide to STEM Scholarships
  • Summer Activities for the Future BS/MD Applicant
  • Ultimate Guide to the AP Research Course and Assessment
  • How to Choose a Project for Your AP Research Course
  • How to Get a Research Assistant Position in High School
  • An Introduction to the AP Capstone Diploma
  • How to Choose a Winning Science Fair Project Idea
  • How to Plan and Implement an Independent Study in High School
  • A Beginner’s Guide to the Science Fair
  • Guide to National Youth Science Camp

Want access to expert college guidance — for free? When you create your free CollegeVine account, you will find out your real admissions chances, build a best-fit school list, learn how to improve your profile, and get your questions answered by experts and peers—all for free. Sign up for your CollegeVine account today to get a boost on your college journey.


Related CollegeVine Blog Posts



Do a Science Fair Project!

How do you do a science fair project?


Ask a parent, teacher, or other adult to help you research the topic and find out how to do a science fair project about it.

Test, answer, or show?

Your science fair project may do one of three things:

Test an idea (or hypothesis).

Answer a question.

Show how nature works.

Topic ideas:

Space topics:

How do the constellations change in the night sky over different periods of time?

How does the number of stars visible in the sky change from place to place because of light pollution?

Learn about and demonstrate the ancient method of parallax to measure the distance to objects such as stars and planets.

Study different types of stars and explain different ways they end their life cycles.

Earth topics:

Cross-section drawing of the ocean at the mouth of a river, with heavier saltwater slipping in under the fresh water.

How do the phases of the Moon correspond to the changing tides?

Demonstrate what causes the phases of the Moon.

How does the tilt of Earth’s axis create seasons throughout the year?

How do weather conditions (temperature, humidity) affect how fast a puddle evaporates?

How salty is the ocean?

Solar system topics:

Drawing of the solar system.

How does the size of a meteorite relate to the size of the crater it makes when it hits Earth?

How does the phase of the Moon affect the number of stars visible in the sky?

Show how a planet’s distance from the Sun affects its temperature.

Sun topics:

Observe and record changes in the number and placement of sun spots over several days. DO NOT look directly at the Sun!

Make a sundial and explain how it works.

Show why the Moon and the Sun appear to be the same size in the sky.

How effective are automobile sunshades?

Study and explain the life span of the Sun relative to other stars.

Drawing of a science fair project display.

Pick a topic.

Try to find out what people already know about it.

State a hypothesis related to the topic. That is, make a cause-and-effect statement that you can test using the scientific method.

Explain something.

Make a plan to observe something.

Design and carry out your research, keeping careful records of everything you do or see.

Create an exhibit or display to show and explain to others what you hoped to test (if you had a hypothesis) or what question you wanted to answer, what you did, what your data showed, and your conclusions.

Write a short report that states the same things as the exhibit or display and also gives the sources of your initial background research.

Practice describing your project and results, so you will be ready for visitors to your exhibit at the science fair.

Follow these steps to a successful science fair entry!

If you liked this, you may like:


Learn STEM by Doing (and having fun)!


The Ultimate Science Fair Project Guide – From Start to Finish

When our daughter entered her first science fair, we kept seeing references to the Internet Public Library Science Fair Project Resource Guide.  However, the IPL2 permanently closed… taking the guide with it.  Bummer!  After participating in over a half-dozen elementary school science fairs (including a first-place finish!), we created our own guide to help other students go from start to finish in their next science fair project.  If this is your first science fair, have fun!  If you’ve done it before, we hope this is your best one!  Let’s science!

*Images from Unsplash

How to Use the STEMium Science Fair Project Ultimate Guide


If you are just starting off and this is your first science fair, here’s how to get started:

  • Start with the STEMium Science Fair Project Roadmap . This is an infographic that “maps” out the process from start to finish and shows all the steps in a visual format.
  • Getting Started – Why Do a Science Fair Project . Besides walking through some reasons to do a project, we also share links to examples of national science fair competitions, what’s involved and examples of winning science fair experiments .  *Note: this is where you’ll get excited!!
  • The Scientific Method – What is It and What’s Involved . One of the great things about a science fair project is that it introduces students to an essential process/concept known as the scientific method.  This is simply the way in which we develop a hypothesis to test.
  • Start the Process – Find an Idea . You now have a general idea of what to expect at the science fair, examples of winning ideas, and know about the scientific method.  You’re ready to get started on your own project.  How do you come up with an idea for a science fair project?  We have resources on how to use a Google tool , as well as some other strategies for finding an idea.
  • Experiment and Build the Project . Time to roll up those sleeves and put on your lab coat.
  • Other Resources for the Fair. Along the way, you will likely encounter challenges or get stuck.  Don’t give up – it’s all part of the scientific process.  Check out our STEMium Resources page for more links and resources from the web.  We also have additional experiments like the germiest spot in school , or the alka-seltzer rocket project that our own kids used.

Getting Started – Why Do a Science Fair Project

For many students, participating in the science fair might be a choice that was made FOR you.  In other words, something you must do as part of a class.  Maybe your parents are making you do it.  For others, maybe it sounded like a cool idea.  Something fun to try.  Whatever your motivation, there are a lot of great reasons to do a science fair project.

  • Challenge yourself
  • Learn more about science
  • Explore cool technology
  • Make something to help the world! (seriously!)
  • Win prizes (and sometimes even money)
  • Do something you can be proud of!

Many students will participate in a science fair at their school.  But there are also national competitions that draw thousands of participants, along with engineering fairs, maker events, and hackathons.  It’s an exciting time to be a scientist!!  The list below gives examples of national events.

  • Regeneron Science Talent Search
  • Regeneron International Science and Engineering Fair
  • Google Science Fair
  • Conrad Challenge
  • Microsoft Imagine Cup
  • JSHS Program
  • Exploravision

What’s the Scientific Method?

Before we jump into your project, it’s important to introduce a key concept:  The Scientific Method .  The scientific method is the framework scientists use to answer their questions and test their hypotheses.  The figure below illustrates the steps you’ll take to get to the end, but it starts with asking a question (you’ve already finished the first step!).

scientific method - for the science fair

After we find a problem or idea to tackle, and dig into some background research, we make an educated guess at a potential solution.  This is known as our hypothesis.

Example of a Hypothesis

My brother can hold his breath underwater longer than I can (“our problem”) –> how can I hold my breath longer? (“our question”) –>  if I drink soda with caffeine before I hold my breath, I will be able to stay underwater longer (“our solution”).  Our hypothesis is that using caffeine before we go underwater will increase the time we hold our breath.  We’re not sure if that is a correct solution or not at this stage – just taking a guess.

Once we have a hypothesis, we design an experiment to TEST it.  We change variables/conditions one at a time while keeping everything else the same, so we can compare the outcomes.

Experimental Design Example

Using our underwater example, maybe we will test different drinks and count how long I can hold my breath.  Maybe we can also see if someone else can serve as a “control” – someone who holds their breath but does not drink caffeine.  For the underwater experiment, we can time in seconds how long I hold my breath before I have a drink and then time it again after I have my caffeine drink.  I can also time how long I stay underwater when I have a drink without caffeine.

Then, once we finish with our experiment, we analyze our data and develop a conclusion.

  • How many seconds did I stay underwater in the different situations? 
  • Which outcome is greater?  Did caffeine help me hold my breath longer? 

Finally, and most importantly, we present our findings. Imagine putting together a poster board with a chart showing the number of seconds I stayed underwater in the different conditions.
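If you record your times in a simple table, a few lines of Python can do the averaging for you. The numbers below are made-up placeholders for the underwater example, not real measurements:

```python
# Hypothetical breath-hold times in seconds (placeholder data, not real).
times = {
    "no drink": [42, 45, 40],
    "caffeine-free soda": [44, 43, 46],
    "caffeinated soda": [48, 51, 47],
}

# Average each condition so the outcomes can be compared side by side.
for condition, trials in times.items():
    average = sum(trials) / len(trials)
    print(f"{condition}: average {average:.1f} seconds over {len(trials)} trials")

# The condition with the highest average is the headline of our chart.
best = max(times, key=lambda c: sum(times[c]) / len(times[c]))
print("Longest average hold:", best)
```

Swap in your own measurements and the comparison logic stays exactly the same.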

Hopefully you have a better sense of the scientific method.  If you are completing a science fair project, sticking with these steps is super important.  Just in case there is any lingering confusion, here are some resources for learning more about the scientific method:

  • Science Buddies – Steps of the Scientific Method
  • Ducksters – Learn About the Scientific Method
  • Biology4kids – Scientific Method
  • National Institute of Environmental Health Sciences – Scientific Method

What Science Fair Project Should I Do?

Coming up with a good idea is the hardest part of any creative work, and science is no different.

Just know that if you can get through the idea part, the rest of the science fair is relatively smooth sailing.  Remember to keep an open mind and a positive outlook .  Each year hundreds of thousands of kids, teenagers and college students come up with new projects and ideas to test.  You’ve got this!

What Makes a Great Science Fair Project?  Start with a Problem To Solve


As we discuss below, good science experiments attempt to answer a QUESTION.  Why is the sky blue?  Why does my dog bark at her reflection?  First, we will step through some ways to find TESTABLE QUESTIONS.  The question you create is what you will work on for your science fair project.  Pick something fun, something interesting and something that you are excited about.  Not sure what that looks like?  Step through some of the tips below for help.

Use the Google Science Fair Idea Generator

Are you surprised Google made a tool for science fair projects??  Our post called the low-stress way to find a science fair project gives a more in-depth overview of how to use it.  It’s a great first stop if you’re early in the brainstorming process.

Answer your own questions


  • What type of music makes you run faster?
  • Can boys hold their breath underwater longer than girls?
  • How can I be sure the sandwich I bought is gluten free?
  • If we plant 100 trees in our neighborhood, will the air be cleaner?

Still stuck? Get inspiration from other science fair projects


Check out the Getting Started section and look at some of the winning science project ideas, our STEMium experiments and our Resource page.  We’ve presented a ton of potential idea starters for you – take time to run through some of these, but our suggestion is to give yourself a deadline to pick an idea.  Going through the lists could take longer than you think, and in many cases it’s just better to pick something and go for it!  The next section will take you through how to create testable questions for your project.

Starting Your Project: Find A Testable Question

The best experiments start with a question.  Taking that a step further, the questions you use for your science fair project should be TESTABLE.  That means something you can measure.  Let’s look at an example.  Say I’m super excited about baking.  OH YEA!!  I love baking.  Specifically, baking cakes.  In fact, I love baking cakes so much that I want to do a science project related to cakes.  Here are two questions on cakes.  Which one would be more useful for a science fair project?

1)  Can eating cake before a test improve your score?

2)  Why isn’t carrot cake more popular than chocolate cake?

The second question isn’t necessarily a bad question to pick.  You could survey people and perhaps tackle the question that way.  However, chances are you will get a lot of different answers and it will probably take a lot of surveys to start to pick up a trend.

The first question, though, might be a little easier.  How would you test it?  Maybe you pick one type of cake and one test that you give people.  If you can get five people to take the test after eating cake and five people to take it with no cake, you can compare the results.  There might be other variables beyond cake that you could test (for example: age, sex, education).  But you can see that the first question is probably a little easier to test, and it is also easier to come up with a hypothesis for.

At this point, you’ve got an idea.  That was the hard part!  Now it’s time to think a little more about that idea and focus it into a scientific question that is testable and that you can create a hypothesis around.

What makes a question “testable”?

Testable questions are ones that can be measured and should focus on what you will change.  In our first cake question, we would be changing whether or not people eat cake before a test.  If we are giving them all the same test and in the same conditions, you could compare how they do on the test with and without cake.  As you are creating your testable question, think about what you WILL CHANGE (cake) and what you are expecting to be different (test scores).  Cause and effect.  Check out this reference on testable questions for more details.
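To make the cause-and-effect idea concrete, here is a tiny Python sketch comparing two groups for the cake question. The scores are invented placeholders; in a real project you would plug in the results from your own ten test-takers:

```python
# Hypothetical test scores (placeholder numbers, not real data).
# We change one thing (cake before the test) and measure one thing (score).
cake_group = [78, 85, 90, 74, 88]      # ate cake before the test
no_cake_group = [80, 79, 84, 77, 82]   # took the test with no cake

avg_cake = sum(cake_group) / len(cake_group)
avg_no_cake = sum(no_cake_group) / len(no_cake_group)

print(f"Average with cake:    {avg_cake:.1f}")
print(f"Average without cake: {avg_no_cake:.1f}")
print("Difference:", round(avg_cake - avg_no_cake, 1))
```

A difference between the two averages is the kind of measurable outcome that makes a question testable.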

Outline Your Science Project – What Steps Should I Take?


Do Background Research / Create Hypothesis

Science experiments typically start with a question (example: Which cleaning solution eliminates more germs?).  The questions might come up because of a problem.  For example, maybe you’re an engineer and you are trying to design a new line of cars that can drive at least 50 mph faster.  Your problem is that the car isn’t fast enough.  After looking at what other people have tried to do to get the car to go faster, and thinking about what you can change, you try to find a solution or an answer.  When we talk about the scientific method, the proposed answer is referred to as the HYPOTHESIS.

Good places to start your background research include:

  • Science Buddies
  • National Geographic

The information you gather to answer these research questions can be used in your report or in your board.  This will go in the BACKGROUND section.  For resources that you find useful, make sure you note the web address where you found it, and save in a Google Doc for later.

Additional Research Tips

For your own science fair project, there will likely be rules already set by the judges, teachers or school.  Make sure you get familiar with the rules FOR YOUR FAIR and what needs to be completed to participate.  Typically, you will research your topic, complete experiments, analyze data, draw conclusions and then present the work in a written report and on a poster board.  Make a checklist of all these “to do” items.  Key things to address:

  • Question being answered – this is your testable question
  • Hypothesis – what did you come up with and why
  • Experimental design – how are you going to test your hypothesis
  • Conclusions – why did you reach these and what are some alternative explanations
  • What would you do next? Answering a testable question usually leads to asking more questions and judges will be interested in how you think about next steps.

Need more help?  Check out these additional resources on how to tackle a science fair project:

  • Developing a Science Fair Project – Wiley
  • Successful Science Fair Projects – Washington University
  • Science Fair Planning Guide – Chattahoochee Elementary

Experiment – Time to Test That Hypothesis

Way to go!  You’ve found a problem and identified a testable question.  You’ve done background research and even created a hypothesis.  It’s time to put it all together now and start designing your experiment.  Two experiments we have outlined in detail – germiest spot in school and alka-seltzer rockets – help show how to set up experiments to test variable changes.

The folks at ThoughtCo have a great overview on the different types of variables – independent, dependent and controls.  You need to identify which ones are relevant to your own experiment and then test to see how changes in the independent variable impact the dependent variable.  Sounds hard?  Nope.  Let’s look at an example.  Let’s say our hypothesis is that flipping a coin in cold weather will produce more heads than tails.  The independent variable is the temperature.  The dependent variable is the number of heads or tails that show up.  Our experiment could involve flipping a coin fifty times at different temperatures (outside, in a sauna, at room temperature) and seeing how many heads/tails we get.
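Here is a sketch of the bookkeeping for the coin-flip example. Since we don't have real flips to record, it simulates a fair coin with Python's random module; in your own experiment you would replace the simulated flips with the flips you actually observed:

```python
import random

random.seed(1)  # fixed seed so the demo run is repeatable

# Independent variable: temperature condition.
# Dependent variable: number of heads out of 50 flips.
conditions = ["outside (cold)", "room temperature", "sauna (hot)"]
flips_per_condition = 50

results = {}
for condition in conditions:
    # Simulated fair coin; swap in your recorded flips here.
    heads = sum(random.choice("HT") == "H" for _ in range(flips_per_condition))
    results[condition] = heads

for condition, heads in results.items():
    print(f"{condition}: {heads} heads out of {flips_per_condition} flips")
```

Keeping the flip count identical in every condition is what lets you compare the head counts directly.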

One other important point – write down all the steps you take and the materials you use!!  This will go in your final report and project board.  Example – for our coin flipping experiment, we will have a coin (or more than one) and a thermometer to keep track of the temperature in each environment.  Take pictures of the flipping too!

Analyze Results – Make Conclusions

Analyzing means adding up our results and putting them into pretty pictures.  Use charts and graphs whenever you can.  In our last coin flipping example, you’d want to include bar charts of the number of heads and tails at different temperatures.  If you’re doing some other type of experiment, take pictures during the different steps to document everything.

This is the fun part….  Now we get to see if we answered our question!  Did the weather affect the coin flipping?  Did eating cake help us do better on our test??  So exciting!  Look through what the data tells you and try to answer your question.  Your hypothesis may or may not be correct.  Either way, the most important part is the process and what you learned.  Check out these references for more help:

  • How to make a chart or graph in Google Sheets
  • How to make a chart in Excel
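If spreadsheet software isn't handy, you can even print a rough bar chart straight from Python with no extra libraries. The head counts below are invented placeholders for the coin-flip example:

```python
# Quick text "bar chart" of heads counts at each temperature.
# The counts are made-up placeholders, not real data.
heads_counts = {"cold": 27, "room temp": 24, "hot": 22}

for condition, heads in heads_counts.items():
    bar = "#" * heads  # one mark per head flipped
    print(f"{condition:>10}: {bar} ({heads})")
```

A chart like this makes it easy to spot at a glance which condition produced the most heads.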

Presentation Time – Set Up Your Board, Practice Your Talk

Personally, the presentation is my favorite part!  First, you get to show off all your hard work and look back at everything you did!  Additionally, science fair rules should outline the specific sections that need to be in the report, and in the poster board – so, be like Emmett from Lego Movie and read the instructions.  Here’s a loose overview of what you should include:

  • Title – what is it called.
  • Introduction / background – here’s why you’re doing it and helping the judges learn a bit about your project.
  • Materials/Methods – what you used and the steps in your experiment. This is so someone else could repeat your experiment.
  • Results – what was the outcome? How many heads/tails?  Include pictures and graphs.
  • Conclusions – was your hypothesis correct? What else would you like to investigate now?  What went right and what went wrong?
  • References – if you did research, where did you get your information from? What are your sources?

The written report will be very similar to the final presentation board.  The board that you’ll prepare is usually a three-panel board set up like the picture shown below.

science fair board

To prepare for the presentation, you and your partner should be able to talk about the following:

  • why you did the experiment
  • the hypothesis that was tested
  • the data results
  • the conclusions.

It’s totally OK to not know an answer.  Just remember this is the fun part!

And that’s it!  YOU DID IT!! 

Science fair projects have been great opportunities for our kids to not only learn more about science, but to also be challenged and push themselves.  Independent projects like these are usually a great learning opportunity.  Has your child completed a science fair project that they are proud of?  Include a pic in the comments – we love to share science!!  Please also check out our STEMium Resources page for more science fair project tips and tricks .

STEMomma is a mother & former scientist/educator. She loves to find creative, fun ways to help engage kids in the STEM fields (science, technology, engineering and math).  When she’s not busy in meetings or carpooling kids, she loves spending time with the family and dreaming up new experiments  or games they can try in the backyard.


Science Bob


Okay, this is the hardest part of the whole project…picking your topic. But here are some ideas to get you started. Even if you don’t like any, they may inspire you to come up with one of your own. Remember, check all project ideas with your teacher and parents, and don’t do any project that would hurt or scare people or animals. Good luck!

  • Does music affect animal behavior?
  • Does the color of food or drinks affect whether or not we like them?
  • Where are the most germs in your school? ( CLICK for more info. )
  • Does music have an effect on plant growth?
  • Which kind of food do dogs (or any animal) like best?
  • Which paper towel brand is the strongest?
  • What is the best way to keep an ice cube from melting?
  • What level of salt works best to hatch brine shrimp?
  • Can the food we eat affect our heart rate?
  • How effective are child-proof containers and locks?
  • Can background noise levels affect how well we concentrate?
  • Does acid rain affect the growth of aquatic plants?
  • What is the best way to keep cut flowers fresh the longest?
  • Does the color of light used on plants affect how well they grow?
  • What plant fertilizer works best?
  • Does the color of a room affect human behavior?
  • Do athletic students have better lung capacity?
  • What brand of battery lasts the longest?
  • Does the type of potting soil used in planting affect how fast the plant grows?
  • What type of food allows mold to grow the fastest?
  • Does having worms in soil help plants grow faster?
  • Can plants grow in pots if they are sideways or upside down?
  • Does the color of hair affect how much static electricity it can carry? (test with balloons)
  • How much weight can the surface tension of water hold?
  • Can some people really read someone else’s thoughts?
  • Which soda decays fallen-out teeth the most?
  • What light brightness makes plants grow the best?
  • Does the color of birdseed affect how much birds will eat it?
  • Do natural or chemical fertilizers work best?
  • Can mice learn? (you can pick any animal)
  • Can people tell artificial smells from real ones?
  • What brands of bubble gum produce the biggest bubbles?
  • Does age affect human reaction times?
  • What is the effect of salt on the boiling temperature of water?
  • Does shoe design really affect an athlete’s jumping height?
  • What type of grass seed grows the fastest?
  • Can animals see in the dark better than humans?

Didn’t see one you like? Don’t worry…look over them again and see if they give you an idea for your own project that will work for you. Remember, find something that interests you, and have fun with it.

To download and print this list of ideas CLICK HERE .


How to write a scientific report at university

David Foster, professor in science and engineering at the University of Manchester, explains the best way to write a successful scientific report.


David H Foster


At university, you might need to write scientific reports for laboratory experiments, computing and theoretical projects, and literature-based studies – and some eventually as research dissertations. All have a similar structure modelled on scientific journal articles. Their special format helps readers to navigate, understand and make comparisons across the research field.

Scientific report structure

The main components are similar for many subject areas, though some sections might be optional.

Title

If you can choose a title, make it informative and not more than around 12 words. This is the average for scientific articles. Make every word count.

Abstract

The abstract summarises your report’s content in a restricted word limit. It might be read separately from your full report, so it should contain a micro-report, without references or personalisation.

Usual elements you can include:  

  • Some background to the research area.
  • Reason for the work.
  • Main results.
  • Any implications.

Ensure you omit empty statements such as “results are discussed”, as they usually are.  


Introduction

The introduction should give enough background for readers to assess your work without consulting previous publications.

It can be organised along these lines:  

  • An opening statement to set the context.  
  • A summary of relevant published research.
  • Your research question, hypothesis or other motivation.  
  • The purpose of your work.
  • An indication of methodology.
  • Your outcome.

Choose citations to any previous research carefully. They should reflect priority and importance, not necessarily recency. Your choices signal your grasp of the field.  

Literature review  

Dissertations and literature-based studies demand a more comprehensive review of published research than is summarised in the introduction. Fortunately, you don’t need to examine thousands of articles. Just proceed systematically.  

  • Use two to three published reviews to familiarise yourself with the field.  
  • Use authoritative databases such as Scopus or Web of Science to find the most frequently cited articles.  
  • Read these articles, noting key points. Experiment with their order and then turn them into sentences, in your own words.  
  • Get advice about expected review length and database usage from your individual programme.

Aims and objectives  

Although the introduction describes the purpose of your work, dissertations might require something more accountable, with distinct aims and objectives.

The aim or aims represent the overall goal (for example, to land people on the moon). The objectives are the individual tasks that together achieve this goal (build rocket, recruit volunteers, launch rocket and so on).

Method

The method section must give enough detail for a competent researcher to repeat your work. Technical descriptions should be accessible, so use generic names for equipment with proprietary names in parentheses (model, year, manufacturer, for example). Ensure that essential steps are clear, especially any affecting your conclusions.

Results

The results section should contain mainly data and analysis. Start with a sentence or two to orient your reader. For numeric data, use graphs over tables and try to make graphs self-explanatory. Leave any interpretations for the discussion section.

Discussion

The purpose of the discussion is to say what your results mean. Useful items to include:

  • A reminder of the reason for the work.
  • A review of the results. Ensure you are not repeating the results themselves; this should be more about your thoughts on them.
  • The relationship between your results and the original objective.
  • Their relationship to the literature, with citations.  
  • Any limitations of your results.  
  • Any knowledge you gained, new questions or longer-term implications.

The last item might form a concluding paragraph or be placed in a separate conclusion section. Refer to your future research plans only if your report is an internal document.

Try to finish with a “take-home” message complementing the opening of your introduction. For example: “This analysis has shown the process is feasible, but cost will decide its acceptability.”  



Acknowledgements

If appropriate, thank colleagues for their advice, for reading your report and for technical support. Make sure that you secure their agreement first. Thank any funding agency. Avoid emotional declarations that you might later regret. That is all that is required in this section.


References

Giving references ensures that other authors’ ideas, procedures, results and inferences are credited. Use Web of Science or Scopus, as mentioned earlier. Avoid databases that give online sources without journal publication details, because they might be unreliable.

Don’t refer to Wikipedia. It isn’t a citable source.  

Use one referencing style consistently and make sure it matches the required style of your degree or department. Choose either numbers or author and year to refer to the full references listed near the end of your report. Include all publication details, not just website links. Every reference should be cited in the text.  

Figures and tables  

Each figure should have a caption below with a label, such as “Fig. 1”, with a title and a sentence or two about what it shows. Similarly for tables, except that the title appears above. Every figure and table should be cited in the text.

Theoretical studies  

More flexibility is possible with theoretical reports, but extra care is needed with logical development and mathematical presentation. An introduction and discussion are still needed, and possibly a literature review.

Final steps

Check that your report satisfies the formatting requirements of your department or degree programme. Check for grammatical errors, misspellings, informal language, punctuation, typos and repetition or omission.

Ask fellow students to read your report critically. Then rewrite it. Put it aside for a few days and read it afresh, making any new edits you’ve noticed. Keep up this process until you are happy with the final report. 


Data Science Journal


The FAIR Assessment Conundrum: Reflections on Tools and Metrics

  • Leonardo Candela
  • Dario Mangione
  • Gina Pavone

Several tools for assessing FAIRness have been developed. Although their purpose is common, they use different assessment techniques, they are designed to work with diverse research products, and they are applied in specific scientific disciplines. It is thus inevitable that they perform the assessment using different metrics. This paper provides an overview of the current FAIR assessment tool and metric landscape to highlight the challenges characterising this task. In particular, 20 relevant FAIR assessment tools and 1180 relevant metrics were identified and analysed concerning (i) the tools’ distinguishing aspects and their trends, (ii) the gaps between the metric intents and the FAIR principles, (iii) the discrepancies between the declared intent of the metrics and the actual aspects assessed, including the most recurring issues, and (iv) the technologies used or mentioned the most in the assessment metrics. The findings highlight (a) the distinguishing characteristics of the tools and the emergence of trends over time concerning those characteristics, (b) the identification of gaps at both metric and tool levels, (c) discrepancies observed in 345 metrics between their declared intent and the actual aspects assessed, pointing at several recurring issues, and (d) the variety in the technology used for the assessments, the majority of which can be ascribed to linked data solutions. This work also highlights some open issues that FAIR assessment still needs to address.

  • FAIR assessment tools
  • FAIR assessment metrics

1 Introduction

Wilkinson et al. formulated the FAIR guiding principles to support data producers and publishers in dealing with four fundamental challenges in scientific data management and formal scholarly digital publishing, namely Findability, Accessibility, Interoperability, and Reusability ( Wilkinson et al. 2016 ). The principles were minimally defined to keep, as low as possible, the barrier-to-entry for data producers, publishers, and stewards who wish to make their data holdings FAIR. Moreover, the intent was to formulate principles that apply not only to ‘data’ in the conventional sense but also to the algorithms, tools, and workflows that led to that data. All scholarly digital research objects were expected to benefit from applying these principles since all components of the research process must be available to ensure transparency, reusability and, whenever possible, reproducibility. Later, homologous principles were formulated to deal with specific typologies of research products ( Goble et al. 2020 ; Katz, Gruenpeter & Honeyman 2021 ; Lamprecht et al. 2020 ).

Such principles were well received by several communities and are nowadays in the research agenda of almost any community dealing with research data despite the absence of concrete implementation details ( Jacobsen et al. 2020 ; Mons et al. 2017 ). This situation is producing a proliferation of approaches and initiatives related to their interpretation and concrete implementation ( Mangione, Candela & Castelli 2022 ; Thompson et al. 2020 ). It also requires evaluating the level of FAIRness achieved, which results in a multitude of maturity indicators, metrics, and assessment frameworks, e.g. ( Bahim, Dekkers & Wyns 2019 ; De Miranda Azevedo & Dumontier 2020 ; Krans et al. 2022 ).

Having a clear and up-to-date understanding of FAIR assessment practices and approaches helps in perceiving the differences that characterise them, properly interpreting their results, and eventually envisaging new solutions to overcome the limitations affecting the current landscape. This paper analyses a comprehensive set of FAIR assessment tools and the metrics these tools use for the assessment to highlight the challenges characterising this valuable task. In particular, the goals of this study are: (i) to highlight the characteristics and trends of the currently existing tools, and (ii) to identify the relationships between the FAIR principles and the approaches exploited to assess them in practice, so as to discuss whether the resulting assessment is effective or whether there are gaps to deal with. A comprehensive ensemble of tools and metrics is needed to respond to these questions. This ensemble was developed by carefully analysing the literature, the information available on the web, and the actual implementation of tools and metrics. The resulting data set is openly available (see Data Accessibility Statements).

The rest of the paper is organised as follows. Section 2 discusses the related works, namely the surveys and analysis of FAIR assessment tools performed before this study. Section 3 presents the research questions this study focuses on, and the methodology used to respond to them. Section 4 describes the results of the study. Section 5 critically discusses the results by analysing them and providing insights. Finally, Section 6 concludes the paper by summarising the study’s findings. An appendix mainly containing the tabular representation of the data underlying the findings complements the paper.

2 Related Work

Several comparative studies and surveys on the existing FAIR assessment tools can be found in the literature.

Bahim et al. ( Bahim, Dekkers & Wyns 2019 ) conducted a landscape analysis to define FAIR indicators by assessing the approaches and the metrics developed until 2019. They produced a list of twelve tools (the ANDS-NECTAR-RDS-FAIR data assessment tool, the DANS-Fairdat, the DANS-Fair enough?, the CSIRO 5-star Data Rating tool, the FAIR Metrics Questionnaire, the Stewardship Maturity Mix, the FAIR Evaluator, the Data Stewardship Wizard, the Checklist for Evaluation of Dataset Fitness for Use, the RDA-SHARC Evaluation, the WMO-Wide Stewardship Maturity Matrix for Climate Data, and the Data Use and Services Maturity Matrix). They also produced a comparison of the 148 different metrics characterising the selected tools, ultimately presenting a classification of the metrics by FAIR principle and specifically by five dimensions: ‘Findable’, ‘Accessible’, ‘Interoperable’, ‘Reusable’, and ‘Beyond FAIR’.

Peters-von Gehlen et al. ( 2022 ) widened the FAIR assessment tool list originating from Bahim, Dekkers and Wyns ( 2019 ). Adopting a research data repository’s perspective, they examined the different evaluation results obtained by employing five FAIR evaluation tools to assess the same set of discipline-specific data resources. Their study showed that the evaluation results produced by the selected tools reliably reflected the curation status of the data resources assessed and that the scores, although consistent at the overall FAIRness level, were more likely to be similar among tools sharing the same manual or automated methodology. They also concluded that, even though manual approaches proved better suited for capturing contextual information, no existing FAIR evaluation tool meets the need of assessing discipline-specific FAIRness, and that hybrid approaches would be a promising solution.

Krans et al. ( 2022 ) classified and described ten assessment tools (selected through online searches in June 2020) to highlight the gaps between FAIR data practices and those currently characterising the field of human risk assessment of microplastics and nanomaterials. The ten tools discussed were: FAIRdat, FAIRenough? (no longer available), ARDC FAIR self-assessment, FAIRshake, SATIFYD, FAIR maturity indicators for nanosafety, FAIR evaluator software, RDA-SHARC Simple Grids, GARDIAN (no longer available), and Data Stewardship Wizard. These tools were classified by type, namely ‘online survey’, ‘(semi-)automated’, ‘offline survey’, and ‘other’, and evaluated using two sets of criteria: developer-centred and user-centred. The former characterised the tools in binary terms based on their extensibility and degree of maturity; the latter distinguished nine user-friendliness dimensions (‘expertise’, ‘guidance’, ‘ease of use’, ‘type of input’, ‘applicability’, ‘time investment’, ‘type of output’, ‘detail’, and ‘improvement’) grouped into three sets (‘prerequisites’, ‘use’, and ‘output’). Their study showed that the instruments based on human judgement could not guarantee consistent results even when used by domain experts, whereas the (semi-)automated ones were more objective. Overall, they registered a lack of consensus on the score systems and on how FAIRness should be measured.

Sun et al. ( 2022 ) focused on comparing three automated FAIR evaluation tools (F-UJI, FAIR Evaluator, and FAIR checker) based on three dimensions: ‘usability’, ‘evaluation metrics’, and ‘metric test results’. They highlighted three significant differences among the tools, which heavily influenced the results: the different understanding of data and metadata identifiers, the different extent of information extraction, and the differences in the metrics implementation.

In this paper, we have extended the previous analyses by including more tools and, above all, by including a concrete study of the metrics that these tools use. The original contribution of the paper consists of a precise analysis of the metrics used for the assessment and what issues arise in FAIRness assessment processes. The aim is to examine the various implementation choices and the challenges that emerge in the FAIR assessment process related to them. These implementation choices are in fact necessary for transitioning from the principle level to the factual check level. Such checks are rule-based and depend on the selection of parameters and methods for verification. Our analysis shows the issues associated with the implementation choices that define the current FAIR assessment process.

3 Methodology

We defined the following research questions to drive the study:

  • RQ1. What are the aspects characterising existing tools? What are the trends characterising these aspects?
  • RQ2. Are there any gaps between the FAIR principles coverage and the metrics’ overall coverage emerging from the declared intents?
  • RQ3. Are there discrepancies between the declared intent of the metrics and the actual aspects assessed? What are the most recurring issues?
  • RQ4. Which approaches and technologies are the most cited and used by the metrics implementations for each principle?

To reply to these questions, we identified a suitable ensemble of existing tools and metrics. The starting point was the list provided by FAIRassist. To achieve an up-to-date ensemble of tools, we enriched the list by referring to Mangione et al. ( 2022 ), by snowballing, and, lastly, by web searching. From the overall resulting list of tools, we removed those no longer running and those not intended for the assessment of the FAIR principles in the strict sense. In particular: the GARDIAN FAIR Metrics, the 5 Star Data Rating Tool, and the FAIR enough? were removed because they are no longer running; the FAIR-Aware, the Data Stewardship Wizard, the Do I-PASS for FAIR, the TRIPLE Training Toolkit, and the CLARIN Metadata Curation Dashboard were removed because they were considered out of scope. Table 1 reports the resulting list of the 20 tools identified and surveyed.

List of FAIR assessment tools analysed.

We used several sources to collect the list of existing metrics. In particular, we carefully analysed the specific websites, papers, and any additional documentation characterising the selected tools, including the source code and information deriving from the use of the tools themselves. For the tools that enable users to define their specific metrics, we considered all documented metrics, except those created by users for testing purposes or written in a language other than English. In the case of metrics structured as questions with multiple answers (not just binary), each answer was considered a different metric as the different checks cannot be put into a single formulation. This approach was necessary for capturing the different degrees of FAIRness that the tool creators conceived. The selection process resulted in a data set of 1180 metrics.
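The way a multiple-answer question expands into several metrics can be sketched as follows (the question, answers and string format are hypothetical, not taken from any of the tools):

```python
def expand_question(question, answers):
    """Turn a question with several (non-binary) answers into one
    metric per answer, since each answer implies a different check."""
    return [f"{question} -> {answer}" for answer in answers]

# Hypothetical survey-style question with three possible answers
metrics = expand_question(
    "Which usage licence did you choose?",
    ["Open access (CC0)", "Restricted access", "Embargoed access"],
)
# Three answers yield three distinct metrics.
```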

Some tools associate each metric with specific principles. To observe the distribution of the metrics and the gaps concerning the FAIR principles, we considered the FAIR principle (or the letter of the FAIR acronym) that each metric was designed to assess, as declared in the papers describing the tools, in the source code, and in the results of the assessments performed by the tools themselves.

To analyse the metrics for the identification of discrepancies between the declared intent of the metrics and the actual aspects assessed, we adopted a classification approach based on a close reading of the FAIR principles, assigning one or more principles to each metric. This approach was preferred to one envisaging the development of our list of checks, as any such list would merely constitute an additional FAIR interpretation. This process was applied to both the tools that already had principles associated with the metrics and those that did not. We classified each metric under the FAIR principle we deemed the closest, depending on the metric formulation or implementation. We relied on the metrics implementation source code, when available, to better understand the checks performed. The classification is provided in the accompanying data set and it is summarised in Figure 3 .

The analysis of the approaches and technologies used by the metrics is based on the metric formulations, their source code, and the results of the assessments performed by the tools. With regard to the approaches, we classified the approach of each metric linked to a specific FAIR principle, as declared by the metric authors, following a bottom-up process: we grouped the metrics by specific FAIR principle and then created a taxonomy of approaches based on the ones observed in each group (App. A.5). For the technologies, we annotated each metric with the technologies mentioned in the metric formulation, in the results of the assessments performed by the tools, and as observed through a source code review.

4 Results

This section reports the study findings concerning the tool-related and metric-related research questions. Findings are discussed and analysed in Section 5.

4.1 Assessment tools

Table 1 enumerates the 20 FAIR Assessment tools analysed by reporting their name, URL, and the year the tool was initially proposed.

The tools were analysed through the following characteristics: (i) the target , i.e. the digital object the tool focuses on (e.g., dataset, software); (ii) the methodology , i.e. whether the assessment process is manual or automatic; (iii) the adaptability , i.e. whether the assessment process is fixed or can be adapted (specific methods and metrics can be added); (iv) the discipline-specificity , i.e. whether the assessment method is tailored for a specific discipline (or conceived to be) or discipline-agnostic; (v) the community-specificity , i.e. whether the assessment method is tailored for a specific community (or conceived to be) or community-agnostic; (vi) the provisioning , i.e. whether the tool is made available as-a-service or on-premises.
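These six characteristics can be captured in a simple record type (a sketch with assumed field names and illustrative values, not the authors' data model):

```python
from dataclasses import dataclass

@dataclass
class AssessmentTool:
    name: str
    target: str             # digital object assessed, e.g. "dataset"
    methodology: str        # "manual", "automatic", or "hybrid"
    adaptable: bool         # can user-defined metrics be added?
    discipline_specific: bool
    community_specific: bool
    provisioning: str       # "as-a-service" or "on-premises"

# Illustrative entry only; see Table 2 for the surveyed values.
example = AssessmentTool("F-UJI", "dataset", "automatic",
                         False, False, False, "as-a-service")
```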

Table 2 shows the differentiation of the analysed tools based on the identified distinguishing characteristics.

Differentiation of the analysed tools based on the identified distinguishing characteristics. The term ‘enabled’ signifies that the configuration allows the addition of new metrics, so that individuals can include metrics relevant to their discipline or community. The ‘any dig. obj.*’ value means that a large number of typologies is supported, yet the tool specialises in some of them rather than actually supporting ‘any’.

By observing the emergence of the identified characteristics over time, from 2017 to 2023, it is possible to highlight trends in the development of tools created for FAIR assessment purposes. Figure 1 depicts these trends.

FAIR assessment tools trends.

Target. We observed an increasing variety of digital objects (Figure 1, Target), reflecting the growing awareness of the specificities of different research products stemming from the debates that followed the publication of the FAIR guiding principles. However, 45% of the tools deal with datasets. We assigned the label any dig. obj. (any digital object) to the tools that allow the creation of user-defined metrics, but also to those whose checks are generic enough to be applied regardless of the digital object type, e.g. evaluations based on the existence of a persistent identifier such as a DOI and on the use of a generic metadata schema, such as Dublin Core, for describing a digital object. The asterisk that follows the label ‘any dig. obj.’ in Table 2 indicates that, although many types of objects are supported, the tools specifically assess some of them. In particular: (a) AUT deals with datasets, tools, and a combination of them in workflows, and (b) FRO is intended for assessing a specific format of digital objects, namely RO-Crate ( Soiland-Reyes et al. 2022 ), which can package any type of digital object.
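A generic check of the kind just described, persistent-identifier syntax plus the presence of core metadata fields, might look like the following sketch (the DOI pattern is simplified and the minimal Dublin Core field set is an assumption, not any tool's actual metric):

```python
import re

# Simplified DOI syntax check (prefix "10.", registrant code, suffix).
DOI_RE = re.compile(r"^10\.\d{4,9}/\S+$")

# Assumed minimal set of Dublin Core elements for this sketch.
DUBLIN_CORE_MINIMUM = {"title", "creator", "date", "identifier"}

def has_persistent_identifier(identifier: str) -> bool:
    return bool(DOI_RE.match(identifier))

def uses_generic_schema(metadata: dict) -> bool:
    return DUBLIN_CORE_MINIMUM <= metadata.keys()

record = {
    "identifier": "10.1234/example-record",  # hypothetical DOI
    "title": "Example dataset",
    "creator": "A. Researcher",
    "date": "2023-01-01",
}
ok = has_persistent_identifier(record["identifier"]) and uses_generic_schema(record)
```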

Methodology. The tools implement three modes of operation: (i) manual, i.e. the assessment is performed manually by the user; (ii) automatic, i.e. it does not require user judgement; (iii) hybrid, i.e. a combination of manual and automated approaches. Manual and hybrid approaches were the first implemented but, over time, automatic approaches were preferred due to the high subjectivity characterising the first two methodologies (Figure 1, Assessment methodology). Fifty-five per cent of the tools implement automatic assessments. Notable exceptions are MAT (2020 – hybrid) and FES (2023 – manual), which assess the FAIRness of a repository and require metrics covering organisational aspects, which are not easily measured and whose automation still poses difficulties.

Adaptability. We distinguished between (i) non-adaptable tools (whose metrics are predefined and cannot be extended) and (ii) adaptable ones (to which user-defined metrics can be added). Only three tools of the ensemble are adaptable, namely FSH, EVL, and ENO. EVA was considered a ‘fixed’ tool, although it supports the implementation of plug-ins that specialise the actual checks performed by a given metric. Despite their limitations, the preference for non-adaptable tools persists over time (Figure 1, Assessment method).

Discipline-specific. A further feature is whether a tool is conceived to assess the FAIRness of discipline-specific research outputs or is discipline-agnostic. We classified three tools as discipline-specific: AUT, CHE, and MAT. While the adaptable tools (FSH, EVL, and ENO) may not include discipline-specific metrics at the moment, they enable this possibility, as does EVA, since it allows custom configurations of the existing assessments to be defined. The observed trend is a preference for discipline-agnostic tools (Figure 1, Discipline-specific nature).

Community-specific. Some tools include checks related to community-specific standards (e.g. the OpenAIRE Guidelines) or allow community-relevant evaluations to be defined. As in the discipline-specific case, the adaptable tools (FSH, EVL, and ENO) also enable community-specific evaluations, as does EVA. Figure 1 (Community-specific nature) shows that, in general, community-agnostic solutions were preferred.

Provisioning. The tools are offered following the as-a-service model or as an on-premises application (we included in the latter category the self-assessment questionnaires in PDF format). While on-premises solutions are still being developed (e.g. Python notebooks and libraries), the observed trend is a preference for the as-a-service model (Figure 1, Provisioning).

4.2 Assessment metrics

Existing assessment metrics are analysed to (i) identify gaps between the FAIR principles’ coverage and the metrics’ overall coverage emerging from the declared intents (cf. Section 4.2.1), (ii) highlight discrepancies among metrics intent and observed behaviour concerning FAIR principles and distil the issues leading to the mismatch (cf. Section 4.2.2), and (iii) determine frequent approaches and technologies considered in metrics implementations (cf. Section 4.2.3).

4.2.1 Assessment metrics: gaps with respect to FAIR principles

To identify possible gaps in the FAIR assessment process we observed the distributions of the selected metrics grouped according to the FAIR principle they were designed to assess. Such information was taken from different sources, including the papers describing the tools, other available documentation, the source code, and the use of the tools themselves.

Figure 2 reports the distribution of metrics with respect to the declared target principle, if any, for each tool. Appendix A.1 reports a table with the detailed data. In the left diagram, a metric falls in the F, A, I, or R series when it refers only to Findable, Accessible, Interoperable, or Reusable and not to a numbered/specific principle. The ‘n/a’ series counts the metrics that do not declare a reference to a specific principle or even to a letter of the FAIR acronym. In the right diagram, the metrics are aggregated by class of principles, e.g. the F-related metrics include all the ones that in the left diagram are either F, F1, F2, F3 or F4.

FAIR assessment tools’ declared metric intent distribution. In the left diagram, F, A, I, and R series refer to metrics with declared intent Findable, Accessible, Interoperable, and Reusable rather than a numbered/specific principle. The ‘n/a’ series is for metrics that do not declare an intent referring to a specific principle or even to a letter of the FAIR acronym. In the right diagram, the metrics are aggregated by class of principles, e.g. the F-related metrics include all the ones that in the left diagram are either F, F1, F2, F3 or F4.

FAIR assessment tools’ observed metric goal distribution. In the left diagram, metrics are associated either with a specific principle, with ‘many’ principles or with ‘none’. In the right diagram, the metrics associated with a specific principle are aggregated by class of principles, e.g. the F-related metrics include all the ones that in the left diagram are either F1, F2, F3 or F4.

Only 12 tools (CHE, ENO, EVA, EVL, FOO, FRO, FSH, FUJ, MAT, OFA, OPE, RDA) out of 20 identify a specific principle linked to the metrics. The rest either refer only to Findable, Accessible, Interoperable, or Reusable to annotate their metrics (namely, AUT, DAT, FDB, FES, SAG, SAT, SET) or do not refer to specific principles or letters of the acronym at all (namely, HFI). Even among those tools that make explicit connections, some metrics remain detached from any particular FAIR principle or acronym letter, as indicated with ‘n/a’ in Figure 2 and Table A.1.

The figures also document that assessment metrics exist for each FAIR principle, but not every principle is equally covered, and not all the tools implement metrics for all the principles.

When focusing on the 12 tools explicitly referring to principles in metrics declared intents and considering the total amount of metrics exploited by a given tool to perform the assessment, it is easy to observe that some tools use a larger set of metrics than others. For instance, FSH uses 339 distinct metrics, while MAT uses only 13 distinct metrics. The tools having a lower number of metrics tend to overlook some principles.

The distribution of metrics with respect to their target highlights that, for each principle, some kind of check has been conceived, though in different numbers: they range from the A1.2 minimum (covered by 16 metrics) to the F1 maximum (covered by 76 metrics). It is also worth noting that, for each group of principles linked to a letter of the FAIR acronym, the largest number of metrics is concentrated on the first of them. This is particularly evident for the A group, with 71 metrics focusing on A1 and around 20 for each of the others.

Four principles are somehow considered by all the tools, namely F1, F2, I1, and R1.1. While for F1, F2, and I1, the tools use many metrics for their assessment, for R1.1, few metrics were exploited.

Four principles receive relatively low emphasis, namely A1.2, F3, A1.1, and A2, with fewer metrics dedicated to their assessment. At the tool level, A1.2, A2, R1.2, and R1.3 remain unexplored by several tools: A1.2 is not assessed at all by four tools out of 12; A2 and R1.2 are each not assessed by three tools out of 12; and R1.3 is not assessed by two tools out of 12.

4.2.2 Assessment metrics: observed behaviours and FAIR principles discrepancies

In addition to the metrics not linked to a specific FAIR principle, we noticed that the implementation of some metrics was misaligned with the principle they were declared to target. The term implementation is used here in a broad sense, covering metrics from both manual and automatic tools. By analysing the implementation of the metrics, we assigned to each one the FAIR principle, or set of principles, closest to it.

We identified three discrepancy cases: (i) from one FAIR principle to another, (ii) from a letter of the FAIR acronym to a FAIR principle under a different letter (e.g. from A to R1.1), and (iii) from any declared or undeclared FAIR principle to a formulation that we consider beyond FAIRness (‘none’ in Figure 3 ).

An example of a metric with a discrepancy from one FAIR principle to another is ‘Data access information is machine readable’, declared for the assessment of A1 but rather attributable to I1. Likewise, the metric ‘Metadata is given in a way major search engines can ingest it for their catalogues (JSON-LD, Dublin Core, RDFa)’, declared for F4, can rather be linked to I1, as it takes a serialisation point of view.

The metric ‘Which of the usage licenses provided by EASY did you choose in order to comply with the access rights attached to the data? Open access (CC0)’ with only the letter ‘A’ declared is instead a case in which the assessment concerns a different principle (i.e. R1.1).

Regarding the discrepancies from any declared or undeclared FAIR principle to a formulation that we consider beyond FAIRness, examples are the metrics ‘Tutorials for the tool are available on the tools homepage’ and ‘The tool’s compatibility information is provided’.

In addition to the three identified types of discrepancies, we also encountered metrics that had not initially been assigned a FAIR principle or a corresponding letter; where possible, we mapped these metrics to one of the FAIR principles. An example is the metric ‘Available in a standard machine-readable format’, attributable to I1. Such cases indicate how wide the implementation spectrum of FAIRness assessment can be, to the point of straying far from the formulation of the principles themselves. The metrics that we have called ‘beyond FAIRness’ do not necessarily betray the objective of the principles, but they certainly call for technologies or solutions that cannot strictly be considered related to the FAIR principles.
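The machine-readability checks recurring in these examples, such as metadata embedded as JSON-LD that search engines can ingest, could be sketched as follows (a simplified illustration using a regular expression; actual tools rely on proper HTML parsers and also consider Dublin Core and RDFa markup):

```python
import json
import re

def extract_json_ld(html: str):
    """Return the parsed JSON-LD blocks embedded in an HTML page."""
    pattern = re.compile(
        r'<script type="application/ld\+json">(.*?)</script>', re.S)
    return [json.loads(block) for block in pattern.findall(html)]

# Hypothetical landing page embedding schema.org Dataset metadata.
page = """<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Dataset",
 "name": "Example dataset"}
</script>
</head></html>"""

blocks = extract_json_ld(page)
machine_readable = any(b.get("@type") == "Dataset" for b in blocks)
```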

Figure 3 shows the distribution of all the metrics in our sample resulting from the analysis and assignment to FAIR principles activity. Appendix A.2 reports the table with the detailed data.

This figure confirms that (a) all the principles are somehow assessed, (b) few tools assess all the principles (namely, EVA, FSH, OFA, and RDA), (c) a significant number of metrics (136 out of 1180) refer to more than one principle at the same time (the ‘many’), and (d) a significant number of metrics (170 out of 1180) are far from the FAIR principles altogether (the ‘none’).

Figure 4 depicts the distribution of declared ( Figure 2 , detailed data in Appendix A.1) and observed ( Figure 3 , detailed data in Appendix A.2) metric intents with respect to FAIR principles. Apart from the absence of metrics referring only to one of the overall areas of FAIR, the distribution of the metrics’ observed intents highlights the great number of metrics that refer either to many FAIR principles or to none. Concerning the principles, the graph shows a significant growth in the number of metrics assessing F1 (from 76 to 114), F4 (from 39 to 62), A1.1 (from 26 to 56), I1 (from 56 to 113), R1 (from 56 to 79), R1.1 (from 44 to 73), and R1.2 (from 52 to 74). All in all, for 835 metrics out of the 1180 analysed, the declared metric intent and the observed metric intent correspond (i.e. either (i) the referred principle corresponds, or (ii) the declared intent is F, A, I, or R and the observed intent is a specific principle of the same class). The cases of misalignment are discussed in the remainder of the section.

Comparison of the metrics distributions with regard to their declared and observed intent.

While the declared metric intent is always linked to only one principle – or even to just one letter of the FAIR acronym – we noted that 136 metrics can be related to more than one FAIR principle at once. These correspond to the ‘many’ series in Figure 4 , which counts the number of times we associated more than one FAIR principle with a metric of a tool (see also Table A.2, column ‘many’).

Figure 5 shows the distribution of these co-occurrences among the FAIR principles we observed (see also Table A.3 in Section A.3).

Co-occurrences among FAIR principles

Co-occurrences among the FAIR principles observed for the metrics, in numbers and percentages.

Such co-occurrences involve all FAIR principles. In some cases, assessment metrics on a specific principle are also considered to be about many diverse principles, notably: (i) metrics dealing with I1 also deal with either F1, F2, A1, I2, I3, or a Reusability-related principle; (ii) metrics dealing with R1.3 also deal with either F2, A1, A1.1, an Interoperability principle, or R1.2. The number of different principles we found co-occurring with I1 hints at the importance given to the machine-readability of metadata, a recurrent parameter in the assessments, particularly the automated ones, to the point that it can be considered an implementation prerequisite beyond the letter of the FAIR guidelines. The fact that R1.3 is the second principle by number of co-occurrences with other principles is an indicator of the role of communities in shaping actual practices and workflows.

In some cases, there is a significant number of co-occurrences between two specific principles: for example, we observed that many metrics deal with both F2 and R1 (36) or with both I1 and R1.3 (35). The co-occurrences between F2 and R1 are strictly connected to the formulation of the two principles and symptomatic of the lack of a clear demarcation between them. The case of metrics covering both I1 and R1.3 is ascribable to the overlap between the ubiquitous machine-readability requirement and the actual implementation of machine-readable solutions by communities of practice.
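Counting such pairwise co-occurrences from metric-to-principles assignments can be sketched as follows; the metric names and principle assignments below are invented for illustration and do not reproduce the actual sample:

```python
from collections import Counter
from itertools import combinations

# Observed FAIR principles per metric (illustrative sample, not real data).
observed = {
    "metric-01": {"F2", "R1"},
    "metric-02": {"I1", "R1.3"},
    "metric-03": {"F2", "R1", "I1"},
}

# Count every unordered pair of principles appearing in the same metric.
pair_counts = Counter(
    pair
    for principles in observed.values()
    for pair in combinations(sorted(principles), 2)
)

print(pair_counts[("F2", "R1")])  # → 2
```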

We also observed metrics that we could not link to any FAIR principle (Figure 3, ‘none’ series) because of the parameters used in the assessment. Examples of metrics we considered not to match any FAIR principle include (a) those focusing on the openness of the object, since ‘FAIR is not equal to open’ (Mons et al. 2017), (b) those focusing on the downloadability of the object, (c) those focusing on the long-term availability of the object, since A2 requires only that the metadata of the object be preserved, (d) those relying on the concept of data or metadata validity, e.g. a metric verifying that the contact information given is valid, (e) those focusing on trustworthiness (for repositories), and (f) those focusing on the multilingualism of the digital object.

To identify the discrepancies between declared intents and observed behaviour, we considered a metric misaligned when the FAIR principle it declares differs from the one we observed. In addition, all the metrics counted as ‘none’ in Figure 3 are discrepancies, since they include concepts beyond FAIRness assessment. For metrics referring only to a letter of the FAIR acronym, we based the misalignment on discordance with the letter of the observed principle. Concerning the metrics linked to more than one FAIR principle, we considered as discrepancies only the cases where the declared principle or letter matches none of the observed possibilities. Figure 6 documents these discrepancies (detailed data are in Appendix A.4).
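The classification rule just described can be sketched in a few lines; the function and the encoding of principles as strings are illustrative and not taken from any of the analysed tools:

```python
FAIR_LETTERS = {"F", "A", "I", "R"}

def is_discrepancy(declared: str, observed: set) -> bool:
    """Illustrative encoding of the misalignment rule described above."""
    # Metrics we could link to no principle ('none') are always discrepancies.
    if not observed:
        return True
    # Letter-only declared intents are compared by FAIR class letter.
    if declared in FAIR_LETTERS:
        return not any(p.startswith(declared) for p in observed)
    # Otherwise the declared principle must match one of the observed ones.
    return declared not in observed

assert is_discrepancy("F1", set())               # counted as 'none'
assert not is_discrepancy("F", {"F4"})           # letter matches the class
assert not is_discrepancy("R1", {"R1", "R1.3"})  # one observed principle matches
assert is_discrepancy("A1", {"I1"})              # declared vs observed differ
```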

Discrepancies between declared and observed metric intents

Discrepancies between the declared and observed FAIR principles of the metrics, in numbers and percentages.

When looking at the observed intent, including the metrics in the ‘many’ column, all FAIR principles appear among the targets of the mismatches, except for A1.2 and A2 (in fact, there are no columns for them in Figure 6). Moreover, misaligned metrics for Findability and Accessibility are always relocated to other letters of the acronym, implying a higher tendency to confusion in the assessment of the F and A principles.

While it is possible to observe misaligned metric implementations that we linked to more than one principle, no such cases involve accessibility-oriented declared metrics. For metrics pertaining to the other FAIR areas, there are a few cases, mainly involving findability and metrics with no declared intent. We found no unlinkable metrics among those declared for the F4, A1.2, and I2 principles, indicating that the checks on indexing (F4), authentication and authorisation (A1.2), and the use of vocabularies (I2) tend to be less ambiguous.

Concerning metrics with findability-oriented declared intents, we did not observe misalignments with any findability principle. Still, we found misalignments with accessibility, interoperability, and reusability principles, including metrics that can be linked with more than one principle and metrics that we could not associate with any principle. Accessibility-related misalignments concern A1 (9), with references to the use of standard protocols to access metadata and to the resolvability of an identifier, and A1.1 (12), because of references to free accessibility to a digital object. Interoperability-related misalignments concern I1 (23) and are linked to references to machine-readability (e.g. the presence of machine-readable metadata, such as the JSON-LD format, or structured metadata in general) and semantic resources (e.g. the use of controlled vocabularies or knowledge representation languages like RDF). Reusability-related misalignments concern R1 (4), because of references to metadata that cannot easily be linked to a findability aim (e.g. the size of a digital object), R1.2 (7), as we observed references to versioning and provenance information, and R1.3 (3), for references to community standards (e.g. community-accepted terminologies). The findability-oriented metrics we classified as ‘many’ (18) intertwine concepts pertaining to A1, A1.1, I1, I2, or R1.3. The metrics we could not link to any principle (19) include references to parameters such as free downloadability and the existence of a landing page.

Concerning metrics with accessibility-oriented declared intents, we did not observe misalignments with an accessibility principle. There is one misalignment with F2, regarding the existence of a title associated with a digital object, and a few with I1 (5), because of references to machine-readability (e.g. machine-readable access information) and semantic artefacts (e.g. controlled vocabularies for access terms). The majority of misalignments are observed with reusability: metrics involving R1 (9), with references to metadata elements related to access conditions (e.g. dc:rights) and to the current status of a digital object (e.g. owl:deprecated), R1.1 (2), because of mentions of the presence of a licence (e.g. a Creative Commons licence), and R1.2 (2), since there are references to versioning information (e.g. whether metadata on versioning is provided). There are also metrics (43) that we could not link to any principle, referring to parameters such as the availability of tutorials, the long-term preservation of digital objects, and free downloadability.

Concerning metrics with interoperability-oriented declared intents, mismatches concern F1 (11), with references to the use of identifiers (e.g. URIs), A1 (2), because of references to the resolvability of a metadata element identifier, I1 (5), for checks limited to the scope of I1 even if declared to assess I2 or I3 (e.g. metadata represented in an RDF serialisation), and I3 (4), because of checks aimed only at verifying that other semantic resources are used even if declared to assess I2. We also observed metrics declared to assess I2 (2) that are linked to multiple principles; they intertwine aspects pertaining to F2, I1, and R1.3. Except for those declared for I2, there are 20 interoperability-oriented metrics that we could not link to any principle (e.g. citing the availability of source code in the case of software).

Concerning metrics with reusability-oriented declared intents, mismatches regard F4 (1), because of a reference to software hosted in a repository, I1 (6), with references to machine-readability, specific semantic artefacts, or lists of formats, and I3 (1), as there is a reference to ontology elements defined through a property restriction or an equivalent class; however, they mainly involve reusability principles. Looking at reusability-to-reusability mismatches: (i) for R1-declared metrics, we observed mismatches with R1.1 (2) concerning licences, R1.2 (2) because of references to provenance information, and R1.3 (3) since there are references to community-specific or domain-specific semantic artefacts (e.g. the Human Phenotype Ontology); (ii) for R1.1-declared metrics, there are mismatches concerning R1 (3), since there are references to access rights metadata elements (e.g. cc:morePermissions); (iii) for R1.2-declared metrics, we observed mismatches concerning R1 (1) and R1.1 (1), because of references to contact and licence information respectively; (iv) for R1.3-declared metrics, mismatches concern R1.1 (2), since there are references to licences. Only in the case of one R1.2-declared metric did we observe a link with more than one FAIR principle, F2 and R1, because of references to citation information. The reusability-declared metrics we could not link to any principle (40) concern references such as the availability of helpdesk support or the existence of a rationale among the documentation provided for a digital object.

Concerning the metrics whose intent was not declared (80), we observed that 40% (32) are linked to at least one principle, while the remaining 60% (48) are beyond FAIRness. In this set we found metrics concerning F4 (10), e.g. verifying whether a software source code is in a registry; I1 (1), a metric verifying the availability of a standard machine-readable format; R1 (2), e.g. a reference to terms of service; R1.1 (4), because of references to licences; and R1.2 (2), e.g. a metric verifying whether all the steps to reproduce the data are provided. Some metrics can be linked to more than one principle (13); these intertwine aspects pertaining to F2, F3, I1, I2, I3, R1, and R1.2. An example is a reference to citation information, which can be linked to both F2 and R1.

4.2.3 Assessment metrics: approaches and technologies

Having observed that assessment metrics have been proposed for each FAIR principle, it is important to understand how these metrics have been formulated in practice in terms of approaches and technologies with respect to the specific principles they target.

Analysing the metrics that explicitly have one of the FAIR principles as the target of their declared intent (cf. Section 4.2.1), it emerged that some (101 out of 677) are implemented simply by repeating the principle formulation or part of it. These metrics give no help or indication for the specific assessment task, which remains as generic and open to diverse interpretations as the principle formulation itself. The rest of the implementations are summarised in Appendix A.5, together with concrete examples, to offer an overview of the wealth of approaches proposed for implementing FAIR assessment rules. These approaches include identifier-centred ones (e.g. checking whether the identifier complies with a given format, belongs to a list of controlled values, or can be successfully resolved), metadata-element-centred ones (e.g. verifying the presence of a specific metadata element), metadata-value-centred ones (e.g. verifying whether a specific value or string is used for compiling a given metadata element), and service-based ones (e.g. checking whether an object can be found by a search engine or a registry). All approaches involve more than one FAIR area, except for: (a) policy-centred approaches (i.e. looking for the existence of a policy regarding identifier persistence), used only for F1; (b) documentation-centred approaches (i.e. a URL to a document describing the required assessment feature), used only for A1.1, A1.2, and A2 verifications; (c) service-centred approaches (i.e. the presence of a given feature in a registry or a repository), used only for F4; and (d) metadata-schema-centred approaches (i.e. verifying that a schema, rather than an element of it, is used), used for R1.3.
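Two of these approach families can be sketched as follows; the simplified DOI pattern and the dc:title/dc:creator elements are illustrative choices, not drawn from any specific tool analysed:

```python
import re

# Identifier-centred check: does the identifier comply with a given format?
# The DOI regex below is deliberately simplified and illustrative.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def identifier_matches_format(identifier: str) -> bool:
    return bool(DOI_PATTERN.match(identifier))

# Metadata-element-centred check: is a given element present and non-empty?
def element_present(metadata: dict, element: str) -> bool:
    value = metadata.get(element)
    return value is not None and str(value).strip() != ""

record = {"dc:title": "A sample dataset", "dc:creator": ""}
assert identifier_matches_format("10.1234/abcd.efgh")
assert element_present(record, "dc:title")
assert not element_present(record, "dc:creator")  # present but empty
```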

Approaches based on the label of the metadata element used to describe an object, and those based on an identifier (assigned to the object or identifying a metadata element), are the most prevalent. The former is used for assessing 14 out of 15 principles (with the exception of A2), while the latter is applied in the assessment of 13 out of 15 principles (excluding F4 and A2).

By analysing the metrics and, when possible, their implementations, we identified 535 metrics mentioning or using technologies for the specific assessment purpose, with four of them referring only to the generic use of linked data. Of the 535 metrics, 174 are declared to assess findability, 92 accessibility, 120 interoperability, 147 reusability, and two are not explicitly linked with any FAIR principle or area. Overall, these metrics refer to 215 distinct technologies (the term ‘technology’ is used in its broadest sense, thus including very diverse typologies ranging from (meta)data formats to standards, semantic technologies, protocols, and services). These do not include a generic reference to the IANA media types mentioned by one metric, which alone number 2007. The selected technologies can be categorised as (i) application programming interfaces (referred to by 19 metrics), (ii) formats (91 metrics), (iii) identifiers (184 metrics), (iv) software libraries (22 metrics), (v) licences (two metrics), (vi) semantic artefacts (291 metrics), (vii) protocols (29 metrics), (viii) query languages (5 metrics), (ix) registries (28 metrics), (x) repositories (14 metrics), and (xi) search engines (5 metrics). When referring to the number of metrics per technology class, it should be noted that each metric can mention or use one or more technologies.

Figure 7 depicts how these technologies are exploited across the principles using the metric’s declared intent for classifying the technology.

Technology types per declared metric intent

Technology types per declared metric intent.

The most cited or used technologies in the metrics or their implementations are semantic artefacts and identifiers. In particular, Dublin Core is among the most mentioned, followed by standards related to knowledge representation languages (Web Ontology Language and Resource Description Framework) and ontologies (Ontology Metadata Vocabulary and Metadata for Ontology Description). The most cited identifier is the uniform resource locator (URL), followed by mentions of uniform resource identifiers (even if, technically, all URLs are URIs) and, among persistent identifiers, digital object identifiers (DOI).

Semantic artefacts are among the most cited for findability assessments (e.g. Dublin Core, Web Ontology Language, Metadata for Ontology Description, Ontology Metadata Vocabulary, Friend of a Friend, and Vann), followed by identifiers (URL, DOI, URI).

Identifiers are the most cited technologies for accessibility assessments (URL, URI, Handle, DOI, InChI key), followed by protocols (HTTP, OAI-PMH), semantic artefacts (Web Ontology Language, Dublin Core), and formats (XML).

The most mentioned technologies for interoperability assessments are semantic artefacts (Ontology Metadata Vocabulary, Dublin Core, Friend of a Friend, Web Ontology Language) and formats (JSON-LD, XML, RDF/XML, Turtle), followed by identifiers (URI, DOI, Handle).

For reusability assessments, besides Dublin Core, Metadata for Ontology Description (MOD), the DataCite metadata schema, and Open Graph, semantic artefacts specific to provenance (Provenance Ontology and Provenance, Authoring and Versioning) and licensing (Creative Commons Rights Expression Language) also figure. Identifiers (URLs) and formats (XML) are also among the most used technologies for reusability purposes.

Ultimately, HTTP-based and linked data technologies are the most used technologies in the metrics, whether considering all metrics at once or focusing on a single dimension of the FAIR principles.

5 Discussion

The current state of FAIR assessment practices is characterised by several issues, linked to the way the assessment is performed at both the tool and the metric level. In the remainder of this section, we critically discuss what emerged in Section 4 concerning assessment tools and assessment metrics.

5.1 Assessment tools

The variety of the tools and their characteristics discussed in Section 4.1 demonstrates the many flavours of solutions that can be envisaged for FAIR assessment. This variety is due to several factors, namely (a) the willingness to assess diverse objects (from any digital object to software), (b) the need to rely on automatic, manual, or hybrid approaches, and (c) the necessity to respond to specific settings, either through adaptability or by being natively designed as discipline-specific or community-specific. This denotes a certain discretion in the interpretation and application of the principles themselves, in addition to producing different results and scores for the same product (Krans et al. 2022). In other words, the aspirational formulation of the FAIR principles is hardly reconcilable with precise measurement.

The characteristics of the tools and their assessment approaches impact assessment tasks and results. Manual assessment tools rely on the assessor’s knowledge, so they typically do not need to be as specific as automated ones: when citing a specific technology expected to be exploited to implement a principle, they do not have to clarify how that technology is expected to be used, thus catering for diverse interpretations by different assessors. Manual assessment practices tend to be subjective, making it challenging to achieve unanimous consensus on results. Automatic assessment tools require that (meta)data be machine-readable and only apparently solve the subjectivity issue. While it is true that automatic assessments have to rely on a defined and granular process, which leaves no space for interpretation, every automated tool actually proposes its own implementation of FAIRness by defining that granular process itself, especially tools that do not allow the creation and integration of user-defined metrics. Consequently, the assessment process is objective, but the results are still subjective, biased by the specific interpretation of the FAIR principles implemented by the tool developer.
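The point can be illustrated with two hypothetical F1 checks that disagree on the same metadata record, one accepting any HTTP(S) identifier and one requiring a DOI; neither function reflects an actual tool analysed here:

```python
# Two hypothetical implementations of an F1 ('unique and persistent
# identifier') metric, illustrating how developer interpretation
# shapes the outcome of an automated assessment.

def f1_check_url(record: dict) -> bool:
    # Interpretation A: any HTTP(S) identifier passes.
    identifier = record.get("identifier", "")
    return identifier.startswith(("http://", "https://"))

def f1_check_doi(record: dict) -> bool:
    # Interpretation B: only a DOI counts as persistent.
    identifier = record.get("identifier", "")
    return identifier.startswith("https://doi.org/") or identifier.startswith("10.")

record = {"identifier": "https://example.org/dataset/42"}
print(f1_check_url(record), f1_check_doi(record))  # → True False
```

The same record passes one tool’s F1 metric and fails the other’s, even though both processes are fully deterministic.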

Although the trends observed for tool characteristics in Section 4.1 seem to suggest some tendencies (namely, in recent years more automatic tools than manual ones were developed, more non-adaptable tools than adaptable ones were released, and discipline-agnostic and community-agnostic tools were emerging over the others), it is almost impossible to establish whether tools with these characteristics actually serve the needs of communities better than others. The specific nature of FAIRness assessment is likely to promote the development of tools where community-specific FAIR implementation choices can be easily and immediately channelled into assessment pipelines, regardless of the tool design decisions regarding methodology, adaptability, etc.

5.2 Assessment metrics

The following three subsections retrace the analysis of assessment metrics discussed in the subsections of Section 4.2 and reflect on the findings. In particular, they elaborate on the gaps between declared metric intents and the FAIR principles, the discrepancies between declared intents and observed behaviours, and the set of technologies cited for assessing FAIRness, respectively.

5.2.1 Assessment approaches: gaps with respect to FAIR principles

The results reported in Section 4.2.1 highlighted the apparently comprehensive coverage of the proposed metrics with respect to the principles, the fuzziness of some metrics, and the variety of metric implementations for assessing the same principle.

Regarding the coverage, the fact that there exist metrics to assess every principle, while the number of metrics per principle and per tool varies, depends on the characteristics of the principles and the tools. It does not guarantee that all principles are equally assessed. Some principles are multifaceted by formulation, which might call for many metrics to assess them. This is the case of F1, which requires uniqueness and persistence of identifiers; the number of metrics dedicated to assessing it was the highest we found (Table A.3). However, F1 also has the multifaceted ‘(meta)data’ formulation, which occurs in many other principles without leading to a proliferation of assessment metrics. R1.1 is similar to F1, since it covers the (meta)data aspect as well as the accessibility and intelligibility of the licence, yet this has not caused a proliferation of metrics. In contrast with these two principles, which are explicitly assessed by all the tools declaring an association between metrics and principles (together with F2 and I1), there are multifaceted principles, like A1.2 and R1.2, that were not explicitly assessed by some tools, specifically by automatic tools that probably face issues in assessing them programmatically. This diversity of approaches for assessing the same principle further demonstrates the gaps between the principles and their many implementations, making any attempt to assess FAIRness in absolute terms almost impossible and meaningless.

Regarding the fuzziness, we observed metrics that either replicate or rephrase the principle itself, thus remaining as generic as the principles are. The effectiveness of these metrics is limited even in the case of manual assessment tools. In practice, with these metrics, the actual assessment check remains hidden either in the assessor’s understanding or in the tool implementation.

Regarding the variety of implementations, every implementation of a metric inevitably comes with implementation choices that impact the scope of cases passing the assessment check. In fact, it is not feasible to implement metrics capturing all the different real-world cases that can be considered suitable for a positive assessment of a given principle. Consequently, even if ‘FAIR is not equal to RDF, Linked Data, or the Semantic Web’ (Mons et al. 2017), linked data technologies are understandably among the main solutions adopted for implementing assessment metrics. However, the reuse of common implementations across tools is neither promoted nor facilitated; FAIR Implementation Profiles (FIP) (Schultes et al. 2020) and metadata templates (Musen et al. 2022) could facilitate this by identifying sets of community standards and requirements that various tools could then exploit. The availability of ‘implementation profiles’ could help to deal with the principles requiring ‘rich metadata’ (namely F2 and R1), whose dedicated metrics seem rather poor for both discoverability and reusability aspects.

5.2.2 Assessment metrics: observed behaviours and FAIR principles discrepancies

The results reported in Section 4.2.2 revealed 345 misaligned metrics (Figure 6, Table A.4). Overall, we found metrics that seemed highly discretionary and did not immediately adhere to the FAIR principles, injecting into assessment pipelines checks that go beyond FAIRness. Although these misalignments result from our reading of the FAIR principles, they reveal the following recurring issues, characterising metric implementations that realise surprising or unexpected interpretations of aspects of the FAIR principles.

Access rights. Checks verifying the existence of access rights or access condition metadata are used for assessing accessibility, in particular the A1 principle. This is problematic because (a) the accessibility principles focus on something different, e.g. the protocols used and the long-term availability of (meta)data, and (b) such checks overlook the equal treatment that A1 envisages for both data and metadata.

Long-term preservation. This is used to assess digital objects rather than just their metadata (as A2 requires). In particular, long-term-preservation-oriented metrics were proposed for assessing accessibility and reusability (R1.3), thus introducing an extensive interpretation of principles that require (domain-oriented and community-oriented) standardised ways of accessing the metadata.

Openness and free downloadability. These recur among the metrics and are also used contextually for assessing adherence to community standards (R1.3). When used alone, openness-related metrics are employed for assessing reusability, while free-download-related metrics are used for assessing findability and accessibility (in particular A1.1). Strictly speaking, it has already been clarified that none of the FAIR principles require data to be ‘open’ or ‘free’ (Mons et al. 2017). Nonetheless, there is a tendency to give a positive, or more positive, assessment when the object is open. While this is in line with the general intentions of the principles (increasing the reusability and reuse of data and other research products), it may be at odds with the need to protect certain types of data (e.g. sensitive data, commercial data, etc.).

Machine-readability. This metadata characteristic is found in metrics assessing findability (F2, F4), accessibility (A1), and reusability (R1.3). As the FAIR principles were conceived for lowering the barriers of data discovery and reuse for both humans and machines, machine-readability is at the very core of the requirements for the FAIRification of a research object. While it is understandably emphasised across the assessment metrics, the concept is frequently used as an additional assessment parameter in metrics assessing other principles rather than the ones defined for interoperability.

Resolvability of identifiers. This aspect characterises metrics assessing findability (specifically F1, F2, and F3) and interoperability (I2). While resolvability is widely associated with persistent and unique identifiers and is indeed a desirable characteristic, we argue that it is not inherently connected to the identifier itself; URNs are a typical example. In the context of the FAIR principles, resolvability should be considered an aspect of accessibility, specifically of A1, which concerns retrievability through an identifier and the use of a standardised communication protocol.
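The distinction can be sketched by classifying identifier schemes according to their resolution behaviour; the scheme table below is illustrative, and a real tool would need a far richer mapping:

```python
# Illustrative classification of identifier schemes by resolution behaviour.
# Per A1, retrievability depends on the protocol and resolution service,
# not on the identifier syntax itself.
DIRECT_SCHEMES = {"http", "https"}   # directly resolvable over HTTP
RESOLVER_SCHEMES = {"doi", "hdl"}    # resolvable through a resolver service

def resolution_class(identifier: str) -> str:
    scheme = identifier.split(":", 1)[0].lower()
    if scheme in DIRECT_SCHEMES:
        return "direct"
    if scheme in RESOLVER_SCHEMES:
        return "via-resolver"
    return "no-built-in-resolution"  # e.g. URNs

assert resolution_class("https://example.org/x") == "direct"
assert resolution_class("doi:10.1234/abcd") == "via-resolver"
assert resolution_class("urn:isbn:0451450523") == "no-built-in-resolution"
```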

Validity. Metadata or information validity is used for assessing findability, accessibility, interoperability (specifically I3), and reusability (in particular R1), i.e. FAIR aspects that call for ‘rich’ metadata or metadata suitable for a certain scope. However, although metadata are indeed expected to be ‘valid’ to play their envisaged role, in reality FAIR advocates and requires a plurality of metadata to facilitate the exploitation of objects in a wider variety of contexts, without tackling data quality issues.

Versions. The availability of version information or different versions of a digital object is used for assessing findability and accessibility (specifically the A2 principle).

5.2.3 Assessment metrics: approaches and technologies

The fact that the vast majority of approaches encompass more than one FAIR area (Section 4.2.3) is indicative of an assessment that is inherently metadata-oriented. It is indeed the metadata, rather than the object itself, that are used in the verifications. This also explains why there are metrics developed for data assessment tools that are applicable for evaluating any digital object.

Challenges arise when evaluating accessibility principles (namely, A1.1, A1.2, and A2), which are the only ones for which an approach based on the availability of documentation pertaining to an assessment criterion (e.g. a metadata retention policy) is found. This approach further highlights the persistent obstacles in developing automated solutions that address all the FAIR principles comprehensively.

The results reported in Section 4.2.3 about the technologies referred to in metric implementations suggest an evident gap between the willingness to provide communities with FAIR assessment tools and metrics and the specific decisions and needs characterising the processes of FAIRification and FAIRness assessment in community settings. There is no single technology that is globally considered suitable for implementing any of the FAIR principles, and each community is entitled to pick whatever technology it deems suitable for implementing a FAIR principle according to the principle’s formulation. The fact that some tools cater for injecting community-specific assessment metrics into their assessment pipelines helps compensate for this gap, but brings the risk of ‘implicit knowledge’: when a given technology is a de facto standard in a context or for a community, it is likely to be taken for granted and to disappear from the assessment practices produced by that community.

5.3 FAIR assessment prospects

The findings and discussions reported so far allow us to envisage some potential enhancements that might make future FAIR assessments more effective. It is desirable for forthcoming FAIR assessment tools to (a) make the assessment process as automatic as possible, (b) make the assessment process specification openly available, including details on the metrics exploited, (c) allow assessors to inject context-specific assessment specifications and metrics, and (d) provide assessors with concrete suggestions (possibly AI-based) aimed at augmenting the FAIRness of the assessed objects. All in all, assessment tools should help counter the perception that FAIRness is a ‘yes’ or ‘no’ feature; every FAIR assessment exercise or FAIRness indicator associated with an object should always be accompanied by context-related documentation clarifying the settings that led to it.

It is also desirable to gradually reduce the need for FAIR assessment tools by developing data production and publication pipelines that are FAIR ‘by design’. Although any such pipeline will indeed implement a specific interpretation of the FAIR principles, the one deemed suitable for its context, it will certainly result in a new generation of datasets, and more generally resources, that are born with a flavour of FAIRness. These datasets should be accompanied by metadata clarifying the specification implemented by the pipeline to make them FAIR (as already envisaged by R1.2). The richer and wider in scope the specification driving a FAIR by design pipeline is, the larger the set of contexts benefitting from the FAIRification will be. Data Management Plans might play a crucial role (David et al. 2023; Salazar et al. 2023; Specht et al. 2023) in promoting the development of documented FAIR by design management pipelines. The FIP2DMP pipeline can be used to automatically inform Data Management Plans about the decisions taken by a community regarding the use of FAIR Enabling Resources (Hettne et al. 2023). This can facilitate the adoption of community standards by the members of that community and promote FAIR by design data management practices.

In the development of FAIR by design pipelines, community involvement is pivotal. Indeed, it is within each community that the requirements for a FAIR implementation profile can be established. Since it is ultimately the end-user who verifies the FAIRness of a digital object, particularly in terms of reusability, it is essential for each community to foster initiatives that define actual FAIR implementations through a bottom-up process, aiming to achieve an informed consensus on machine-actionable specifics. An example in this direction is NASA, which, as a community, has committed to establishing interpretative boundaries and actions to achieve and measure the FAIRness of its research products in the context of its data infrastructures (SMD Data Repository Standards and Guidelines Working Group 2024).

Community-tailored FAIR by design pipelines would, on the one hand, overcome the constraints of a top-down defined FAIRness, which may not suit the broad spectrum of existing scenarios. One of these constraints is exemplified by the number of technologies that a rule-based assessment tool ought to incorporate: while a community may establish reference technologies, it is far more challenging for a single checklist to suffice for the needs of diverse communities. On the other hand, community-tailored FAIR by design pipelines can aid in establishing a notion of minimum requirements for absolute FAIRness, derived from the intersection of the different specifications, or, on the contrary, in proving its infeasibility.
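
The idea of deriving minimum requirements from the intersection of community specifications can be sketched in a few lines. The profile contents below are invented for illustration; the point is only the set operation.

```python
# Each community publishes the set of checks its FAIR implementation
# profile mandates (contents invented for illustration).
profiles = {
    "life-sciences": {"persistent-id", "rich-metadata", "open-protocol", "licence"},
    "earth-sciences": {"persistent-id", "rich-metadata", "licence", "provenance"},
    "social-sciences": {"persistent-id", "licence", "controlled-vocabulary"},
}

# Candidate minimum requirements for an "absolute" FAIRness: the checks
# every community agrees on. An empty intersection would instead be
# evidence that no such common core exists.
minimum_core = set.intersection(*profiles.values())
print(sorted(minimum_core))  # → ['licence', 'persistent-id']
```
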

Instead of attempting to devise a tool for generic FAIR assessment within a rule-based control context, which cannot cover the different scenarios in which research outputs are produced, it may be more advantageous to focus on community-specific assessment tools. Even in this scenario, the modularity of the tools and the granularity of the assessments performed would be essential for creating an adaptable instrument that evolves with ever-changing technologies and standards.
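
A minimal sketch of such a modular, community-extensible assessment tool, assuming a simple registry of user-defined checks (all names, checks, and record fields here are hypothetical, not the API of any existing tool):

```python
from typing import Callable, Dict, List

# Registry of named checks; communities register their own metrics.
CHECKS: Dict[str, Callable[[dict], bool]] = {}

def check(name: str):
    """Decorator registering a user-defined FAIR check under a name."""
    def register(fn: Callable[[dict], bool]):
        CHECKS[name] = fn
        return fn
    return register

@check("has-persistent-id")
def has_pid(record: dict) -> bool:
    # Illustrative rule: treat a DOI-prefixed identifier as persistent.
    return str(record.get("identifier", "")).startswith("doi:")

@check("has-licence")
def has_licence(record: dict) -> bool:
    return bool(record.get("license"))

def assess(record: dict, profile: List[str]) -> Dict[str, bool]:
    """Run only the checks selected by a community profile."""
    return {name: CHECKS[name](record) for name in profile}

result = assess(
    {"identifier": "doi:10.1234/x", "license": "CC-BY-4.0"},
    profile=["has-persistent-id", "has-licence"],
)
print(result)  # → {'has-persistent-id': True, 'has-licence': True}
```

Because the profile is just a list of check names, each community can select, extend, or replace metrics without modifying the tool itself.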

For examining the FAIRness of an object from a broad standpoint, large language models (LLMs) could serve as an initial benchmark for a preliminary FAIR evaluation. Such an approach would have the advantage of not being bound to rule-based verification: since the model would be trained on a comprehensive corpus, it could recognise a wide range of possibilities while providing a consistent interpretation of the FAIR principles across different scenarios.

6 Conclusion

This study analysed 20 FAIR assessment tools and their 1180 related metrics, answering four research questions to develop a comprehensive and up-to-date view of FAIR assessment.

The tools were analysed along six axes (assessment unit, assessment methodology, adaptability, discipline specificity, community specificity, and provisioning mode), highlighting trends emerging over time: an increasing variety of assessment units and a preference for automatic assessment methodologies, non-adaptable assessment methods, discipline and community generality, and the as-a-Service provisioning model. The inherent subjectivity in interpreting and applying the FAIR principles leads to a spectrum of assessment solutions, underscoring the challenge of reconciling the aspirational nature of the FAIR principles with precise measurement. Manual assessment practices fail to yield consistent results for the same reason that makes them valuable: they adapt to the variety of assessment contexts by avoiding extensional formulations. Automated tools, although objective in their processes, are not immune to subjectivity either, as they reflect the biases and interpretations of their developers. This is particularly evident in tools that do not support user-defined metrics, which could otherwise allow for a more nuanced FAIR assessment.

The metrics were analysed with respect to their coverage of the FAIR principles, the discrepancies between their declared intent and the aspects actually assessed, and the approaches and technologies employed for the assessment. This revealed gaps, discrepancies, and high heterogeneity between the existing metrics and the principles. This was to be expected: the principles are aspirational by design, deliberately allowing many different approaches to rendering the target items FAIR, whereas the metrics are called to assess concrete implementations of those principles in practice. The principles do not constitute a standard to adhere to (Mons et al. 2017), and some of them are multifaceted; metrics must therefore either commit to specific implementation decisions to make the assessment useful, or remain at the same level of genericity as the principle, thus leaving room for interpretation by the assessor and exposing the assessment to personal biases. Multifaceted principles are not uniformly assessed, with tools, especially automated ones, struggling to evaluate them programmatically. Accessibility principles, in particular, are not consistently addressed. The controls envisaged for assessing FAIRness also encompass aspects that extend beyond the original intentions of the principles’ authors. Concepts such as open, free, and valid are in fact employed within the context of FAIR assessment, reflecting a shifting awareness of the interconnected yet distinct issues associated with data management practices. Just as closed digital objects can be FAIR, data and metadata that are not valid may comply with the principles as well, depending on the context in which they were produced. The diversity of assessment approaches for the same principle and the absence of a universally accepted technology for implementing the FAIR principles, reflecting the diverse needs and preferences of scientific communities, further highlight the variability in interpretation, ultimately rendering absolute assessments of FAIRness impractical and, arguably, nonsensical.

Forthcoming FAIR assessment tools should include among their features the possibility of implementing new checks and should allow user-defined assessment profiles. The ‘publication’ of metrics would enable the development of a repository or registry of FAIR assessment implementations, fostering their peer review and their reuse or repurposing by different assessment tools, and ultimately promoting awareness of the available solutions without depending on a specific tool. The recently proposed FAIR Cookbook (Life Science) (Rocca-Serra et al. 2023) goes in this direction. In addition, the need for assessment tools will likely diminish if FAIR-by-design data production and publication pipelines are developed, leading to FAIR-born items. Of course, FAIR-born items are not universally FAIR; they simply comply with the specific implementation choices made by the data publishing community in its FAIR-by-design pipeline. Rather than trying to define a FAIRness that fits all purposes, shifting the focus from generic to community-specific FAIR assessment solutions would bring better results in the long run. A bottom-up approach would yield greater benefits, both short-term and long-term, as it would enable the immediate production of results informed by the specific needs of each community, thus ensuring immediate reusability. Furthermore, it would facilitate the identification of commonalities, thereby allowing for a shared definition of a broader FAIRness. LLMs could benefit FAIR assessment processes by untying them from rule-based constraints and by ensuring a consistent interpretation of the FAIR principles amidst the variety characterising scientific settings and outputs.

All in all, we argue that FAIRness is a valuable concept, yet FAIR is by design far from being a standard or a concrete specification whose compliance can be univocally assessed and measured. The FAIR principles were proposed to guide data producers and publishers; FAIRness assessment tools are thus expected to help these key players identify possible limitations in their data management practices with respect to good data management and stewardship.

Data Accessibility Statements

The data that support the findings of this study are openly available on Zenodo at .

Additional File

The additional file for this article can be found as follows:

Appendixes A.1 to A.5. DOI:

Funding Statement

Funded by: European Union’s Horizon 2020 and Horizon Europe research and innovation programmes.


Acknowledgements

We warmly thank D. Castelli (CNR-ISTI) for her valuable support and the many helpful comments she provided during the preparation of the manuscript. We sincerely thank the anonymous reviewers for their valuable feedback.

Funding information

This work has received funding from the European Union’s Horizon 2020 and Horizon Europe research and innovation programmes under the Blue Cloud project (grant agreement No. 862409), the Blue-Cloud 2026 project (grant agreement No. 101094227), the Skills4EOSC project (grant agreement No. 101058527), and the SoBigData-PlusPlus project (grant agreement No. 871042).

Competing Interests

The authors have no competing interests to declare.

Author Contributions

  • LC: Conceptualization, Funding acquisition, Methodology, Supervision, Validation, Visualization, Writing.
  • DM: Data curation, Formal Analysis, Investigation, Writing.
  • GP: Data curation, Formal Analysis, Investigation, Writing.

References

Aguilar Gómez, F 2022 FAIR EVA (Evaluator, Validator & Advisor). Spanish National Research Council. DOI:

Amdouni, E, Bouazzouni, S and Jonquet, C 2022 O’FAIRe: Ontology FAIRness Evaluator in the AgroPortal Semantic Resource Repository. In: Groth, P, et al. (eds.), The Semantic Web: ESWC 2022 Satellite Events. Cham: Springer International Publishing (Lecture Notes in Computer Science). pp. 89–94. DOI:

Ammar, A, et al. 2020 A semi-automated workflow for FAIR maturity indicators in the life sciences. Nanomaterials, 10(10): 2068. DOI:

Bahim, C, et al. 2020 The FAIR Data Maturity Model: An approach to harmonise FAIR assessments. Data Science Journal, 19: 41. DOI:

Bahim, C, Dekkers, M and Wyns, B 2019 Results of an Analysis of Existing FAIR Assessment Tools. RDA Report. DOI:

Bonello, J, Cachia, E and Alfino, N 2022 AutoFAIR – A portal for automating FAIR assessments for bioinformatics resources. Biochimica et Biophysica Acta (BBA) – Gene Regulatory Mechanisms, 1865(1): 194767. DOI:

Clarke, D J B, et al. 2019 FAIRshake: Toolkit to evaluate the FAIRness of research digital resources. Cell Systems, 9(5): 417–421. DOI:

Czerniak, A, et al. 2021 Lightweight FAIR assessment in the OpenAIRE Validator. In: Open Science Fair 2021. Available at: .

David, R, et al. 2023 Umbrella Data Management Plans to integrate FAIR data: Lessons from the ISIDORe and BY-COVID consortia for pandemic preparedness. Data Science Journal, 22: 35. DOI:

d’Aquin, M, et al. 2023 FAIREST: A framework for assessing research repositories. Data Intelligence, 5(1): 202–241. DOI:

De Miranda Azevedo, R and Dumontier, M 2020 Considerations for the conduction and interpretation of FAIRness evaluations. Data Intelligence, 2(1–2): 285–292. DOI:

Devaraju, A and Huber, R 2020 F-UJI – An automated FAIR data assessment tool. Zenodo. DOI:

Gaignard, A, et al. 2023 FAIR-Checker: Supporting digital resource findability and reuse with Knowledge Graphs and Semantic Web standards. Journal of Biomedical Semantics, 14(1): 7. DOI:

Garijo, D, Corcho, O and Poveda-Villalón, M 2021 FOOPS!: An ontology pitfall scanner for the FAIR Principles. [Posters, Demos, and Industry Tracks]. In: International Semantic Web Conference (ISWC) 2021.

Gehlen, K P, et al. 2022 Recommendations for discipline-specific FAIRness evaluation derived from applying an ensemble of evaluation tools. Data Science Journal, 21: 7. DOI:

Goble, C, et al. 2020 FAIR computational workflows. Data Intelligence, 2(1–2): 108–121. DOI:

González, E, Benítez, A and Garijo, D 2022 FAIROs: Towards FAIR assessment in research objects. In: Silvello, G, et al. (eds.), Linking Theory and Practice of Digital Libraries. Lecture Notes in Computer Science, vol. 13541. Cham: Springer International Publishing. pp. 68–80. DOI:

Hettne, K M, et al. 2023 FIP2DMP: Linking data management plans with FAIR implementation profiles. FAIR Connect, 1(1): 23–27. DOI:

Jacobsen, A, et al. 2020 FAIR Principles: Interpretations and implementation considerations. Data Intelligence, 2(1–2): 10–29. DOI:

Katz, D S, Gruenpeter, M and Honeyman, T 2021 Taking a fresh look at FAIR for research software. Patterns, 2(3): 100222. DOI:

Krans, N A, et al. 2022 FAIR assessment tools: Evaluating use and performance. NanoImpact, 27: 100402. DOI:

Lamprecht, A L, et al. 2020 Towards FAIR principles for research software. Data Science, 3(1): 37–59. DOI:

Mangione, D, Candela, L and Castelli, D 2022 A taxonomy of tools and approaches for FAIRification. In: 18th Italian Research Conference on Digital Libraries (IRCDL), Padua, Italy, 2022.

Matentzoglu, N, et al. 2018 MIRO: Guidelines for minimum information for the reporting of an ontology. Journal of Biomedical Semantics, 9(1): 6. DOI:

Mons, B, et al. 2017 Cloudy, increasingly FAIR; revisiting the FAIR Data guiding principles for the European Open Science Cloud. Information Services & Use, 37(1): 49–56. DOI:

Musen, M A, O’Connor, M J, Schultes, E, et al. 2022 Modeling community standards for metadata as templates makes data FAIR. Scientific Data, 9: 696. DOI:

Rocca-Serra, P, et al. 2023 The FAIR Cookbook – The essential resource for and by FAIR doers. Scientific Data, 10(1): 292. DOI:

Salazar, A, et al. 2023 How research data management plans can help in harmonizing open science and approaches in the digital economy. Chemistry – A European Journal, 29(9): e202202720. DOI:

Schultes, E, Magagna, B, Hettne, K M, Pergl, R, Suchánek, M and Kuhn, T 2020 Reusable FAIR implementation profiles as accelerators of FAIR convergence. In: Grossmann, G and Ram, S (eds.), Advances in Conceptual Modeling. ER 2020, Lecture Notes in Computer Science, vol. 12584. Cham: Springer. DOI:

SMD Data Repository Standards and Guidelines Working Group 2024 How to make NASA Science Data more FAIR. Available at: .

Soiland-Reyes, S, et al. 2022 Packaging research artefacts with RO-Crate. Data Science, 5(2): 97–138. DOI:

Specht, A, et al. 2023 The value of a data and digital object management plan (D(DO)MP) in fostering sharing practices in a multidisciplinary multinational project. Data Science Journal, 22: 38. DOI:

Sun, C, Emonet, V and Dumontier, M 2022 A comprehensive comparison of automated FAIRness evaluation tools. In: Semantic Web Applications and Tools for Health Care and Life Sciences: 13th International Conference, Leiden, Netherlands (Virtual Event), 10th–14th January 2022. pp. 44–53.

Thompson, M, et al. 2020 Making FAIR easy with FAIR tools: From creolization to convergence. Data Intelligence, 2(1–2): 87–95. DOI:

Wilkinson, M D, et al. 2016 The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data, 3(1): 160018. DOI:

Wilkinson, M D, et al. 2019 Evaluating FAIR maturity through a scalable, automated, community-governed framework. Scientific Data, 6(1): 174. DOI:
