
Research Results Section – Writing Guide and Examples

Research Results

Research results refer to the findings and conclusions derived from a systematic investigation or study conducted to answer a specific question or hypothesis. These results are typically presented in a written report or paper and can include various forms of data such as numerical data, qualitative data, statistics, charts, graphs, and visual aids.

Results Section in Research

The results section of the research paper presents the findings of the study. It is the part of the paper where the researcher reports the data collected during the study and analyzes it to draw conclusions.

In the results section, the researcher should describe the data that was collected, the statistical analysis performed, and the findings of the study. It is important to be objective and not interpret the data in this section. Instead, the researcher should report the data as accurately and objectively as possible.

Structure of Research Results Section

The structure of the research results section can vary depending on the type of research conducted, but in general, it should contain the following components:

  • Introduction: The introduction should provide an overview of the study, its aims, and its research questions. It should also briefly explain the methodology used to conduct the study.
  • Data presentation: This section presents the data collected during the study. It may include tables, graphs, or other visual aids to help readers better understand the data. The data presented should be organized in a logical and coherent way, with headings and subheadings used to help guide the reader.
  • Data analysis: In this section, the data presented in the previous section are analyzed and interpreted. The statistical tests used to analyze the data should be clearly explained, and the results of the tests should be presented in a way that is easy to understand.
  • Discussion of results: This section should provide an interpretation of the results of the study, including a discussion of any unexpected findings. The discussion should also address the study’s research questions and explain how the results contribute to the field of study.
  • Limitations: This section should acknowledge any limitations of the study, such as sample size, data collection methods, or other factors that may have influenced the results.
  • Conclusions: The conclusions should summarize the main findings of the study and provide a final interpretation of the results. The conclusions should also address the study’s research questions and explain how the results contribute to the field of study.
  • Recommendations: This section may provide recommendations for future research based on the study’s findings. It may also suggest practical applications for the study’s results in real-world settings.

Outline of Research Results Section

The following is an outline of the key components typically included in the Results section:

I. Introduction

  • A brief overview of the research objectives and hypotheses
  • A statement of the research question

II. Descriptive statistics

  • Summary statistics (e.g., mean, standard deviation) for each variable analyzed
  • Frequencies and percentages for categorical variables

III. Inferential statistics

  • Results of statistical analyses, including tests of hypotheses
  • Tables or figures to display statistical results

IV. Effect sizes and confidence intervals

  • Effect sizes (e.g., Cohen’s d, odds ratio) to quantify the strength of the relationship between variables
  • Confidence intervals to estimate the range of plausible values for the effect size (a brief computational sketch of items II–IV follows this outline)

V. Subgroup analyses

  • Results of analyses that examined differences between subgroups (e.g., by gender, age, treatment group)

VI. Limitations and assumptions

  • Discussion of any limitations of the study and potential sources of bias
  • Assumptions made in the statistical analyses

VII. Conclusions

  • A summary of the key findings and their implications
  • A statement of whether the hypotheses were supported or not
  • Suggestions for future research
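To make items II–IV of this outline concrete, here is a minimal, hypothetical sketch in Python using NumPy and SciPy. The groups, sample sizes, and values are invented for illustration and are not drawn from any study discussed in this guide.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical data: one outcome measured in a control and a treatment group.
control = rng.normal(loc=50, scale=10, size=40)
treatment = rng.normal(loc=55, scale=10, size=40)

# II. Descriptive statistics: per-group means and standard deviations.
for name, group in [("control", control), ("treatment", treatment)]:
    print(f"{name}: M = {group.mean():.2f}, SD = {group.std(ddof=1):.2f}, n = {len(group)}")

# III. Inferential statistics: independent-samples t-test of the group difference.
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# IV. Effect size (Cohen's d with a pooled SD) and a 95% CI for the mean difference.
n1, n2 = len(treatment), len(control)
pooled_sd = np.sqrt(((n1 - 1) * treatment.var(ddof=1) + (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))
cohens_d = (treatment.mean() - control.mean()) / pooled_sd
diff = treatment.mean() - control.mean()
se_diff = pooled_sd * np.sqrt(1 / n1 + 1 / n2)
t_crit = stats.t.ppf(0.975, df=n1 + n2 - 2)
print(f"Cohen's d = {cohens_d:.2f}, 95% CI for the difference = [{diff - t_crit * se_diff:.2f}, {diff + t_crit * se_diff:.2f}]")
```

In the written Results section, only the reported values (M, SD, n, t, p, d, and the interval) would appear, usually alongside a table or figure; the code itself does not belong in the paper.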

Example of Research Results Section

An Example of a Research Results Section could be:

  • This study sought to examine the relationship between sleep quality and academic performance in college students.
  • Hypothesis: College students who report better sleep quality will have higher GPAs than those who report poor sleep quality.
  • Methodology: Participants completed a survey about their sleep habits and academic performance.

II. Participants

  • Participants were college students (N=200) from a mid-sized public university in the United States.
  • The sample was evenly split by gender (50% female, 50% male) and predominantly white (85%).
  • Participants were recruited through flyers and online advertisements.

III. Results

  • Participants who reported better sleep quality had significantly higher GPAs (M=3.5, SD=0.5) than those who reported poor sleep quality (M=2.9, SD=0.6).
  • See Table 1 for a summary of the results.
  • Participants who reported consistent sleep schedules had higher GPAs than those with irregular sleep schedules.

IV. Discussion

  • The results support the hypothesis that better sleep quality is associated with higher academic performance in college students.
  • These findings have implications for college students, as prioritizing sleep could lead to better academic outcomes.
  • Limitations of the study include self-reported data and the lack of control for other variables that could impact academic performance.

V. Conclusion

  • College students who prioritize sleep may see a positive impact on their academic performance.
  • These findings highlight the importance of sleep in academic success.
  • Future research could explore interventions to improve sleep quality in college students.

Example of Research Results in a Research Paper:

Our study aimed to compare the performance of three different machine learning algorithms (Random Forest, Support Vector Machine, and Neural Network) in predicting customer churn in a telecommunications company. We collected a dataset of 10,000 customer records, with 20 predictor variables and a binary churn outcome variable.

Our analysis revealed that all three algorithms performed well in predicting customer churn, with an overall accuracy of 85%. However, the Random Forest algorithm showed the highest accuracy (88%), followed by the Support Vector Machine (86%) and the Neural Network (84%).

Furthermore, we found that the most important predictor variables for customer churn were monthly charges, contract type, and tenure. Random Forest identified monthly charges as the most important variable, while Support Vector Machine and Neural Network identified contract type as the most important.

Overall, our results suggest that machine learning algorithms can be effective in predicting customer churn in a telecommunications company, and that Random Forest is the most accurate algorithm for this task.
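Below is a minimal sketch of how such a three-way comparison might be run in Python with scikit-learn. The synthetic dataset, model settings, and feature handling are assumptions made for illustration; they do not reproduce the study’s actual data, its 84–88% accuracies, or its predictor rankings.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for 10,000 customer records with 20 predictors and a binary churn label.
X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "Support Vector Machine": make_pipeline(StandardScaler(), SVC(random_state=0)),
    "Neural Network": make_pipeline(StandardScaler(), MLPClassifier(max_iter=500, random_state=0)),
}

# Fit each model and report held-out accuracy.
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: accuracy = {model.score(X_test, y_test):.3f}")

# Predictor importance (Random Forest): rank features by impurity-based importance.
rf = models["Random Forest"]
ranked = sorted(enumerate(rf.feature_importances_), key=lambda kv: kv[1], reverse=True)
print("Top three predictors (feature index, importance):", ranked[:3])
```

In the Results text, only the resulting accuracies and importance rankings would be reported, typically summarized in a table.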

Example 3:

Title: The Impact of Social Media on Body Image and Self-Esteem

Abstract: This study aimed to investigate the relationship between social media use, body image, and self-esteem among young adults. A total of 200 participants were recruited from a university and completed self-report measures of social media use, body image satisfaction, and self-esteem.

Results: The results showed that social media use was significantly associated with body image dissatisfaction and lower self-esteem. Specifically, participants who reported spending more time on social media platforms had lower levels of body image satisfaction and self-esteem compared to those who reported less social media use. Moreover, the study found that comparing oneself to others on social media was a significant predictor of body image dissatisfaction and lower self-esteem.

Conclusion: These results suggest that social media use can have negative effects on body image satisfaction and self-esteem among young adults. It is important for individuals to be mindful of their social media use and to recognize the potential negative impact it can have on their mental health. Furthermore, interventions aimed at promoting positive body image and self-esteem should take into account the role of social media in shaping these attitudes and behaviors.

Importance of Research Results

Research results are important for several reasons, including:

  • Advancing knowledge: Research results can contribute to the advancement of knowledge in a particular field, whether it be in science, technology, medicine, social sciences, or humanities.
  • Developing theories: Research results can help to develop or modify existing theories and create new ones.
  • Improving practices: Research results can inform and improve practices in various fields, such as education, healthcare, business, and public policy.
  • Identifying problems and solutions: Research results can identify problems and provide solutions to complex issues in society, including issues related to health, environment, social justice, and economics.
  • Validating claims: Research results can validate or refute claims made by individuals or groups in society, such as politicians, corporations, or activists.
  • Providing evidence: Research results can provide evidence to support decision-making, policy-making, and resource allocation in various fields.

How to Write Results in A Research Paper

Here are some general guidelines on how to write results in a research paper:

  • Organize the results section: Start by organizing the results section in a logical and coherent manner. Divide the section into subsections if necessary, based on the research questions or hypotheses.
  • Present the findings: Present the findings in a clear and concise manner. Use tables, graphs, and figures to illustrate the data and make the presentation more engaging.
  • Describe the data: Describe the data in detail, including the sample size, response rate, and any missing data. Provide relevant descriptive statistics such as means, standard deviations, and ranges.
  • Interpret the findings: Interpret the findings in light of the research questions or hypotheses. Discuss the implications of the findings and the extent to which they support or contradict existing theories or previous research.
  • Discuss the limitations: Discuss the limitations of the study, including any potential sources of bias or confounding factors that may have affected the results.
  • Compare the results: Compare the results with those of previous studies or theoretical predictions. Discuss any similarities, differences, or inconsistencies.
  • Avoid redundancy: Avoid repeating information that has already been presented in the introduction or methods sections. Instead, focus on presenting new and relevant information.
  • Be objective: Be objective in presenting the results, avoiding any personal biases or interpretations.

When to Write Research Results

Here are situations in which to write research results:

  • After conducting research on the chosen topic and obtaining relevant data, organize the findings in a structured format that accurately represents the information gathered.
  • Once the data has been analyzed and interpreted, and conclusions have been drawn, begin the writing process.
  • Before starting to write, ensure that the research results adhere to the guidelines and requirements of the intended audience, such as a scientific journal or academic conference.
  • Begin by writing an abstract that briefly summarizes the research question, methodology, findings, and conclusions.
  • Follow the abstract with an introduction that provides context for the research, explains its significance, and outlines the research question and objectives.
  • The next section should be a literature review that provides an overview of existing research on the topic and highlights the gaps in knowledge that the current research seeks to address.
  • The methodology section should provide a detailed explanation of the research design, including the sample size, data collection methods, and analytical techniques used.
  • Present the research results in a clear and concise manner, using graphs, tables, and figures to illustrate the findings.
  • Discuss the implications of the research results, including how they contribute to the existing body of knowledge on the topic and what further research is needed.
  • Conclude the paper by summarizing the main findings, reiterating the significance of the research, and offering suggestions for future research.

Purpose of Research Results

The purposes of Research Results are as follows:

  • Informing policy and practice: Research results can provide evidence-based information to inform policy decisions, such as in the fields of healthcare, education, and environmental regulation. They can also inform best practices in fields such as business, engineering, and social work.
  • Addressing societal problems: Research results can be used to help address societal problems, such as reducing poverty, improving public health, and promoting social justice.
  • Generating economic benefits: Research results can lead to the development of new products, services, and technologies that can create economic value and improve quality of life.
  • Supporting academic and professional development: Research results can be used to support academic and professional development by providing opportunities for students, researchers, and practitioners to learn about new findings and methodologies in their field.
  • Enhancing public understanding: Research results can help to educate the public about important issues and promote scientific literacy, leading to more informed decision-making and better public policy.
  • Evaluating interventions: Research results can be used to evaluate the effectiveness of interventions, such as treatments, educational programs, and social policies. This can help to identify areas where improvements are needed and guide future interventions.
  • Contributing to scientific progress: Research results can contribute to the advancement of science by providing new insights and discoveries that can lead to new theories, methods, and techniques.
  • Informing decision-making: Research results can provide decision-makers with the information they need to make informed decisions. This can include decision-making at the individual, organizational, or governmental levels.
  • Fostering collaboration: Research results can facilitate collaboration between researchers and practitioners, leading to new partnerships, interdisciplinary approaches, and innovative solutions to complex problems.

Advantages of Research Results

Some Advantages of Research Results are as follows:

  • Improved decision-making: Research results can help inform decision-making in various fields, including medicine, business, and government. For example, research on the effectiveness of different treatments for a particular disease can help doctors make informed decisions about the best course of treatment for their patients.
  • Innovation: Research results can lead to the development of new technologies, products, and services. For example, research on renewable energy sources can lead to the development of new and more efficient ways to harness renewable energy.
  • Economic benefits: Research results can stimulate economic growth by providing new opportunities for businesses and entrepreneurs. For example, research on new materials or manufacturing techniques can lead to the development of new products and processes that can create new jobs and boost economic activity.
  • Improved quality of life: Research results can contribute to improving the quality of life for individuals and society as a whole. For example, research on the causes of a particular disease can lead to the development of new treatments and cures, improving the health and well-being of millions of people.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer



How to Write the Results/Findings Section in Research


What is the research paper Results section and what does it do?

The Results section of a scientific research paper represents the core findings of a study derived from the methods applied to gather and analyze information. It presents these findings in a logical sequence without bias or interpretation from the author, setting up the reader for later interpretation and evaluation in the Discussion section. A major purpose of the Results section is to break down the data into sentences that show its significance to the research question(s).

The Results section appears third in the section sequence in most scientific papers. It follows the presentation of the Methods and Materials and is presented before the Discussion section —although the Results and Discussion are presented together in many journals. This section answers the basic question “What did you find in your research?”

What is included in the Results section?

The Results section should include the findings of your study and ONLY the findings of your study. The findings include:

  • Data presented in tables, charts, graphs, and other figures (may be placed into the text or on separate pages at the end of the manuscript)
  • A contextual analysis of this data explaining its meaning in sentence form
  • All data that corresponds to the central research question(s)
  • All secondary findings (secondary outcomes, subgroup analyses, etc.)

If the scope of the study is broad, if you studied a variety of variables, or if the methodology used yields a wide range of different results, present only those results that are most relevant to the research question stated in the Introduction section.

As a general rule, any information that does not present the direct findings or outcome of the study should be left out of this section. Unless the journal requests that authors combine the Results and Discussion sections, explanations and interpretations should be omitted from the Results.

How are the results organized?

The best way to organize your Results section is “logically.” One logical and clear method of organizing research results is to provide them alongside the research questions—within each research question, present the type of data that addresses that research question.

Let’s look at an example. Your research question is based on a survey among patients who were treated at a hospital and received postoperative care. Let’s say your first research question is:


“What do hospital patients over age 55 think about postoperative care?”

This can actually be represented as a heading within your Results section, though it might be presented as a statement rather than a question:

Attitudes towards postoperative care in patients over the age of 55

Now present the results that address this specific research question first. In this case, perhaps a table illustrating data from a survey. Likert items can be included in this example. Tables can also present standard deviations, probabilities, correlation matrices, etc.

Following this, present a content analysis, in words, of one end of the spectrum of the survey or data table. In our example case, start with the POSITIVE survey responses regarding postoperative care, using descriptive phrases. For example:

“Sixty-five percent of patients over 55 responded positively to the question ‘Are you satisfied with your hospital’s postoperative care?’” (Fig. 2)

Include other results such as subcategory analyses. The amount of textual description used will depend on how much interpretation of tables and figures is necessary and how many examples the reader needs in order to understand the significance of your research findings.
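As a hypothetical illustration of how such a percentage might be computed from raw survey data, here is a short Python sketch using pandas. The column names, age cutoff, and responses are invented for the example.

```python
import pandas as pd

# Hypothetical survey data: age and a 5-point Likert satisfaction item
# (1 = very dissatisfied ... 5 = very satisfied).
df = pd.DataFrame({
    "age":          [62, 71, 58, 66, 80, 59, 73, 65],
    "satisfaction": [5, 4, 2, 5, 3, 4, 5, 2],
})

over_55 = df[df["age"] > 55]
pct_positive = (over_55["satisfaction"] >= 4).mean() * 100
print(f"{pct_positive:.0f}% of patients over 55 responded positively")
```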

Next, present a content analysis of another part of the spectrum of the same research question, perhaps the NEGATIVE or NEUTRAL responses to the survey. For instance:

  “As Figure 1 shows, 15 out of 60 patients in Group A responded negatively to Question 2.”

After you have assessed the data in one figure and explained it sufficiently, move on to your next research question. For example:

  “How does patient satisfaction correspond to in-hospital improvements made to postoperative care?”


This kind of data may be presented through a figure or set of figures (for instance, a paired T-test table).

Explain the data you present, here in a table, with a concise content analysis:

“The p-value for the comparison between the before and after groups of patients was .03% (Fig. 2), indicating that the greater the dissatisfaction among patients, the more frequent the improvements that were made to postoperative care.”
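A before-and-after comparison of this kind is typically analyzed with a paired t-test. The following is a minimal, hypothetical sketch in Python with SciPy; the satisfaction scores are invented and do not correspond to the figure or p-value quoted above.

```python
from scipy import stats

# Hypothetical satisfaction scores (1-10) for the same ten patients,
# measured before and after improvements to postoperative care.
before = [5, 6, 4, 7, 5, 6, 3, 5, 6, 4]
after = [7, 7, 6, 8, 6, 7, 5, 6, 8, 6]

t_stat, p_value = stats.ttest_rel(after, before)
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
```

Only the reported statistic and p-value, together with the relevant table or figure reference, belong in the Results text itself.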

Let’s examine another example of a Results section from a study on plant tolerance to heavy metal stress . In the Introduction section, the aims of the study are presented as “determining the physiological and morphological responses of Allium cepa L. towards increased cadmium toxicity” and “evaluating its potential to accumulate the metal and its associated environmental consequences.” The Results section presents data showing how these aims are achieved in tables alongside a content analysis, beginning with an overview of the findings:

“Cadmium caused inhibition of root and leaf elongation, with increasing effects at higher exposure doses (Fig. 1a-c).”

The figure containing this data is cited in parentheses. Note that this author has combined three graphs into one single figure. Separating the data into separate graphs focusing on specific aspects makes it easier for the reader to assess the findings, and consolidating this information into one figure saves space and makes it easy to locate the most relevant results.


Following this overall summary, the relevant data in the tables is broken down into greater detail in text form in the Results section.

  • “Results on the bio-accumulation of cadmium were found to be the highest (17.5 mg kg⁻¹) in the bulb, when the concentration of cadmium in the solution was 1×10⁻² M, and lowest (0.11 mg kg⁻¹) in the leaves when the concentration was 1×10⁻³ M.”

Captioning and Referencing Tables and Figures

Tables and figures are central components of your Results section and you need to carefully think about the most effective way to use graphs and tables to present your findings . Therefore, it is crucial to know how to write strong figure captions and to refer to them within the text of the Results section.

The most important advice one can give here as well as throughout the paper is to check the requirements and standards of the journal to which you are submitting your work. Every journal has its own design and layout standards, which you can find in the author instructions on the target journal’s website. Perusing a journal’s published articles will also give you an idea of the proper number, size, and complexity of your figures.

Regardless of which format you use, the figures should be placed in the order they are referenced in the Results section and be as clear and easy to understand as possible. If there are multiple variables being considered (within one or more research questions), it can be a good idea to split these up into separate figures. Subsequently, these can be referenced and analyzed under separate headings and paragraphs in the text.

To create a caption, consider the research question being asked and change it into a phrase. For instance, if one question is “Which color did participants choose?”, the caption might be “Color choice by participant group.” Or in our last research paper example, where the question was “What is the concentration of cadmium in different parts of the onion after 14 days?” the caption reads:

 “Fig. 1(a-c): Mean concentration of Cd determined in (a) bulbs, (b) leaves, and (c) roots of onions after a 14-day period.”

Steps for Composing the Results Section

Because each study is unique, there is no one-size-fits-all approach when it comes to designing a strategy for structuring and writing the section of a research paper where findings are presented. The content and layout of this section will be determined by the specific area of research, the design of the study and its particular methodologies, and the guidelines of the target journal and its editors. However, the following steps can be used to compose the results of most scientific research studies and are essential for researchers who are new to preparing a manuscript for publication or who need a reminder of how to construct the Results section.

Step 1 : Consult the guidelines or instructions that the target journal or publisher provides authors and read research papers it has published, especially those with similar topics, methods, or results to your study.

  • The guidelines will generally outline specific requirements for the results or findings section, and the published articles will provide sound examples of successful approaches.
  • Note length limitations and restrictions on content. For instance, while many journals require the Results and Discussion sections to be separate, others do not—qualitative research papers often include results and interpretations in the same section (“Results and Discussion”).
  • Reading the aims and scope in the journal’s “ guide for authors ” section and understanding the interests of its readers will be invaluable in preparing to write the Results section.

Step 2 : Consider your research results in relation to the journal’s requirements and catalogue your results.

  • Focus on experimental results and other findings that are especially relevant to your research questions and objectives and include them even if they are unexpected or do not support your ideas and hypotheses.
  • Catalogue your findings—use subheadings to streamline and clarify your report. This will help you avoid excessive and peripheral details as you write and also help your reader understand and remember your findings. Create appendices that might interest specialists but prove too long or distracting for other readers.
  • Decide how you will structure your results. You might match the order of the research questions and hypotheses to your results, or you could arrange them according to the order presented in the Methods section. A chronological order or even a hierarchy of importance or meaningful grouping of main themes or categories might prove effective. Consider your audience, evidence, and most importantly, the objectives of your research when choosing a structure for presenting your findings.

Step 3 : Design figures and tables to present and illustrate your data.

  • Tables and figures should be numbered according to the order in which they are mentioned in the main text of the paper.
  • Information in figures should be relatively self-explanatory (with the aid of captions), and their design should include all definitions and other information necessary for readers to understand the findings without reading all of the text.
  • Use tables and figures as a focal point to tell a clear and informative story about your research and avoid repeating information. But remember that while figures clarify and enhance the text, they cannot replace it.

Step 4 : Draft your Results section using the findings and figures you have organized.

  • The goal is to communicate this complex information as clearly and precisely as possible; precise and compact phrases and sentences are most effective.
  • In the opening paragraph of this section, restate your research questions or aims to focus the reader’s attention on what the results are trying to show. It is also a good idea to summarize key findings at the end of this section to create a logical transition to the interpretation and discussion that follows.
  • Try to write in the past tense and the active voice to relay the findings since the research has already been done and the agent is usually clear. This will ensure that your explanations are also clear and logical.
  • Make sure that any specialized terminology or abbreviation you have used here has been defined and clarified in the  Introduction section .

Step 5 : Review your draft; edit and revise until it reports results exactly as you would like to have them reported to your readers.

  • Double-check the accuracy and consistency of all the data, as well as all of the visual elements included.
  • Read your draft aloud to catch language errors (grammar, spelling, and mechanics), awkward phrases, and missing transitions.
  • Ensure that your results are presented in the best order to focus on objectives and prepare readers for interpretations, valuations, and recommendations in the Discussion section . Look back over the paper’s Introduction and background while anticipating the Discussion and Conclusion sections to ensure that the presentation of your results is consistent and effective.
  • Consider seeking additional guidance on your paper. Find additional readers to look over your Results section and see if it can be improved in any way. Peers, professors, or qualified experts can provide valuable insights.

One excellent option is to use a professional English proofreading and editing service  such as Wordvice, including our paper editing service . With hundreds of qualified editors from dozens of scientific fields, Wordvice has helped thousands of authors revise their manuscripts and get accepted into their target journals. Read more about the  proofreading and editing process  before proceeding with getting academic editing services and manuscript editing services for your manuscript.

As the representation of your study’s data output, the Results section presents the core information in your research paper. By writing with clarity and conciseness and by highlighting and explaining the crucial findings of their study, authors increase the impact and effectiveness of their research manuscripts.

For more articles and videos on writing your research manuscript, visit Wordvice’s Resources page.

Organizing Your Social Sciences Research Paper: 7. The Results (USC Libraries Research Guides)

The results section is where you report the findings of your study based upon the methodology [or methodologies] you applied to gather information. The results section should state the findings of the research arranged in a logical sequence without bias or interpretation. A section describing results should be particularly detailed if your paper includes data generated from your own research.

Annesley, Thomas M. "Show Your Cards: The Results Section and the Poker Game." Clinical Chemistry 56 (July 2010): 1066-1070.

Importance of a Good Results Section

When formulating the results section, it's important to remember that the results of a study do not prove anything . Findings can only confirm or reject the hypothesis underpinning your study. However, the act of articulating the results helps you to understand the problem from within, to break it into pieces, and to view the research problem from various perspectives.

The page length of this section is set by the amount and types of data to be reported . Be concise. Use non-textual elements appropriately, such as figures and tables, to present findings more effectively. In deciding what data to describe in your results section, you must clearly distinguish information that would normally be included in a research paper from any raw data or other content that could be included as an appendix. In general, raw data that has not been summarized should not be included in the main text of your paper unless requested to do so by your professor.

Avoid providing data that is not critical to answering the research question . The background information you described in the introduction section should provide the reader with any additional context or explanation needed to understand the results. A good strategy is to always re-read the background section of your paper after you have written up your results to ensure that the reader has enough context to understand the results [and, later, how you interpreted the results in the discussion section of your paper that follows].

Bavdekar, Sandeep B. and Sneha Chandak. "Results: Unraveling the Findings." Journal of the Association of Physicians of India 63 (September 2015): 44-46; Brett, Paul. "A Genre Analysis of the Results Section of Sociology Articles." English for Specific Purposes 13 (1994): 47-59; Burton, Neil et al. Doing Your Education Research Project. Los Angeles, CA: SAGE, 2008; Results. The Structure, Format, Content, and Style of a Journal-Style Scientific Paper. Department of Biology. Bates College; Kretchmer, Paul. Twelve Steps to Writing an Effective Results Section. San Francisco Edit; "Reporting Findings." In Making Sense of Social Research, Malcolm Williams, editor. (London: SAGE Publications, 2003), pp. 188-207.

Structure and Writing Style

I.  Organization and Approach

For most research papers in the social and behavioral sciences, there are two possible ways of organizing the results . Both approaches are appropriate in how you report your findings, but use only one approach.

  • Present a synopsis of the results followed by an explanation of key findings . This approach can be used to highlight important findings. For example, you may have noticed an unusual correlation between two variables during the analysis of your findings. It is appropriate to highlight this finding in the results section. However, speculating as to why this correlation exists and offering a hypothesis about what may be happening belongs in the discussion section of your paper.
  • Present a result and then explain it, before presenting the next result then explaining it, and so on, then end with an overall synopsis. This is the preferred approach if you have multiple results of equal significance. It is more common in longer papers because it helps the reader to better understand each finding. In this model, it is helpful to provide a brief conclusion that ties each of the findings together and provides a narrative bridge to the discussion section of your paper.

NOTE :   Just as the literature review should be arranged under conceptual categories rather than systematically describing each source, you should also organize your findings under key themes related to addressing the research problem. This can be done under either format noted above [i.e., a thorough explanation of the key results or a sequential, thematic description and explanation of each finding].

II.  Content

In general, the content of your results section should include the following:

  • Introductory context for understanding the results by restating the research problem underpinning your study . This is useful in re-orientating the reader's focus back to the research problem after having read a review of the literature and your explanation of the methods used for gathering and analyzing information.
  • Inclusion of non-textual elements, such as, figures, charts, photos, maps, tables, etc. to further illustrate key findings, if appropriate . Rather than relying entirely on descriptive text, consider how your findings can be presented visually. This is a helpful way of condensing a lot of data into one place that can then be referred to in the text. Consider referring to appendices if there is a lot of non-textual elements.
  • A systematic description of your results, highlighting for the reader observations that are most relevant to the topic under investigation. Not all results that emerge from the methodology used to gather information may be related to answering the "So What?" question. Do not confuse observations with interpretations; observations in this context refer to highlighting important findings you discovered through a process of reviewing prior literature and gathering data.
  • The page length of your results section is guided by the amount and types of data to be reported. However, focus on findings that are important and related to addressing the research problem. It is not uncommon to have unanticipated results that are not relevant to answering the research question. This is not to say that you should not acknowledge tangential findings; in fact, they can be flagged as areas for further research in the conclusion of your paper. However, spending time in the results section describing tangential findings clutters your overall results section and distracts the reader.
  • A short paragraph that concludes the results section by synthesizing the key findings of the study . Highlight the most important findings you want readers to remember as they transition into the discussion section. This is particularly important if, for example, there are many results to report, the findings are complicated or unanticipated, or they are impactful or actionable in some way [i.e., able to be pursued in a feasible way applied to practice].

NOTE:   Always use the past tense when referring to your study's findings. Reference to findings should always be described as having already happened because the method used to gather the information has been completed.

III.  Problems to Avoid

When writing the results section, avoid doing the following :

  • Discussing or interpreting your results . Save this for the discussion section of your paper, although where appropriate, you should compare or contrast specific results to those found in other studies [e.g., "Similar to the work of Smith [1990], one of the findings of this study is the strong correlation between motivation and academic achievement...."].
  • Reporting background information or attempting to explain your findings. This should have been done in your introduction section, but don't panic! Often the results of a study point to the need for additional background information or to explain the topic further, so don't think you did something wrong. Writing up research is rarely a linear process. Always revise your introduction as needed.
  • Ignoring negative results . A negative result generally refers to a finding that does not support the underlying assumptions of your study. Do not ignore them. Document these findings and then state in your discussion section why you believe a negative result emerged from your study. Note that negative results, and how you handle them, can give you an opportunity to write a more engaging discussion section, therefore, don't be hesitant to highlight them.
  • Including raw data or intermediate calculations . Ask your professor if you need to include any raw data generated by your study, such as transcripts from interviews or data files. If raw data is to be included, place it in an appendix or set of appendices that are referred to in the text.
  • Be as factual and concise as possible in reporting your findings . Do not use phrases that are vague or non-specific, such as, "appeared to be greater than other variables..." or "demonstrates promising trends that...." Subjective modifiers should be explained in the discussion section of the paper [i.e., why did one variable appear greater? Or, how does the finding demonstrate a promising trend?].
  • Presenting the same data or repeating the same information more than once . If you want to highlight a particular finding, it is appropriate to do so in the results section. However, you should emphasize its significance in relation to addressing the research problem in the discussion section. Do not repeat it in your results section because you can do that in the conclusion of your paper.
  • Confusing figures with tables. Be sure to properly label any non-textual elements in your paper. Don't call a chart an illustration or a figure a table.

Annesley, Thomas M. "Show Your Cards: The Results Section and the Poker Game." Clinical Chemistry 56 (July 2010): 1066-1070; Bavdekar, Sandeep B. and Sneha Chandak. "Results: Unraveling the Findings." Journal of the Association of Physicians of India 63 (September 2015): 44-46; Burton, Neil et al. Doing Your Education Research Project . Los Angeles, CA: SAGE, 2008;  Caprette, David R. Writing Research Papers. Experimental Biosciences Resources. Rice University; Hancock, Dawson R. and Bob Algozzine. Doing Case Study Research: A Practical Guide for Beginning Researchers . 2nd ed. New York: Teachers College Press, 2011; Introduction to Nursing Research: Reporting Research Findings. Nursing Research: Open Access Nursing Research and Review Articles. (January 4, 2012); Kretchmer, Paul. Twelve Steps to Writing an Effective Results Section. San Francisco Edit ; Ng, K. H. and W. C. Peh. "Writing the Results." Singapore Medical Journal 49 (2008): 967-968; Reporting Research Findings. Wilder Research, in partnership with the Minnesota Department of Human Services. (February 2009); Results. The Structure, Format, Content, and Style of a Journal-Style Scientific Paper. Department of Biology. Bates College; Schafer, Mickey S. Writing the Results. Thesis Writing in the Sciences. Course Syllabus. University of Florida.

Writing Tip

Why Don't I Just Combine the Results Section with the Discussion Section?

It's not unusual to find articles in scholarly social science journals where the author(s) have combined a description of the findings with a discussion about their significance and implications. You could do this. However, if you are inexperienced writing research papers, consider creating two distinct sections for each section in your paper as a way to better organize your thoughts and, by extension, your paper. Think of the results section as the place where you report what your study found; think of the discussion section as the place where you interpret the information and answer the "So What?" question. As you become more skilled writing research papers, you can consider melding the results of your study with a discussion of its implications.

Driscoll, Dana Lynn and Aleksandra Kasztalska. Writing the Experimental Report: Methods, Results, and Discussion. The Writing Lab and The OWL. Purdue University.


Writing a Scientific Paper (UCI Libraries)


Writing a "good" results section


This is the core of the paper. Don't start the Results section with methods you left out of the Materials and Methods section. You need to give an overall description of the experiments and present the data you found.

"Results Checklist" (from Chris A. Mack, How to Write a Good Scientific Paper, SPIE, 2018):

  • Factual statements supported by evidence. Short and sweet without excess words
  • Present representative data rather than endlessly repetitive data
  • Discuss variables only if they had an effect (positive or negative)
  • Use meaningful statistics
  • Avoid redundancy. If it is in the tables or captions, you may not need to repeat it

Additional tips for results sections (see also the short article by Dr. Brett Couch and Dr. Deena Wassenberg, Biology Program, University of Minnesota):

  • Present the results of the paper, in logical order, using tables and graphs as necessary.
  • Explain the results and show how they help to answer the research questions posed in the Introduction. Evidence does not explain itself; the results must be presented and then explained. 
  • Avoid: presenting results that are never discussed; presenting results in chronological order rather than logical order; ignoring results that do not support the conclusions.
  • Number tables and figures separately beginning with 1 (i.e. Table 1, Table 2, Figure 1, etc.).
  • Do not attempt to evaluate the results in this section. Report only what you found; hold all discussion of the significance of the results for the Discussion section.
  • It is not necessary to describe every step of your statistical analyses. Scientists understand all about null hypotheses, rejection rules, and so forth and do not need to be reminded of them. Just say something like, "Honeybees did not use the flowers in proportion to their availability (χ² = 7.9, p < 0.05, d.f. = 4, chi-square test)." Likewise, cite tables and figures without describing in detail how the data were manipulated. Explanations of this sort should appear in a legend or caption written on the same page as the figure or table. (A brief computational sketch of such a chi-square report follows this list.)
  • You must refer in the text to each figure or table you include in your paper.
  • Tables generally should report summary-level data, such as means ± standard deviations, rather than all your raw data.  A long list of all your individual observations will mean much less than a few concise, easy-to-read tables or figures that bring out the main findings of your study.  
  • Only use a figure (graph) when the data lend themselves to a good visual representation.  Avoid using figures that show too many variables or trends at once, because they can be hard to understand.

From:  https://writingcenter.gmu.edu/guides/imrad-results-discussion
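As a brief illustration of where a report like the honeybee sentence above comes from, here is a hypothetical chi-square goodness-of-fit sketch in Python with SciPy. The visit counts and expected counts are invented for the example.

```python
from scipy.stats import chisquare

# Hypothetical counts of honeybee visits to five flower types, and the counts
# expected if visits were proportional to each type's availability.
observed = [48, 30, 12, 6, 4]
expected = [35, 25, 20, 12, 8]  # both lists sum to 100 visits

result = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {result.statistic:.1f}, d.f. = {len(observed) - 1}, p = {result.pvalue:.3f}")
```

Only the one-sentence report of the test, as in the example above, would appear in the paper.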


Writing up a Research Report


Stefan Hunziker and Michael Blankenagel

A research report is one big argument about how and why you came up with your conclusions. To make it a convincing argument, a typical guiding structure has developed. In the different chapters, there are distinct issues that need to be addressed to explain to the reader why your conclusions are valid. The governing principle for writing the report is full disclosure: to explain everything and ensure replicability by another researcher.

Hunziker, S., & Blankenagel, M. (2024). Writing up a Research Report. In Research Design in Business and Management. Springer Gabler, Wiesbaden. https://doi.org/10.1007/978-3-658-42739-9_4

Sacred Heart University Library

Organizing Academic Research Papers: 7. The Results


The results section of the research paper is where you report the findings of your study based upon the information gathered as a result of the methodology [or methodologies] you applied. The results section should simply state the findings, without bias or interpretation, and arranged in a logical sequence. The results section should always be written in the past tense. A section describing results [a.k.a., "findings"] is particularly necessary if your paper includes data generated from your own research.

Importance of a Good Results Section

When formulating the results section, it's important to remember that the results of a study do not prove anything . Research results can only confirm or reject the research problem underpinning your study. However, the act of articulating the results helps you to understand the problem from within, to break it into pieces, and to view the research problem from various perspectives.

The page length of this section is set by the amount and types of data to be reported . Be concise, using non-textual elements, such as figures and tables, if appropriate, to present results more effectively. In deciding what data to describe in your results section, you must clearly distinguish material that would normally be included in a research paper from any raw data or other material that could be included as an appendix. In general, raw data should not be included in the main text of your paper unless requested to do so by your professor.

Avoid providing data that is not critical to answering the research question . The background information you described in the introduction section should provide the reader with any additional context or explanation needed to understand the results. A good rule is to always re-read the background section of your paper after you have written up your results to ensure that the reader has enough context to understand the results [and, later, how you interpreted the results in the discussion section of your paper].

Burton, Neil et al. Doing Your Education Research Project. Los Angeles, CA: SAGE, 2008; Results. The Structure, Format, Content, and Style of a Journal-Style Scientific Paper. Department of Biology. Bates College.

Structure and Writing Style

I. Structure and Approach

For most research paper formats, there are two ways of presenting and organizing the results .

  • Present the results followed by a short explanation of the findings . For example, you may have noticed an unusual correlation between two variables during the analysis of your findings. It is correct to point this out in the results section. However, speculating as to why this correlation exists, and offering a hypothesis about what may be happening, belongs in the discussion section of your paper.
  • Present a section and then discuss it, before presenting the next section then discussing it, and so on . This is more common in longer papers because it helps the reader to better understand each finding. In this model, it can be helpful to provide a brief conclusion in the results section that ties each of the findings together and links to the discussion.

NOTE: The discussion section should generally follow the same format chosen in presenting and organizing the results.

II.  Content

In general, the content of your results section should include the following elements:

  • An introductory context for understanding the results by restating the research problem that underpins the purpose of your study.
  • A summary of your key findings arranged in a logical sequence that generally follows your methodology section.
  • Inclusion of non-textual elements, such as figures, charts, photos, maps, and tables, to further illustrate the findings, if appropriate.
  • In the text, a systematic description of your results, highlighting for the reader observations that are most relevant to the topic under investigation [remember that not all results that emerge from the methodology that you used to gather the data may be relevant].
  • Use of the past tense when referring to your results.
  • The page length of your results section is guided by the amount and types of data to be reported. However, focus only on findings that are important and related to addressing the research problem.

Using Non-textual Elements

  • Either place figures, tables, charts, etc. within the text of the results section, or include them at the back of the report; do one or the other, but never both.
  • In the text, refer to each non-textual element in numbered order [e.g.,  Table 1, Table 2; Chart 1, Chart 2; Map 1, Map 2].
  • If you place non-textual elements at the end of the report, make sure they are clearly distinguished from any attached appendix materials, such as raw data.
  • Regardless of placement, each non-textual element must be numbered consecutively and accompanied by a caption or heading [by convention, a table's title and description go above it, while a figure's caption goes below it].
  • In proofreading your results section, be sure that each non-textual element is sufficiently complete so that it could stand on its own, separate from the text.

III. Problems to Avoid

When writing the results section, avoid doing the following :

  • Discussing or interpreting your results. Save all this for the next section of your paper, although where appropriate, you should compare or contrast specific results to those found in other studies [e.g., "Similar to Smith [1990], one of the findings of this study is the strong correlation between motivation and academic achievement...."].
  • Reporting background information or attempting to explain your findings; this should have been done in your Introduction section, but don't panic! Often the results of a study point to the need to provide additional background information or to explain the topic further, so don't think you did something wrong. Revise your introduction as needed.
  • Ignoring negative results. If some of your results fail to support your hypothesis, do not ignore them. Document them, then state in your discussion section why you believe a negative result emerged from your study. Note that negative results, and how you handle them, often provide you with the opportunity to write a more engaging discussion section; therefore, don't be afraid to highlight them.
  • Including raw data or intermediate calculations. Ask your professor if you need to include any raw data generated by your study, such as transcripts from interviews or data files. If raw data is to be included, place it in an appendix or set of appendices that are referred to in the text.
  • Using vague or non-specific phrases, such as "appeared to be greater or lesser than..." or "demonstrates promising trends that....". Be as factual and concise as possible in reporting your findings.
  • Presenting the same data or repeating the same information more than once. If you feel the need to highlight something, you will have a chance to do that in the discussion section.
  • Confusing figures with tables. Be sure to properly label any non-textual elements in your paper. If you are not sure, look up the term in a dictionary.

Burton, Neil et al. Doing Your Education Research Project. Los Angeles, CA: SAGE, 2008; Caprette, David R. Writing Research Papers. Experimental Biosciences Resources. Rice University; Hancock, Dawson R. and Bob Algozzine. Doing Case Study Research: A Practical Guide for Beginning Researchers. 2nd ed. New York: Teachers College Press, 2011; Introduction to Nursing Research: Reporting Research Findings. Nursing Research: Open Access Nursing Research and Review Articles. (January 4, 2012); Reporting Research Findings. Wilder Research, in partnership with the Minnesota Department of Human Services. (February 2009); Results. The Structure, Format, Content, and Style of a Journal-Style Scientific Paper. Department of Biology, Bates College; Schafer, Mickey S. Writing the Results. Thesis Writing in the Sciences. Course Syllabus. University of Florida.

Writing Tip

Why Don't I Just Combine the Results Section with the Discussion Section?

It's not unusual to find articles in social science journals where the author(s) have combined a description of the findings from the study with a discussion about their implications. You could do this. However, if you are inexperienced at writing research papers, consider creating two distinct sections as a way to better organize your thoughts and, by extension, your paper. Think of the results section as the place where you report what your study found; think of the discussion section as the place where you interpret your data and answer the "so what?" question. As you become more skilled at writing research papers, you may want to meld the results of your study with a discussion of its implications.



How to Write a Results Section for a Dissertation or Research Paper: Guide & Examples


A results section is a crucial part of a research paper or dissertation, where you analyze your major findings. This section goes beyond simply presenting study outcomes. You should also include a comprehensive statistical analysis and interpret the collected data in detail.

Without a results section, it is impossible to imagine a piece of scientific work. Your task here is to present your study findings. What are qualitative or quantitative indicators? How do you use tables and diagrams? How do you describe data? Our article answers all these questions and many more. So, read further to discover how to analyze and describe your research results, or contact our professionals for dissertation help from StudyCrumb.

What Is a Results Section of Dissertation?

The results section of a dissertation is a statement of the data from your research. Here you should present the main findings of your study to your readers. This section aims to show information objectively, systematically, and concisely. You may use text supplemented with illustrations. In general, this section's length is not limited, but it should include all necessary data. Interpretations or conclusions should not be included in this section. Therefore, in theory, this is one of your shortest sections, but it can also be one of the most challenging.

The introduction presents a research topic and answers the question "why?". The methods section explains the data collection process and answers "how?". Meanwhile, the results section shows the actual data gained from your experiments and tells "what?". Thus, this part plays a critical role in highlighting the study's relevance and novelty, so you should figure out how to write it correctly. Here are the main tasks that you should keep in mind while writing:

  • Results answer the question "What was found in your research?"
  • Results contain only your study's outcome. They do not include comments or interpretations.
  • Results must always be presented accurately & objectively.
  • Tables & figures are used to draw readers' attention. But the same data should never be presented both as a table and as a figure, and data shown in a table should not be repeated in the text.

Dissertation: Results vs Discussion vs Conclusion

Results and discussion sections of a dissertation are often confused with one another, and sometimes both of these parts are mixed up with a conclusion for a thesis. Figure out what is covered in each of these important chapters; your readers should see that you understand how different they are. A clear understanding of the differences between the results, discussion, and conclusion chapters will help you write your dissertation more effectively.

Want to figure out the actual difference between the discussion and the conclusion? Check out our helpful articles about the Dissertation Discussion or Dissertation Conclusion.

Present Your Findings When Writing Results Section of Dissertation

Now it's time to understand how to arrange the results section of the dissertation. First, present the most general findings, then narrow them down to more specific ones. Describe both qualitative & quantitative results. For example, imagine you are comparing the behavior of hamsters and mice. First, say a few words about the behavioral type of mammals that you studied. Then, mention rodents in general. Finally, describe the specific species of animals you carried out the experiment on.

Qualitative Results Section in Dissertation

In your dissertation results section, qualitative data may not be directly related to specific sub-questions or hypotheses. You can structure this chapter around the main issues that arise when analyzing the data. For each question, make a general observation of what the data show. For example, you may report recurring agreements or disagreements, patterns, and trends. Personal answers are the basis of your research, so clarify and support these views with direct quotes. Add more information to the thesis appendix if needed.

Quantitative Results Section in a Dissertation

The easiest way to write a quantitative dissertation results section is to build it around the sub-questions or hypotheses of your research. For each sub-question, provide the relevant results and include a statistical analysis. Then briefly evaluate their importance & reliability. Note how each result relates to the problem or whether it supports the hypothesis. Focus on key trends, differences, and relationships between the data, but don't speculate about their meaning or consequences; this belongs in the discussion or conclusion section. If some results are not directly related to answering your questions, or if additional information would help readers understand how you collected the data, you can include it in the appendix. It is often helpful to include visual elements such as graphs, charts, and tables, but only if they accurately support your results and add value.
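To make this concrete, here is a minimal sketch of how the numbers for one such sub-question might be prepared, assuming Python with pandas and SciPy is available; the file name, the column names ("group", "score"), and the two-group comparison are purely illustrative, not a prescribed workflow:

```python
# Minimal sketch: descriptive statistics plus one significance test for a
# single hypothetical sub-question ("Do groups A and B differ in mean score?").
# The CSV file and column names are invented for illustration.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_results.csv")  # hypothetical data file

# Descriptive statistics per group (mean, standard deviation, sample size)
summary = df.groupby("group")["score"].agg(["mean", "std", "count"])
print(summary.round(2))

# Inferential statistics: Welch's t-test (does not assume equal variances)
a = df.loc[df["group"] == "A", "score"]
b = df.loc[df["group"] == "B", "score"]
t_stat, p_value = stats.ttest_ind(a, b, equal_var=False)

# The factual sentence you would then report, without interpretation
print(f"Group A (M = {a.mean():.2f}, SD = {a.std():.2f}) and "
      f"Group B (M = {b.mean():.2f}, SD = {b.std():.2f}) were compared, "
      f"t = {t_stat:.2f}, p = {p_value:.3f}.")
```

The point is simply that each reported figure maps onto one sub-question or hypothesis; what the test means still belongs in the discussion chapter.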

Tables and Figures in Results Section in Dissertation

We recommend you use tables or figures in the dissertation results section correctly. Such visual presentation can convey complex data concisely and clearly, allowing readers to quickly gain a statistical overview. On the contrary, poorly designed graphs can confuse readers and reduce the effectiveness of your article. Here are our recommendations to help you understand how to use tables and figures (a short code sketch after this list illustrates the first two points):

  • Make sure tables and figures are self-explanatory. Sometimes, your readers may look at tables and figures before reading the entire text. So they should make sense as separate elements.
  • Do not repeat the content of tables and figures in text. Text can be used to highlight key points from tables and figures. But do not repeat every element.
  • Make sure that values or information in tables and text are consistent. Make sure that abbreviations, group names, interpretations are the same as in text.
  • Use clear, informative titles for tables and figures. Do not leave any table or figure without a title or legend. Otherwise, readers will not be able to understand data's meaning. Also, make sure column names, labels, figures are understandable.
  • Check accuracy of data presented in tables and figures. Always double-check tables and figures to make sure numbers converge.
  • Tables should not contain redundant information. Make sure tables in the article are not too crowded. If you need to provide extensive data, use Appendixes.
  • Make sure images are clear. Make sure images and all parts of drawings are precise. Lettering should be in a standard font and legible against the background of the picture.
  • Ask for permission to use illustrations. If you use illustrations, be sure to ask copyright holders and indicate them.
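As noted above, here is a minimal sketch of the first two recommendations in practice, assuming Python with pandas; the table number, group names, and values are invented for illustration:

```python
# Minimal sketch: a self-explanatory summary table.
# All numbers and labels are invented for illustration.
import pandas as pd

table_2 = pd.DataFrame(
    {
        "Group": ["Control", "Intervention"],
        "n": [48, 52],
        "Mean response time (ms)": [512.4, 463.1],  # units stated in the header
        "SD (ms)": [88.2, 79.5],
    }
)

# A numbered, descriptive title so the table makes sense on its own
print("Table 2. Response times by experimental group (N = 100).")
print(table_2.to_string(index=False))
```

Because the title states what is measured, for whom, and in which units, the table can be read on its own, and the body text only needs to highlight the key comparison rather than repeat every value.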

Tips on How to Write a Results Section

We have prepared several tips on how to write the results section of the dissertation! Present the data collected during the study objectively, logically, and concisely. Highlight the most important results and organize them into specific sections. This is an excellent way to show that you have covered all the descriptive information you need. Correct use of visual elements helps your readers understand your findings. So, follow these three main rules for writing this part:

  • State only actual results. Leave explanations and comments for Discussion.
  • Use text, tables, and pictures to orderly highlight key results.
  • Make sure that contents of tables and figures are not repeated in text.

In case you have questions about a conceptual framework in research, you will find a blog dedicated to this issue in our database.

What to Avoid When Writing the Results Section of a Dissertation

Here we will discuss how NOT to write the results section of a dissertation. Or simply, what points to avoid:

  • Do not make your research too complicated. Your paper, tables, and graphs should be clearly labeled and follow a logical order, so that they can stand on their own without further explanation.
  • Do not include raw data. Remember, you are summarizing relevant results, not reporting them in detail. This chapter should briefly summarize your findings. Avoid complete introduction to each number and calculation.
  • Do not hide errors or contradictory results. Explain these errors and contradictions in your conclusions; they often arise when different research methods have been used.
  • Do not write a conclusion or discussion. Instead, this part should contain a summary of your findings.
  • Do not include explanations of or inferences from your results. Such an approach can make this chapter subjective, unclear, and confusing to the reader.
  • Do not forget about novelty. A lack of novelty is one of the main reasons papers are rejected.

Dissertation Results Section Example

Let's take a look at a good example of a dissertation results section. Remember that this part presents the fundamental research you've done in detail, so it has to be clear and concise, as you can see in the sample.

[Sample dissertation results section]

Final Thoughts on Writing Results Section of Dissertation

When writing the results section of a dissertation, highlight your findings with data. The chapter's main task is to convince the reader that your conclusions are valid. You should not overload the text with overly detailed information, and never use words whose meanings you do not understand. At the same time, oversimplification may seem unconvincing to readers. On the other hand, writing this part can even be fun: you can see your study results directly, and you will interpret them later. So keep going, and we wish you courage!


Grad Coach

How To Write The Results/Findings Chapter

For qualitative studies (dissertations & theses).

By: Jenna Crossley (PhD Cand). Expert Reviewed By: Dr. Eunice Rautenbach | August 2021

So, you’ve collected and analysed your qualitative data, and it’s time to write up your results chapter – exciting! But where do you start? In this post, we’ll guide you through the qualitative results chapter (also called the findings chapter), step by step.  

Overview: Qualitative Results Chapter

  • What (exactly) the qualitative results chapter is
  • What to include in your results chapter
  • How to write up your results chapter
  • A few tips and tricks to help you along the way

What exactly is the results chapter?

The results chapter in a dissertation or thesis (or any formal academic research piece) is where you objectively and neutrally present the findings of your qualitative analysis (or analyses if you used multiple qualitative analysis methods). This chapter can sometimes be combined with the discussion chapter (where you interpret the data and discuss its meaning), depending on your university's preference. We'll treat the two chapters as separate, as that's the most common approach.

In contrast to a quantitative results chapter that presents numbers and statistics, a qualitative results chapter presents data primarily in the form of words. But this doesn't mean that a qualitative study can't have quantitative elements – you could, for example, present the number of times a theme or topic pops up in your data, depending on the analysis method(s) you adopt.

Adding a quantitative element to your study can add some rigour, which strengthens your results by providing more evidence for your claims. This is particularly common when using qualitative content analysis. Keep in mind, though, that qualitative research aims to achieve depth and richness and to identify nuances, so don't get tunnel vision by focusing on the numbers. They're just the cream on top in a qualitative analysis.

So, to recap, the results chapter is where you objectively present the findings of your analysis, without interpreting them (you'll save that for the discussion chapter). With that out of the way, let's take a look at what you should include in your results chapter.

Only present the results, don't interpret them

What should you include in the results chapter?

As we've mentioned, your qualitative results chapter should purely present and describe your results, not interpret them in relation to the existing literature or your research questions. Any speculations or discussion about the implications of your findings should be reserved for your discussion chapter.

In your results chapter, you’ll want to talk about your analysis findings and whether or not they support your hypotheses (if you have any). Naturally, the exact contents of your results chapter will depend on which qualitative analysis method (or methods) you use. For example, if you were to use thematic analysis, you’d detail the themes identified in your analysis, using extracts from the transcripts or text to support your claims.

While you do need to present your analysis findings in some detail, you should avoid dumping large amounts of raw data in this chapter. Instead, focus on presenting the key findings and using a handful of select quotes or text extracts to support each finding. The reams of data and analysis can be relegated to your appendices.

While it's tempting to include every last detail you found in your qualitative analysis, it is important to make sure that you report only that which is relevant to your research aims, objectives and research questions. Always keep these three components, as well as your hypotheses (if you have any), front of mind when writing the chapter and use them as a filter to decide what's relevant and what's not.


How do I write the results chapter?

Now that we’ve covered the basics, it’s time to look at how to structure your chapter. Broadly speaking, the results chapter needs to contain three core components – the introduction, the body and the concluding summary. Let’s take a look at each of these.

Section 1: Introduction

The first step is to craft a brief introduction to the chapter. This intro is vital as it provides some context for your findings. In your introduction, you should begin by reiterating your problem statement and research questions and highlighting the purpose of your research. Make sure that you spell this out for the reader so that the rest of your chapter is well contextualised.

The next step is to briefly outline the structure of your results chapter. In other words, explain what's included in the chapter and what the reader can expect. In the results chapter, you want to tell a story that is coherent, flows logically, and is easy to follow, so make sure that you plan your structure out well and convey that structure (at a high level), so that your reader is well oriented.

The introduction section shouldn’t be lengthy. Two or three short paragraphs should be more than adequate. It is merely an introduction and overview, not a summary of the chapter.

Pro Tip – To help you structure your chapter, it can be useful to set up an initial draft with (sub)section headings so that you’re able to easily (re)arrange parts of your chapter. This will also help your reader to follow your results and give your chapter some coherence.  Be sure to use level-based heading styles (e.g. Heading 1, 2, 3 styles) to help the reader differentiate between levels visually. You can find these options in Word (example below).

Heading styles in the results chapter

Section 2: Body

Before we get started on what to include in the body of your chapter, it's vital to remember that a results section should be completely objective and descriptive, not interpretive. So, be careful not to use words such as “suggests” or “implies”, as these usually accompany some form of interpretation – that's reserved for your discussion chapter.

The structure of your body section is very important, so make sure that you plan it out well. When planning out your qualitative results chapter, create sections and subsections so that you can maintain the flow of the story you're trying to tell. Be sure to systematically and consistently describe each portion of results. Try to adopt a standardised structure for each portion so that you achieve a high level of consistency throughout the chapter.

For qualitative studies, results chapters tend to be structured according to themes, which makes it easier for readers to follow. However, keep in mind that not all results chapters have to be structured in this manner. For example, if you're conducting a longitudinal study, you may want to structure your chapter chronologically. Similarly, you might structure this chapter based on your theoretical framework. The exact structure of your chapter will depend on the nature of your study, especially your research questions.

As you work through the body of your chapter, make sure that you use quotes to substantiate every one of your claims. You can present these quotes in italics to differentiate them from your own words. A general rule of thumb is to use at least two pieces of evidence per claim, and these should be linked directly to your data. Also, remember that you need to include all relevant results, not just the ones that support your assumptions or initial leanings.

In addition to including quotes, you can also link your claims to the data by using appendices, which you should reference throughout your text. When you reference, make sure that you include both the name/number of the appendix, as well as the line(s) from which you drew your data.

As referencing styles can vary greatly, be sure to look up the appendix referencing conventions of your university's prescribed style (e.g. APA, Harvard, etc.) and keep this consistent throughout your chapter.

Consistency is key

Section 3: Concluding summary

The concluding summary is very important because it summarises your key findings and lays the foundation for the discussion chapter. Keep in mind that some readers may skip directly to this section (from the introduction section), so make sure that it can be read and understood well in isolation.

In this section, you need to remind the reader of the key findings. That is, the results that directly relate to your research questions and that you will build upon in your discussion chapter. Remember, your reader has digested a lot of information in this chapter, so you need to use this section to remind them of the most important takeaways.

Importantly, the concluding summary should not present any new information and should only describe what you’ve already presented in your chapter. Keep it concise – you’re not summarising the whole chapter, just the essentials.

Tips and tricks for an A-grade results chapter

Now that you’ve got a clear picture of what the qualitative results chapter is all about, here are some quick tips and reminders to help you craft a high-quality chapter:

  • Your results chapter should be written in the past tense. You've done the work already, so you want to tell the reader what you found, not what you are currently finding.
  • Make sure that you review your work multiple times and check that every claim is adequately backed up by evidence. Aim for at least two examples per claim, and make use of an appendix to reference these.
  • When writing up your results, make sure that you stick to only what is relevant. Don't waste time on data that are not relevant to your research objectives and research questions.
  • Use headings and subheadings to create an intuitive, easy-to-follow piece of writing. Make use of Microsoft Word's "heading styles" and be sure to use them consistently.
  • When referring to numerical data, tables and figures can provide a useful visual aid. When using these, make sure that they can be read and understood independently of your body text (i.e. that they can stand alone). To this end, use clear, concise labels for each of your tables or figures, and use colour to code differences or hierarchy (see the sketch after this list).
  • Similarly, when you're writing up your chapter, it can be useful to highlight topics and themes in different colours. This can help you to differentiate between your data if you get a bit overwhelmed and will also help you to ensure that your results flow logically and coherently.
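As referenced in the list above, here is a minimal sketch of a stand-alone figure, assuming Python with matplotlib; the theme labels, counts, and file name are invented for illustration:

```python
# Minimal sketch: a stand-alone figure with a numbered title, labelled axes,
# and a legend. Theme names and counts are invented for illustration.
import matplotlib.pyplot as plt

themes = ["Theme A", "Theme B", "Theme C", "Theme D"]
interview_mentions = [14, 11, 9, 7]
diary_mentions = [6, 9, 4, 5]

x = range(len(themes))
width = 0.4

fig, ax = plt.subplots(figsize=(8, 4))
ax.bar([i - width / 2 for i in x], interview_mentions, width,
       label="Interviews", color="#4c72b0")
ax.bar([i + width / 2 for i in x], diary_mentions, width,
       label="Diaries", color="#dd8452")

ax.set_title("Figure 3. Number of participants mentioning each theme, by data source")
ax.set_xticks(list(x))
ax.set_xticklabels(themes)
ax.set_ylabel("Participants (n)")
ax.legend(title="Data source")

fig.tight_layout()
fig.savefig("figure_3_theme_mentions.png", dpi=300)
```

With a numbered title, labelled axes, and a legend, the figure can be understood without the body text, and colour is used only to distinguish the two data sources.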

If you have any questions, leave a comment below and we’ll do our best to help. If you’d like 1-on-1 help with your results chapter (or any chapter of your dissertation or thesis), check out our private dissertation coaching service here or book a free initial consultation to discuss how we can help you.



How to Visualize Your Qualitative User Research Results for Maximum Impact

When thinking about visualization of research results, many people will automatically have an image of a graph in mind. Do you have that image, too? You would be right in thinking that many research results benefit from a graph-like visualization, showing trends and anomalies. But this is mainly true for results from quantitative user research . Graphs are often not the best way to communicate the results from qualitative user research methods such as interviews or observations. Frequently, the number of participants in these types of studies is too low to create meaningful graphs. Moreover, the insights you will want to communicate sometimes don’t translate to a clean number. Let’s show you how to visualize more subjective and fuzzy data from qualitative user research methods, in a way that communicates the essential insights to other stakeholders , so they don’t have to plow through voluminous research reports.

“The purpose of visualization is insight, not pictures.” — Ben Shneiderman, Distinguished university professor in computer science

When you’re sharing results from qualitative user research efforts, you’re most likely focusing on creating an understanding for the lives people lead, the tasks that they need to fulfill, and the interactions they must effect so as to achieve what they need or want to do. This holds true whether you’re using the research in the beginning phases of a design process (getting to know what to design), or using it in the final stages (understanding how well a design is meeting its targets). Depending on the people you’re communicating with (such as your design team or a client) and the type of understanding you need them to have (in other words, a deep empathy for the user needs or a global feeling for the context in which a product will be used), you need to determine what type of visualization suits your results best.

Imagine that you’ve conducted several interviews with people from your target group: overworked and worried informal caregivers of seniors with early signs of dementia. They have shared some essential information with you, regarding the fears they have about a new product that’s supposed to help them be more independent in the care they provide to their loved ones. You used a thematic analysis technique with lots of Post-it notes to make sense of the data, and you found four categories of fears that are relevant to consider when designing the new product: changes in the relationship, a constant feeling of worrying, lack of competencies, and lack of personal time. You need to share your insights with your design team—so that everyone is on the same page and continues the design process with the same level of empathy for this fragile target group. Also, you need to communicate these insights to your clients: the management team of a healthcare organization. They are hoping to engage informal caregivers more in the care process, since they need to reorganize their budgets and unburden their employees. How would you go about communicating the results that you found? Would you simply give them that short list of four fears? Would you give them a pie diagram, showing how often a certain category of fears was mentioned in the interviews? We would argue that this does not lead to the deep understanding you’re aiming for. A list is not immersive enough to trigger any type of empathy. Here, we’ll show you three ways of visualizing your results that are much more effective.

Affinity Diagram

By using Post-it notes for the thematic analysis technique to come to your conclusions on the four main fears that your target group struggles with, you’ve already used a visualization method that we would recommend: an affinity diagram. You have taken quotes and notes from the interviews and have written each of them on a separate Post-it. Then, you started to reorganize them according to similarities, creating themes as you went along. There’s a tremendous amount of information present in the diagram you’ve created as an analysis tool. However, you will need to clean up this diagram so that it better reflects the insights you want to communicate.

You can quickly decide that the categories should reflect the four main fears that you discovered. You then need to ask yourself what pieces of information will help your fellow designers and your client understand what these fears entail. What impact do they have on your users’ lives? When is this fear most prominent? What triggers this fear? Do you have some insight into what can reduce this fear? All this information will already be present in the Post-it notes you collected within a theme. Now you simply have to filter out the most important ones, and present them in a clear and visually appealing way to accommodate the people you’re communicating this to. You can use quotes or keywords, and—if you happen to have made some observations as well—illustrate them with pictures or drawings. The image below shows what an affinity diagram for this purpose could look like.

4 sticky notes about the most common fears. It includes quotes from participants, classified into 4 categories: Changes in the relationship, a constant feeling of worrying, a lack of competencies, and a lack of personal time.

© Teo Yu Siang and Interaction Design Foundation, CC BY-NC-SA 3.0

Want to learn more about how to create an affinity diagram? Read our article “Affinity Diagrams – Learn How to Cluster and Bundle Ideas and Facts”, or download our affinity diagram template below:

Affinity Diagrams

Empathy Map

An empathy map is a great way to create a clear overview of four major areas that we as designers should focus on so as to gain empathy for our target group: what people said, did, thought, and felt. This is also very relevant for our client in the case of informal caregivers—the management team of a healthcare organization—as they might have some preconceptions based on the usual interactions they have with the target group. The empathy map has the potential to trigger discussion within that management team, and force them to admit that they often have to adjust their perspective. In healthcare (but this holds true for many other contexts as well), professionals feel that they can speak for the patient or their family, as their main job is to take care of them. They tend to forget that they only have a limited view on their lives, and therefore might not understand all their needs as well as they would need for a design process.

To create an empathy map based on the findings from your interviews, you go through the notes and other materials that you have from your qualitative user research. For each quadrant—or each focus area—you select the relevant quotes and images, or you synthesize the appropriate insights based on them. As you can see in the image below, the resulting empathy map draws on the same data as the affinity diagram we created before, but communicates different insights. Both visualizations can be relevant in our design case.

An example of an empathy map with participant quotes and the researcher's observations and notes.

Want to learn more about how to create an empathy map? Read our article “Empathy Map – Why and How to Use It”, or download our empathy map template below:

Empathy Map

User Journey Map

Let’s revisit the design case we used to illustrate how to visualize your qualitative user research results. You’re creating a new product to help informal caregivers of seniors with mild dementia symptoms to be more independent in the care that they provide. Your client is the management team of the healthcare organization involved with these seniors. One of the subjects that you are likely to have focused on during your user research is the context in which informal caregivers provide their care. You might have asked yourself questions like: Which tasks do they perform? When do they perform these tasks? What other activities do they have before and after performing these tasks? How do they feel while providing care to their loved ones? What is relevant in your results is not only the straightforward answers to these questions but also the flow that they create throughout the lives of these informal caregivers. For example, it’s important to know whether the care they provide can be planned well in advance, or if people are often disrupted by other activities. A very powerful way to communicate this flow over time involves making a user journey map.

The user journey map you see in the image below shows a period of one day. You can choose this period according to what makes sense in your project; sometimes a week or a month would be more appropriate. You can map out the steps involved in taking care of a senior throughout a typical day, by creating separate paths for the doing, thinking, and feeling elements that you also used in your empathy map. Furthermore, you should indicate any touchpoints with the current service provided by the healthcare organization, or any other entity involved. Focus on showing the motion of a user through the different touchpoints across the day, and how the user feels about each interaction on that journey. Ultimately, you should be able to communicate to your team and the client which interactions should change, disappear, or be introduced.

A user journey map outlining the steps taken by a caregiver: Call the doctor's office to make an appointment for his mother; Checks phone to see if there are any missed calls; Gets phone call from mother while in meeting to ask when he will visit again; Stops by the pharmacy on his way to his mother; Visits his mom to talk and warm up a frozen dinner he prepared over the weekend; and Calls his mother to say goodnight before going to bed.

Want to learn more about how to create a user journey map? Read our article “Customer Journey Maps – Walking a Mile in Your Customer’s Shoes” here.

The Take Away

Information visualization is a powerful technique to communicate the results from qualitative user research to your fellow designers or the client. There are three types of visualizations you could use. Affinity diagrams resemble your data analysis outcomes most, but you must rework them to provide more clarity to the people who need to understand the insights. Empathy maps give your audience a great overview of four relevant areas of user understanding: what people say, do, think, and feel. Finally, user journey maps introduce user flow over time. You can use these three visualizations side by side to elicit the deep feeling of empathy that will bring your design project to the next level.

References & Where to Learn More

Read this insightful article on how to use affinity diagrams to collaboratively clarify fuzzy data here.

Learn more about empathy maps here.

For more on journey mapping and how it is done in the industry, refer to this research here.




How To Present Your Market Research Results And Reports In An Efficient Way

Market research reports blog by datapine

Table of Contents

1) What Is A Market Research Report?

2) Market Research Reports Examples

3) Why Do You Need Market Research Reports

4) How To Make A Market Research Report?

5) Types Of Market Research Reports

6) Challenges & Mistakes Market Research Reports

Market research analyses are the go-to solution for many professionals, and for good reason: they save time, offer fresh insights, and provide clarity on your business. In turn, market research reports will help you to refine and polish your strategy. Plus, a well-crafted report will give your work more credibility while adding weight to any marketing recommendations you offer a client or executive.

But, while this is the case, today’s business world still lacks a way to present market-based research results efficiently. The static, antiquated nature of PowerPoint makes it a bad choice for presenting research discoveries, yet it is still widely used to present results. 

Fortunately, things are moving in the right direction. There are online data visualization tools that make it easy and fast to build powerful market research dashboards. They come in handy to manage the outcomes, but also the most important aspect of any analysis: the presentation of said outcomes, without which it becomes hard to make accurate, sound decisions. 

Here, we consider the benefits of conducting research analyses while looking at how to write and present market research reports, exploring their value, and, ultimately, getting the very most from your research results by using professional market research software .

Let’s get started.

What Is a Market Research Report?

A market research report is an online reporting tool used to analyze the public perception or viability of a company, product, or service. These reports contain valuable and digestible information like customer survey responses and social, economic, and geographical insights.

On a typical market research results example, you can interact with valuable trends and gain insight into consumer behavior and visualizations that will empower you to conduct effective competitor analysis. Rather than adding streams of tenuous data to a static spreadsheet, a full market research report template brings the outcomes of market-driven research to life, giving users a data analysis tool to create actionable strategies from a range of consumer-driven insights.

With digital market analysis reports, you can make your business more intelligent and more efficient, and, ultimately, meet the needs of your target audience head-on. This, in turn, will accelerate your commercial success significantly.


How To Present Your Results: 4 Essential Market Research Report Templates

When it comes to sharing rafts of information, research dashboards are invaluable.

Any market analysis report example worth its salt will allow everyone to get a firm grip on their results and discoveries on a single page with ease. These dynamic online dashboards also boast interactive features that empower the user to drill down deep into specific pockets of information while changing demographic parameters, including gender, age, and region, filtering the results swiftly to focus on the most relevant insights for the task at hand.

These four market research report examples are different but equally essential and cover key elements required for market survey report success. You can also modify each and use it as a client dashboard .

While there are numerous types of dashboards that you can choose from to adjust and optimize your results, we have selected the top 3 that will tell you more about the story behind them. Let’s take a closer look.

1. Market Research Report: Brand Analysis

Our first example shares the results of a brand study. To do so, a survey was performed on a sample of 1333 people, information that we can see in detail on the left side of the dashboard, summarizing the gender, age groups, and geolocation.

Market research report on a brand analysis showing the sample information, brand awareness, top 5 branding themes, etc.


At the dashboard's center, we can see the market-driven research discoveries concerning brand awareness, both with and without help (aided and unaided), as well as theme and celebrity suggestions, to know which image the audience associates with the brand.

Such dashboards are extremely convenient for sharing the most important information in a snapshot. Besides being interactive (which cannot be seen in a static image), they make it even easier to filter the results according to certain criteria without producing dozens of PowerPoint slides. For instance, I could easily filter the report by choosing only the female answers, only the people aged between 25 and 34, or only the 25-34 males if that is my target audience.

Primary KPIs:

a) Unaided Brand Awareness

The first market research KPI in this report example comes in the form of unaided brand awareness. Presented in a logical line-style chart, this market study report sample KPI is invaluable, as it gives you a clear-cut insight into how readily people associate your brand with its niche.

Unaided brand awareness answering the question: When you think about outdoor gear products - what brands come to your mind? The depicted sample size is 1333.

As you can see from our example, based on a specific survey question, you can see how your brand stacks up against your competitors regarding awareness. Based on these outcomes, you can formulate strategies to help you stand out more in your sector and, ultimately, expand your audience.

b) Aided Brand Awareness

This market survey report sample KPI focuses on aided brand awareness. This visualization offers a great deal of insight into which brands come to mind in certain niches or categories; here, you will find out which campaigns and messaging your target consumers are paying attention to and engaging with.

Aided brand awareness answering the question: Have you heard of the following brands? - The sample size is 1333 people.

By gaining access to this level of insight, you can conduct effective competitor research and gain valuable inspiration for your products, promotional campaigns, and marketing messages.

c) Brand image

Market research results on the brand image and categorized into 5 different levels of answering: totally agree, agree, maybe, disagree, and totally disagree.

When it comes to research reporting, understanding how others perceive your brand is one of the most golden pieces of information you could acquire. If you know how people feel about your brand image, you can take informed and very specific actions that will enhance the way people view and interact with your business.

By asking a focused question, this visual of KPIs will give you a definitive idea of whether respondents agree, disagree, or are undecided on particular descriptions or perceptions related to your brand image. If you’re looking to present yourself and your message in a certain way (reliable, charming, spirited, etc.), you can see how you stack up against the competition and find out if you need to tweak your imagery or tone of voice - invaluable information for any modern business.

d) Celebrity analysis

Market research report example of a celebrity analysis for a brand

This indicator is a powerful part of the research KPI dashboard above, as it will give you a direct insight into the celebrities, influencers, or public figures that your most valued consumers consider when thinking about (or interacting with) your brand.

Displayed in a digestible bar chart-style format, this useful metric will not only give you a solid idea of how your brand messaging is perceived by consumers (depending on the type of celebrity they associate with your brand) but also guide you on which celebrities or influencers you should contact.

By working with the right influencers in your niche, you will boost the impact and reach of your marketing campaigns significantly, improving your commercial awareness in the process. And this is the KPI that will make it happen.

2. Market Research Results On Customer Satisfaction

Here, we have some of the most important data a company should care about: their already-existing customers and their perception of their relationship with the brand. This is crucial, given that it is five times more expensive to acquire a new customer than to retain an existing one.

Market research report example on customers' satisfaction with a brand

This is why tracking metrics like the customer effort score or the net promoter score (how likely consumers are to recommend your products and services) is essential, especially over time. You need to improve these scores to have happy customers who will always have a much bigger impact on their friends and relatives than any of your amazing ad campaigns. Looking at other satisfaction indicators like the quality, pricing, and design, or the service they received is also a best practice: you want a global view of your performance regarding customer satisfaction metrics .

Such research results reports are a great tool for managers who do not have much time and therefore need to use it effectively. Thanks to these dashboards, they can check the data for long-running projects at any time.

Primary KPIs:

a) Net Promoter Score (NPS)

Another pivotal part of any informative research presentation is your NPS score, which will tell you how likely a customer is to recommend your brand to their peers.

The net promoter score is shown on a gauge chart by asking the question: on a scale of 1-10, how likely is it that you would recommend our service to a friend?

Centered on overall customer satisfaction, your NPS Score can cover the functions and output of many departments, including marketing, sales, and customer service, but also serve as a building block for a call center dashboard . When you’re considering how to present your research effectively, this balanced KPI offers a masterclass. It’s logical, it has a cohesive color scheme, and it offers access to vital information at a swift glance. With an NPS Score, customers are split into three categories: promoters (those scoring your service 9 or 10), passives (those scoring your service 7 or 8), and detractors (those scoring your service 0 to 6). The aim of the game is to gain more promoters. By gaining an accurate snapshot of your NPS Score, you can create intelligent strategies that will boost your results over time.
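For reference, the NPS itself is simply the percentage of promoters minus the percentage of detractors. A minimal sketch of that calculation (Python; the survey scores are invented for illustration) looks like this:

```python
# Minimal sketch: computing a Net Promoter Score from 0-10 survey answers.
# The scores below are invented for illustration.
scores = [10, 9, 9, 8, 7, 7, 6, 10, 9, 3, 8, 10, 5, 9, 7, 10, 2, 9, 8, 6]

promoters  = sum(1 for s in scores if s >= 9)       # 9 or 10
passives   = sum(1 for s in scores if 7 <= s <= 8)  # 7 or 8
detractors = sum(1 for s in scores if s <= 6)       # 0 to 6

nps = (promoters - detractors) / len(scores) * 100  # % promoters - % detractors
print(f"Promoters: {promoters}, Passives: {passives}, Detractors: {detractors}")
print(f"NPS = {nps:.0f}")
```

In this invented sample, 9 promoters and 5 detractors out of 20 responses give an NPS of 20.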

b) Customer Satisfaction Score (CSAT)

The next in our examples of market research reports KPIs comes in the form of the CSAT. The vast majority of consumers that have a bad experience will not return. Honing in on your CSAT is essential if you want to keep your audience happy and encourage long-term consumer loyalty.

Visual representation of a customer satisfaction score (CSAT) metric

This magnificent full-report KPI shows how satisfied customers are with specific elements of your products or services. Getting to grips with these scores will allow you to pinpoint very specific issues while capitalizing on your existing strengths. As a result, you can take measures to improve your CSAT score while sharing positive testimonials on your social media platforms and website to build trust.
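As a rough illustration of how element-level satisfaction scores like these can be computed, the sketch below uses hypothetical data and the common (but here assumed) convention that ratings of 4 or 5 on a five-point scale count as satisfied.

```python
# Hypothetical sketch: per-element CSAT on a 1-5 scale.
# A response counts as "satisfied" when the rating is 4 or 5 (a common convention,
# not something prescribed by the report above).
from collections import defaultdict

ratings = [  # (element, rating) pairs - illustration data only
    ("quality", 5), ("quality", 4), ("quality", 2),
    ("pricing", 3), ("pricing", 4),
    ("design", 5), ("design", 5), ("design", 3),
]

totals, satisfied = defaultdict(int), defaultdict(int)
for element, rating in ratings:
    totals[element] += 1
    if rating >= 4:
        satisfied[element] += 1

for element, count in totals.items():
    csat = 100 * satisfied[element] / count
    print(f"{element}: CSAT {csat:.0f}% ({satisfied[element]}/{count} satisfied)")
```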

c) Customer Effort Score (CES)

When it comes to presenting research findings, keeping track of your CES score is essential. The CES KPI gives you instant access to information on how easy or difficult it is for your audience to interact with or discover your company, based on a simple scale of one to ten.

The customer effort score (CES) helps you figure out how easy and fast it is to do business with your company, according to your customers

By getting a clear-cut gauge of how your customers experience engagement with your brand, you can iron out any weaknesses in your user experience (UX) offerings while spotting any friction, bottlenecks, or misleading messaging. In doing so, you can improve your CES score, satisfy your audience, and boost your bottom line.

3. Market Research Results On Product Innovation

This final market-driven research example report focuses on the product itself and its innovation. It is a useful report for future product development and market potential, as well as pricing decisions.

Market research results report on product innovation, useful for product development and pricing decisions

Using the same sample of surveyed people as the first market-focused analytical report, respondents answer questions about their potential usage and purchase of the product in question. This provides good primary feedback on how the market would receive the new product you plan to launch. Then comes willingness to pay, which helps set a price range that is neither too cheap to be trusted nor too expensive for what it offers. That will be the main input for your pricing strategy.

a) Usage Intention

The first of our product innovation KPI-based examples comes in the form of usage intention. When you’re considering how to write a market research report, including metrics centered on consumer intent is critical.

This market analysis report shows usage intention: 41% of the target group would use a product of the newest generation in comparison to competing or older products

This simple yet effective visualization will allow you to understand not only how users see your product but also whether they prefer previous models or competitor versions. While you shouldn’t base all of your product-based research on this KPI, it is very valuable, and you should use it to your advantage frequently.

b) Purchase Intention

Another aspect to consider when looking at how to present market research data is your audience’s willingness or motivation to purchase your product. Offering percentage-based information, this effective KPI provides a wealth of at-a-glance information to help you make accurate forecasts centered on your product and service offerings.

The purchase intention chart shows the likelihood of buying a product as a percentage

Analyzing this information regularly will give you the confidence and direction to develop strategies that will steer you to a more prosperous future, meeting the ever-changing needs of your audience on an ongoing basis.

c) Willingness To Pay (WTP)

Willingness to pay is depicted on a pie chart with additional explanations of the results

Our final market research example KPI is based on how willing customers are to pay for a particular service or product based on a specific set of parameters. This dynamic visualization, represented in an easy-to-follow pie chart, will allow you to realign the value of your product (USPs, functions, etc.) while setting price points that are most likely to result in conversions. This is a market research presentation template that every modern organization should use to its advantage.
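If you want to translate raw willingness-to-pay answers into a first candidate price range, one simple approach (an assumption for illustration, not a method prescribed by this report) is to look at the middle of the stated-price distribution, as in the sketch below with hypothetical responses.

```python
# Hypothetical sketch: deriving a candidate price range from willingness-to-pay answers.
import statistics

stated_prices = [19, 25, 29, 35, 22, 27, 31, 24, 28, 40]  # illustration data only

q1, median, q3 = statistics.quantiles(stated_prices, n=4)  # quartiles of the distribution
print(f"Candidate price range (interquartile): {q1:.2f} - {q3:.2f}")
print(f"Median stated willingness to pay: {median:.2f}")
```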

4. Market Research Report On Customer Demographics 

This particular example of a market research report, generated with a modern dashboard creator, is a powerful tool, as it displays a cohesive mix of key demographic information in one intuitive space.

Market research reports example for a customer demographics study

By breaking down these deep pockets of consumer-centric information, you can gain the power to develop more impactful customer communications while personalizing every aspect of your target audience’s journey across every channel or touchpoint. As a result, you can transform theoretical insights into actionable strategies that will result in significant commercial growth. 

Every section of this responsive marketing research report works in unison to build a profile of your core audience in a way that will guide your company’s consumer-facing strategies with confidence. With in-depth visuals based on gender, education level, and tech adoption, you have everything you need to speak directly to your audience at your fingertips.

Let’s look at the key performance indicators (KPIs) of this invaluable market research report example in more detail.

a) Customers by gender

Straightforward market research reports showing the number of customers by gender

This KPI is highly visual and offers a clear-cut representation of your company’s gender share over time. By gaining access to this vital information, you can deliver a more personalized experience to specific audience segments while ensuring your messaging is fair, engaging, and inclusive.

b) Customers by education level

Number of customers by education level as an example of a market research report metric

The next market analysis report template is a KPI that provides a logical breakdown of your customers’ level of education. By using this as a demographic marker, you can refine your products to suit the needs of your audience while crafting your content in a way that truly resonates with different customer groups.

c) Customers by technology adoption

Market research report template showing customers' technology adoption over the past 5 years

Particularly valuable if you’re a company that sells tech goods or services, this linear KPI will show you where your customers are in terms of technological know-how or usage. By getting to grips with this information over time, you can develop your products or services in a way that offers direct value to your consumers while making your launches or promotions as successful as possible.

d) Customer age groups

Number of customers by age group as a key demographic metric of a market research report

By understanding your customers’ age distribution in detail, you can gain a deep understanding of their preferences. And that’s exactly what this market research report sample KPI does. Presented in a bar chart format, this KPI will give you a full breakdown of your customers’ age ranges, allowing you to build detailed buyer personas and segment your audience effectively.

Why Do You Need Market Research Reports?

As the adage goes, "Look before you leap" – which is exactly what a research report is here for. Like the headlights of a car, it will show you the pitfalls and fast lanes on your road to success: the likes and dislikes of a specific market segment in a certain geographical area, their expectations, and their readiness. Among other things, a research report will let you:

  • Get a holistic view of the market: learn more about the target market and understand the various factors involved in buying decisions. A broader view of the market lets you benchmark against companies you do not usually focus on. This, in turn, will empower you to gather the industry data that counts most. This brings us to our next point.
  • Curate industry information with momentum: Whether you’re looking to rebrand, improve on an existing service, or launch a new product, time is of the essence. By working with the best market research reports created with modern BI reporting tools, you can visualize your discoveries and data, formatting them in a way that not only unearths hidden insights but also tells a story - a narrative that gives you a deeper level of understanding of your niche or industry. The features and functionality of a market analysis report will help you grasp the information that is most valuable to your organization, pushing you ahead of the pack in the process.
  • Validate internal research: Doing internal analysis is one thing, but double-checking with a third party also greatly helps you avoid being blinded by your own data.
  • Use actionable data and make informed decisions: Once you understand consumer behavior as well as the market, your competitors, and the issues that will affect the industry in the future, you are better armed to position your brand. Combining all of it with the quantitative data collected will allow for more successful product development. To learn more about different methods, we suggest you read our guide on data analysis techniques.
  • Strategic planning: Whether you want to map out big-picture organizational goals, plan new product development, prepare a geographic market expansion, or even a merger and acquisition – all of this strategic thinking needs solid foundations to meet the variety of challenges that come along.
  • Consistency across the board: Collecting, presenting, and analyzing your results in a way that’s smarter, more interactive, and more cohesive will ensure your customer communications, marketing campaigns, user journey, and offerings meet your audience’s needs consistently across the board. The result? Faster growth, increased customer loyalty, and more profit.
  • Better communication: The right market research analysis template (or templates) will empower everyone in the company with access to valuable information - the kind that is relevant and comprehensible. When everyone is moving to the beat of the same drum, they will collaborate more effectively and, ultimately, push the venture forward thanks to powerful online data analysis techniques.
  • Centralization: Building on the last point, using a powerful market research report template in the form of a business intelligence dashboard will make presenting your findings to external stakeholders and clients far more effective, as you can showcase a wealth of metrics, information, insights, and invaluable feedback from one centralized, highly visual interactive screen. 
  • Brand reputation: In the digital age, brand reputation is everything. By making vital improvements in all of the key areas above, you will meet your customers’ needs head-on with consistency while finding innovative ways to stand out from your competitors. These are the key ingredients of long-term success.

How To Present Market Research Analysis Results?

15 best practices and tips on how to present market research analysis results

Here we look at how you should present your research reports, step by step, to connect with the outcomes you need to succeed:

1. Collect your data:

As with any reporting process, you first and foremost need to collect the data you’ll use to conduct your studies. Businesses conduct research studies to analyze their brand awareness, identity, and influence in the market, as well as to inform product development and pricing decisions, among many other purposes. That said, there are many ways to collect information for a market research report. Among the most popular ones, we find:

  • Surveys: Probably the most common way to collect research data, surveys can come in the form of open or closed questions that can be answered anonymously. They are the cheapest and fastest way to collect insights about your customers and business. 
  • Interviews : These are face-to-face discussions that allow the researcher to analyze responses as well as the body language of the interviewees. This method is often used to define buyer personas by analyzing the subject's budget, job title, lifestyle, wants, and needs, among other things. 
  • Focus groups : This method involves a group of people discussing a topic with a mediator. It is often used to evaluate a new product or new feature or to answer a specific question that the researcher might have. 
  • Observation-based research : In this type of research, the researcher or business sits back and watches customers interact with the product without any instructions or help. It allows us to identify pain points as well as strong features. 
  • Market segmentation : This study allows you to identify and analyze potential market segments to target. Businesses use it to expand into new markets and audiences. 

These are just a few of the many ways in which you can gather your information. The important point is to keep the research objective as straightforward as possible. Supporting yourself with professional BI solutions to clean, manage, and present your insights is probably the smartest choice.

2. Hone in on your research:

When looking at how to source consumer research in a presentation, you should focus on two areas: primary and secondary research. Primary research comes from your internal data, monitoring existing organizational practices, the effectiveness of sales, and the tools used for communication, for instance. Primary research also assesses market competition by evaluating the company plans of the competitors. Secondary research focuses on existing data collected by a third party, information used to perform benchmarking and market analysis. Such metrics help in deciding which market segments are the ones the company should focus its efforts on or where the brand is standing in the minds of consumers. Before you start the reporting process, you should set your goals, segmenting your research into primary and secondary segments to get to grips with the kind of information you need to work with to achieve effective results.

3. Segment your customers:

To give your market research efforts more context, you should segment your customers into different groups according to the preferences outlined in the survey or feedback results or by examining behavioral or demographic data.

If you segment your customers, you can tailor your market research and analysis reports to display only the information, charts, or graphics that will provide actionable insights into their wants, needs, or industry-based pain points. 
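As a small illustration of what such a segmentation might look like in practice, the sketch below uses pandas with entirely hypothetical fields (age, purchases per year) and example rules; real segments would of course follow from your own survey, behavioral, or demographic data.

```python
# Hypothetical sketch: segmenting survey respondents by demographic and behavioral data.
import pandas as pd

respondents = pd.DataFrame({
    "customer_id": [1, 2, 3, 4, 5],
    "age": [23, 35, 41, 67, 29],
    "purchases_per_year": [12, 3, 8, 1, 20],
})

def assign_segment(row):
    # Example rules only - real segments should come from your own research goals.
    if row["purchases_per_year"] >= 10:
        return "frequent buyer"
    if row["age"] >= 60:
        return "senior occasional buyer"
    return "occasional buyer"

respondents["segment"] = respondents.apply(assign_segment, axis=1)
print(respondents.groupby("segment")["customer_id"].count())
```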

4. Identify your stakeholders:

Once you’ve drilled down into your results and segmented your consumer groups, it’s important to consider the key stakeholders within the organization that will benefit from your information the most. 

By looking at both internal and external stakeholders, you will give your results a path to effective presentation, gaining the tools to understand which areas of feedback or data are most valuable, as well as most redundant. As a consequence, you will ensure your results are concise and meet the exact information needs of every stakeholder involved in the process.

5. Set your KPIs:

First, remember that your reports should be concise and accurate - straight to the point without omitting any essential information. Work to ensure your insights are clean and organized, with participants grouped into relevant categories (demographics, profession, industry, education, etc.). Once you’ve organized your research, set your goals, and cleaned your data, you should set your KPIs to ensure your report is populated with the right visualizations to get the job done. Explore our full library of interactive KPI examples for inspiration.

6. Include competitor analysis:

Whether you are researching product innovation, customer demographics, pricing, or any other topic, including some level of insight about competitors in your reports is always recommended, as it can help your business or client better understand where they stand in the market. That being said, competitor analysis is not as easy as picking a list of companies in the same industry and listing them. Your main competitor may be just one division of a company in an entirely different industry. For example, Apple Music competes with Spotify even though Apple is a technology company. Therefore, it is important to analyze competitors carefully, at both a general and a detailed level.

Providing this kind of information in your reports can also help you find areas that competitors are not exploiting or that are weaker and use them to your advantage to become a market leader. 

7. Produce your summary:

To complement your previous efforts, it is advisable to write an executive summary of one or two pages that explains the general idea of the report. Then come the usual body sections:

  • An introduction providing background information, the target audience, and the objectives;
  • A qualitative research section describing the participants and why they are relevant to the business;
  • A survey research section outlining the questions asked and the answers obtained;
  • A summary of the insights and metrics used to draw the conclusions, the research methods chosen, and why;
  • A presentation of the findings based on your research and an in-depth explanation of these conclusions.
8. Use a mix of visualizations:

When presenting your results and discoveries, you should aim to use a balanced mix of text, graphs, charts, and interactive visualizations.

Using your summary as a guide, you should decide which type of visualization will present each specific piece of market research data most effectively (often, the easier to understand and more accessible, the better).

Doing so will allow you to create a story that will put your research information into a living, breathing context, providing a level of insight you need to transform industry, competitor, or consumer info or feedback into actionable strategies and initiatives.

9. Be careful not to mislead:

Expanding on the point above, using a mix of visuals can prove highly valuable in presenting your results in an engaging and understandable way. That being said, when not used correctly, graphs and charts can also become misleading. This is a popular practice in the media, news, and politics, where designers tweak the visuals to manipulate the masses into believing a certain conclusion. This is a very unethical practice that can also happen by mistake when you don’t pick the right chart or are not using it in the correct way. Therefore, it is important to outline the message you are trying to convey and pick the chart type that will best suit those needs. 

Additionally, you should also be careful with the data you choose to display, as it can also become misleading. This can happen if you, for example, cherry-pick data, which means only showing insights that prove a conclusion instead of the bigger picture. Another pitfall is confusing correlation with causation, which means assuming that because two events happened simultaneously, one caused the other.

Being aware of these practices is of utmost importance as objectivity is crucial when it comes to dealing with data analytics, especially if you are presenting results to clients. Our guides on misleading statistics and misleading data visualizations can help you learn more about this important topic. 

10. Use professional dashboards:

To optimize your market research discoveries, you must work with a dynamic business dashboard . Not only are modern dashboards presentable and customizable, but they will offer you past, predictive, and real-time insights that are accurate, interactive, and yield long-lasting results.

All market research companies or businesses gathering industry or consumer-based information will benefit from professional dashboards, as they offer a highly powerful means of presenting your data in a way everyone can understand. And when that happens, everyone wins.

Did you know? The interactive nature of modern dashboards like datapine also offers the ability to quickly filter specific pockets of information with ease, offering swift access to invaluable insights.

11. Prioritize interactivity:

The times when reports were static are long gone. Today, to extract the maximum value out of your research data, you need to be able to explore the information and answer any critical questions that arise during the presentation of results. To do so, modern reporting tools provide multiple interactivity features to help you bring your research results to life. 

For instance, a drill-down filter lets you go into lower levels of hierarchical data without generating another graph. For example, imagine you surveyed customers from 10 different countries. In your report, you have a chart displaying the number of customers by country, but you want to analyze a specific country in detail. A drill down filter would enable you to click on a specific country and display data by city on that same chart. Even better, a global filter would allow you to filter the entire report to show only results for that specific country. 
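The exact mechanics of a drill-down depend on your reporting tool, but the underlying idea can be sketched in a few lines of Python with pandas (hypothetical country and city data): the same dataset backs both the country-level chart and the city-level view you drill into.

```python
# Hypothetical sketch of a country -> city drill-down over the same survey dataset.
import pandas as pd

survey = pd.DataFrame({
    "country": ["Spain", "Spain", "Germany", "Germany", "Germany"],
    "city": ["Madrid", "Barcelona", "Berlin", "Munich", "Berlin"],
    "customers": [120, 80, 150, 60, 40],
})

# Top-level view: customers by country (the chart everyone sees first).
print(survey.groupby("country")["customers"].sum())

# Drill-down: clicking "Germany" would re-render the same chart broken down by city.
germany = survey[survey["country"] == "Germany"]
print(germany.groupby("city")["customers"].sum())
```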

Through the use of interactive filters, such as the ones we just mentioned, you’ll not only make the presentation of results more efficient and insightful, but you’ll also avoid generating pages-long reports to display static results. All your information will be displayed on a single interactive page that can be filtered and explored as needed.

12. Customize the reports:

This tip is valuable for any kind of research report, especially for agencies reporting to external clients. Customizing the report to match your client’s colors, logo, font, and overall branding will help them grasp the data better, thanks to a familiar environment. This matters because your audience will often not feel comfortable dealing with data and might find it hard to understand or even intimidating. Therefore, providing a familiar look that is also interactive and easier to understand will keep them engaged and collaborative throughout the process.

Plus, customizing the overall appearance of the report will also make your agency look more professional, adding extra value to your service. 

13. Know your design essentials:

When you’re presenting your market research report sample to internal or external stakeholders, having a firm grasp of fundamental design principles will make your metrics and insights far more persuasive and compelling.

By arranging your metrics in a balanced and logical format, you can guide users toward key pockets of information exactly when needed. In turn, this will improve decision-making and navigation, making your reports as impactful as possible.

For essential tips, read our 23 dashboard design principles & best practices to enhance your analytics process.

14. Think of security and privacy:

Cyberattacks are increasing at a concerning pace, making security a huge priority for organizations of all sizes today. The costs of having your sensitive information leaked are not only financial but also reputational, as customers might not trust you again if their data ends up in the wrong hands. Given that market research analysis is often performed by agencies that handle data from clients, security and privacy should be a top priority.  

To ensure the required security and privacy, it is necessary to invest in the right tools to present your research results. For instance, tools such as datapine offer enterprise-level security protocols that ensure your information is encrypted and protected at all times. Plus, the tool also offers additional security features, such as being able to share your reports through a password-protected URL or to set viewer rights to ensure only the right people can access and manipulate the data. 

15. Keep on improving & evolving:

Each time you gather new market research intel or reports, you should aim to refine your existing dashboards to reflect the ever-changing landscape around you.

If you update your reports and dashboards according to the new research you conduct and new insights you connect with, you will squeeze maximum value from your metrics, enjoying consistent development in the process.

Types of Market Research Reports: Primary & Secondary Research

With so many market research examples and such little time, knowing how to best present your insights under pressure can prove tricky.

To squeeze every last drop of value from your market research efforts and empower everyone with access to the right information, you should arrange your information into two main groups: primary research and secondary research.

A. Primary research

Primary research is based on acquiring direct or first-hand information related to your industry or sector and the customers linked to it.

Exploratory primary research is an initial form of information collection where your team might set out to identify potential issues, opportunities, and pain points related to your business or industry. This type of research is usually carried out in the form of general surveys or open-ended consumer Q&As, which nowadays are often performed online rather than offline.

Specific primary research is definitive, with information gathered based on the issues, information, opportunities, or pain points your business has already uncovered. When doing this kind of research, you can drill down into a specific segment of your customers and seek answers to the opportunities, issues, or pain points in question.

When you’re conducting primary research to feed into your market research reporting efforts, it’s important to find reliable information sources. The most effective primary research sources include:

  • Consumer-based statistical data
  • Social media content
  • Polls and Q&A
  • Trend-based insights
  • Competitor research
  • First-hand interviews

B. Secondary research

Secondary research refers to every strand of relevant data or public record you can use to gain a deeper insight into your market and target consumers. These sources include trend reports, market stats, industry-centric content, and sales insights you have at your disposal. Secondary research is also an effective way of gathering valuable intelligence about your competitors.

You can gather very precise, insightful secondary market research insights from:

  • Public records and resources like Census data, governmental reports, or labor stats
  • Commercial resources like Gartner, Statista, or Forrester
  • Articles, documentaries, and interview transcripts

Another essential branch of both primary and secondary research is internal intelligence. When it comes to efficient market research reporting examples that will benefit your organization, looking inward is a powerful move. 

Existing sales, demographic, or marketing performance insights will lead you to valuable conclusions. Curating internal information will ensure your market research discoveries are well-rounded while helping you connect with the information that will ultimately give you a panoramic view of your target market. 

By understanding both types of research and how they can offer value to your business, you can carefully choose the right informational sources, gather a wide range of intelligence related to your specific niche, and, ultimately, choose the right market research report sample for your specific needs.

If you tailor your market research report format to the type of research you conduct, you will present your visualizations in a way that provides the right people with the right insights, rather than throwing bundles of facts and figures on the wall, hoping that some of them stick.

Taking ample time to explore a range of primary and secondary sources will give your discoveries genuine context. By doing so, you will have a wealth of actionable consumer and competitor insights at your disposal at every stage of your organization’s development (a priceless weapon in an increasingly competitive digital age). 

Dynamic market research is the cornerstone of business development, and a dashboard builder is the vessel that brings these all-important insights to life. Once you get into that mindset, you will ensure that your research results always deliver maximum value.

Common Challenges & Mistakes Of Market Research Reporting & Analysis

We’ve explored different types of market research analysis examples and considered how to conduct effective research. Now, it’s time to look at the key mistakes and challenges of market research reporting. Let’s start with the mistakes.

The mistakes

  • Strategy

One of the biggest mistakes that stunts the success of a company’s market research efforts is the lack of a clear strategy. Without taking the time to gather an adequate mix of insights from various sources and define your key aims or goals, your processes will become disjointed. You will also suffer from a severe lack of organizational vision.

For your market research-centric strategy to work, everyone within the company must be on the same page. Your core aims and objectives must align throughout the business, and everyone must be clear on their specific role. If you craft a collaborative strategy and decide on your informational sources from the very start of your journey, your strategy will deliver true growth and intelligence.

  • Measurement

Another classic market research mistake is measurement – or, more accurately, a lack of precise measurement. When embarking on market intelligence gathering processes, many companies fail to select the right KPIs and set the correct benchmarks for the task at hand. Without clearly defined goals, many organizations end up with a market analysis report format that offers little or no value in terms of decision-making or market insights.

To drive growth with your market research efforts, you must set clearly defined KPIs that align with your specific goals, aims, and desired outcomes.

  • Competition

A common mistake among many new or scaling companies is failing to explore and examine the competition. This will leave you with gaping informational blindspots. To truly benefit from market research, you must gather valuable nuggets of information from every key source available. Rather than solely looking at your consumers and the wider market (which is incredibly important), you should take the time to see what approach your direct competitors have adopted while getting to grips with their content and communications.

One of the most effective ways of doing so (and avoiding such a monumental market research mistake) is by signing up for your competitors’ mailing lists, downloading their apps, and examining their social media content. This will give you inspiration for your own efforts while allowing you to exploit any gaps in the market that your competitors are failing to fill.

The challenges

  • Informational quality

We may have an almost infinite wealth of informational insights at our fingertips, but when it comes to market research, knowing which information to trust can prove an uphill struggle.

When working with metrics, many companies risk acting on inaccurate insights or being led down a fruitless informational rabbit hole, wasting valuable time and resources in the process. To avoid such a mishap, working with a trusted, modern market research and analysis sample is the only way forward.

  • Senior buy-in

Another pressing market research challenge that stunts organizational growth is the simple case of senior buy-in. While almost every senior decision-maker knows that market research is an essential component of a successful commercial strategy, many are reluctant to invest an ample amount of time or money in the pursuit.

The best way to overcome such a challenge is by building a case that defines exactly how your market research strategies will offer a healthy ROI to every key aspect of the organization, from marketing and sales to customer experience (CX) and beyond.

  • Response rates

Low interview, focus group, or poll response rates can have a serious impact on the success and value of your market research strategy. Even with adequate senior buy-in, you can’t always guarantee that you will get enough responses from early-round interviews or poll requests. If you don’t, your market research discoveries run the risk of being shallow or offering little in the way of actionable insight.

To overcome this common challenge, you can improve the incentive you offer your market research prospects while networking across various platforms to discover new contact opportunities. Changing the tone of voice of your ads or emails will also help boost your consumer or client response rates.

Bringing Your Reports a Step Further

Even though it is still widely used for presenting market research results, PowerPoint at this stage is a hassle and comes with many downsides and complications. When busy managers or short-on-time top executives grab a report, they want a quick overview that gives them an idea of the results and the big picture that addresses the objectives: they need a dashboard. This applies to all areas of a business that need fast and interactive data visualizations to support their decision-making.

We all know that a picture conveys more information than simple text or figures, so managing to bring it all together on an actionable dashboard will convey your message more efficiently. Besides, market research dashboards have the incredible advantage of always being up-to-date since they work with real-time insights: the synchronization/updating nightmare of dozens of PowerPoint slides doesn’t exist for you anymore. This is particularly helpful for tracking studies performed over time that recurrently need their data to be updated with more recent ones.

In today’s fast-paced business environment, companies must identify and grab new opportunities as they arise while steering clear of threats and adapting quickly. To always be a step ahead and make the right decisions, it is critical to perform market research studies that provide the information needed to make important decisions with confidence.

We’ve asked the question, “What is a market research report?”, and examined the dynamics of a modern market research report example, and one thing’s for sure: a visual market research report is the best way to understand your customer and thus increase their satisfaction by meeting their expectations head-on. 

From looking at a sample of a market research report, it’s also clear that modern dashboards help you see what is influencing your business with clarity, understand where your brand is situated in the market, and gauge the temperature of your niche or industry before a product or service launch. Once all the studies are done, you must present them efficiently to ensure everyone in the business can make the right decisions that result in real progress. Market research reports are your key allies in the matter.

To start presenting your results with efficient, interactive, dynamic research reports and win on tomorrow’s commercial battlefield, try our dashboard reporting software and test every feature with our 14-day free trial !


National Academies of Sciences, Engineering, and Medicine; Health and Medicine Division; Board on Health Sciences Policy; Committee on the Return of Individual-Specific Research Results Generated in Research Laboratories; Downey AS, Busta ER, Mancher M, et al., editors. Returning Individual Research Results to Participants: Guidance for a New Research Paradigm. Washington (DC): National Academies Press (US); 2018 Jul 10.


5 Advancing Practices for Returning Individual Research Results

In the previous chapters, the committee addresses why returning results provides value to participants and scientific stakeholders, what research results could be returned, and the timing of returning individual research results. This chapter focuses on the “how.” As discussed earlier in this report, the return of individual research results is a natural progression in the push for increasing transparency in the research enterprise, and the committee envisions a future where participants have greater access to their individual research results. The committee acknowledges, however, that expanding the return of research results places new demands on the research enterprise, including the development of needed expertise on study teams and assembling the resources needed to offer and return individual research results appropriately. Inconsistency in practices will need to be addressed in order to minimize the risk of harm from the return of results, an evidence base will be needed for the development of best practices for returning results, best practices will need to be developed and disseminated, and these best practices will need to be broadly implemented in order to prevent inequities. Recognizing that it will take time to fully implement best practices for the return of results—and that in the immediate term this will be an aspirational target—the committee sees opportunity for incremental progress. In the beginning, a number of relatively simple measures (“low-hanging fruit”) could be implemented in ongoing and near-term studies without prohibitive investments of time or resources. These early steps have the potential to help the research enterprise begin to develop an evidence base for the return of results and will be important when working toward the committee's vision of a broad return of research results, as discussed in the previous chapters.

In this chapter the committee provides some concrete strategies for advancing practices for offering and returning results, including setting appropriate expectations for participants (for example, in the consent process) and incorporating established principles for effective communication into the return-of-results process. The chapter also discusses how the appropriate return of individual research results requires investment and careful forethought regarding the necessary contextualizing information, takeaway messages, and disclaimers. To return research results effectively will require research stakeholders to consider how to communicate in ways that are appropriate for participants with different needs, resources, and backgrounds. Returning research results can be done (and it can be done well), up-front investments can be scalable, and the development of best practices over time will improve the consistency and quality of the process of returning individual research results.

OPPORTUNITIES TO IMPROVE THE RETURN OF RESEARCH RESULTS: LEARNING FROM CURRENT PRACTICES

Given the complexity and uncertainty often inherent in research results, research teams would benefit from guidance on how to accomplish the challenging task of accurately communicating research results to individual participants. Investigators will need to understand how to effectively enable understanding and simultaneously communicate how to use individual research results when appropriate and how to caution against overuse. Importantly, previous experiences with returning results in health care and research settings can inform future best practice and guidance development by helping pinpoint what is effective and what is not with different groups of participants. In addition, principles for the disclosure of risks and benefits in the informed consent process will need to be adapted for use in best practices for the return of results.

Learning from the Return of Clinical Test Results: Opportunities and Limitations

The health care enterprise has considerable experience with the generation, interpretation, and return of clinical test results. In most clinical contexts, the flow of information passes through a clinician before reaching the patient. The clinician's role, therefore, has been one part gatekeeper and one part interpreter. It is important to note, however, that information technology is increasingly changing this pattern. In many health care systems, patients can access laboratory test results directly through patient portals to electronic record systems, thereby reviewing these data without a clinician present to explain the results and their significance ( AHA, 2016 ). Furthermore, health systems vary in the degree that clinicians are required to review or annotate results before they are released to patients. Direct-to-consumer testing represents another model for the direct return of results to an individual, one in which a clinician may not even know that a test has been conducted until the patient presents the result report to their physician ( O'Connor, 2016 ).

While the health care delivery experience may offer lessons for the return of research results, this is not to say that best practices for communication are always (or even usually) applied in clinical practice. Research indicates that the current level of information provided with clinical test results may be insufficient to enable patients to understand their meaning ( O'Kane et al., 2015 ). Clinical biomarker results, for example, are generally returned in numerical or tabular form with a standard reference range. However, recent evidence suggests that many patients struggle to determine whether a result is inside or outside of the standard reference range, which is the most basic form of understanding needed for meaningful use ( Zikmund-Fisher et al., 2014 ). Sometimes (but not always) results are also accompanied by an interpretive statement from the ordering clinician, but the language used in such statements may vary across clinicians and situations. Despite this, there are situations in which clinical results are returned with additional contextual information where the purpose of the test and the information generated during the test are addressed. For example, in clinical genetics patients are often given substantial contextual information (e.g., counseling, the meaning of a negative result, clear statements of known impact of particular mutations) to help them understand their results ( Haga et al., 2014 ). The same practices may be appropriate for research-based genetic testing, although research results may be associated with greater uncertainty, which may require further clarification.

Challenges communicating clinical test results and other medical information effectively may stem, in part, from gaps in health literacy 1 and other forms of literacy, such as graph literacy and health numeracy. 2 In 2006 the National Center for Education Statistics released a National Assessment of Adult Literacy and found that “the majority of adults (53 percent) had intermediate health literacy while about 22 percent had basic and 14 percent had below basic health literacy” ( National Center for Education Statistics, 2006 , p. v). Extensive research shows that low health literacy, poor numeracy, poor graphical literacy ( Joint Commission, 2007 ), and language barriers all impede an individual's ability to interpret and use information such as test result communications ( Rodríguez et al., 2013 ; Zikmund-Fisher et al., 2014 , 2017 ). This underscores the importance of understanding the limitations that poor literacy may impose on understanding and emphasizes the importance of clear communication in the provision of health information ( Joint Commission, 2007 ), including clinical and research test results. To address literacy and numeracy barriers, such information needs to be provided in a format and with content that is accessible to the target audience ( Parker et al., 2016 ). This may entail

  • creating materials in users' primary languages and considering language-based sources of misunderstanding to address language barriers,
  • creating materials that reflect participants' preferences regarding terminology,
  • using plain language to overcome low literacy ( CDC, 2016 ; IOM, 2014 ), and
  • using evidence-based formats that facilitate understanding of quantitative information by those with low numeracy and graphical literacy ( IOM, 2014 ).

In addition, the National Academies of Sciences, Engineering, and Medicine's Roundtable on Health Literacy published a perspective on health literacy and precision medicine, which concluded that “participant input into the crafting of clear, navigable, and useful messages and processes” is a hard-learned lesson from the field of health literacy ( Parker et al., 2016 , p. 3). While those in the field of health care have acknowledged these gaps in practice which inhibit patient understanding and have made strides to correct this, there are still areas where improvements can be made to the processes of clinical test return and messaging. Research has the opportunity to learn from both the good and the bad in clinical test return. Doing so will allow the research enterprise to shape the return of research results into a practice that simultaneously benefits the participant most fully and is done in a way that does not burden the investigator. However, research sponsors and funding agencies will need to support an assessment of best practices and how to apply these to a research context first.

CONCLUSION: Many existing practices in the return of clinical results are potentially applicable to the return of individual research results, but they will need to be critically evaluated before they are adopted in the return-of-research-results context.

Learning from Current Practices in Return of Individual Research Results

Research results differ substantially from clinical test results in a number of ways, which limits the degree to which clinical experience can offer guidance on the return of research results. Most notably, research results are often associated with a greater degree of uncertainty as a result of incomplete scientific knowledge, and the uncertainties present at the level of individual results are even larger than the uncertainties present in aggregate results. However, as research continues, quality management systems are adopted by research laboratories, and evidence accumulates, the uncertainty in research test results can be reduced.

When patients' results are returned by the treating clinicians or clinical laboratories, the results are often accompanied by well-established population distributions or reference ranges 3 that enable interpretation by the patient and clinician ( Medscape, 2014 ). Expected reference ranges for clinical tests (e.g., blood counts) are known because the results are generated by standardized procedures used across broad populations of patients which allow for the establishment of normal result ranges for different patient characteristics, such as age or gender. In contrast, because of the significant variability in practices used in research settings, a result may need to be accompanied by documentation on what was actually done or not done in order to evaluate its meaning (and potential value or actionability). Moreover, reference information (e.g., standard ranges) for research results is often unavailable, non-representative, or unreliable for understanding whether a result is normal or abnormal and for guiding decision making. As discussed in more detail later in this chapter, research teams will need to think carefully about what reference information is available and potentially valuable for use in communicating with participants about the meaning of their individual results.

Uncertainty is difficult to communicate, particularly when it relates to something that is already probabilistic in nature, such as genetic-related risk; therefore, uncertainty is often ignored ( Han et al., 2011 ). A critical part of the return of research results, uncertainty needs to be conveyed effectively, or else investigators risk the participant putting too much or too little trust in the results. As discussed in more detail later in this chapter, attention needs to be paid to providing reference information that enables participants (and, in some cases, their treating physicians) to be able to interpret and understand the potential (or lack thereof) for using the research results.

Although the return of individual results is not currently widespread among research studies, certain investigators are already returning research results to individual participants. This is particularly true in the fields of genetics and environmental health (discussed in the sections “Returning Individual Genetic Research Results” and “Environmental Health and the Return of Individual Research Results”). These fields' experiences with the return of research results may be valuable in the development of best practices and guidance for other types of research results.

Returning Individual Genetic Research Results

In the field of genetics, some research investigators and direct-to-consumer (DTC) companies have been using and exploring methods for returning individual results for years. Numerous surveys have been done to assess customer comprehension and interpretation and the psychological effects on customers of receiving their genetic results. While usability research has helped to mitigate concerns, the possibility that customers may not fully comprehend or will misunderstand results is always a worry. For example, the Food and Drug Administration (FDA) decision summaries for 23andMe carrier screening and genetic health risk tests include special controls that describe not only the criteria for user comprehension studies and the required performance on comprehension assessments, but the specific language that must be included when reporting results to the lay user to convey the likelihood that a particular positive test was in fact positive ( FDA, 2015 , 2017a , b ). These studies find that consumers may overrate their ability to interpret test results, which may help explain why consumers are not likely to consult health professionals for assistance with test interpretation, even when such services are made available (e.g., genetic counseling offered via telephone) ( Roberts and Ostergren, 2013 ). One important conclusion from studies evaluating consumer comprehension of DTC genome testing is that

there may not be a one-size-fits-all approach to communicating genetic test information. Greater tailoring of the presentation of personal genetic testing information based on individual characteristics and type of test result may be needed—especially when results are not delivered in a clinical setting or via a trained health care professional. ( Ostergren et al., 2015 , p. 9)

In the 1990s, when the link between BRCA and breast and ovarian cancer was being established (prior to the development of a clinical test), a group at the University of Michigan developed a process for returning results to family members involved in a linkage study. 4 The process involved pre-counseling education and assessment, during which the risks and benefits of receiving results were explained and informed consent was obtained, and also a post-testing disclosure of results with clinical counseling by a multidisciplinary team ( Biesecker et al., 1993 ).

Similarly, a survey of investigators who planned to return genetic research results found that the investigators frequently used more than one method for return, with the results most commonly returned using a genetic counselor or other trained professional ( Heaney et al., 2010 ). The genetic counseling community is a rich source of expertise and experience in explaining laboratory test results to individuals. These professionals have skills and an understanding of genetic disorders combined with an education in laboratory methods that allows them to communicate effectively about test results, accuracy, interpretation, and limitations (what the test results do and do not mean) ( Doyle et al., 2016 ; Miller et al., 2014 ; Patch and Middleton, 2018 ). In addition, these professionals focus on tailoring the return of complex information so as to respect the cultural, religious, and ethnic beliefs of the participants ( Warren, 2011 ; Weil, 2001 ). It may be useful to engage genetic counselors once discussions progress to the design and implementation of return-of-results communication plans. Other methods used by investigators for the return of results included telephone, mail, in-person meetings, referral to a physician, and e-mail. While some investigators were more inclined to return results if they had a medical degree and were able to provide detailed information to the participant in the context of the participant's personal health care, other investigators found that it was not always necessary to use a care provider to return results and interact with the participant.

A number of studies have emphasized the importance of the relationship between researchers and clinicians. For example, in the Framingham Heart Study results are given to the treating physician, who interprets results for the participant. 5 Geisinger Health System places genetic results in the electronic health records (EHRs) and notifies the primary care physician, who then discusses the results with their patient. 6 Additionally, a study returning results for genome sequences associated with pancreatic cancer emphasized that the ideal scenario for return would be one in which a close relationship existed between researchers and clinicians in order to enable full communication among investigators, clinical teams, and the participant ( Johns et al., 2014 ).

However, this level of face-to-face communication with the input of a physician is not always possible, nor always necessary. Wendy Chung, the Kennedy Family Professor of Pediatrics and Medicine at the Columbia University College of Physicians and Surgeons, has discussed the variety of methods used by her team to return research results in their studies of the genetic basis of human diseases ( Wynn et al., 2017 ). 7 The communication methods employed included giving participants the option of receiving results with a genetic counselor present to enable in-depth interpretation and contextualization of the genetic results or providing participants their nucleotide sequence data in a BAM file, 8 leaving interpretation up to the participant (perhaps through the use of outside interpretive services the participant could pay for) ( Wynn et al., 2017 ). In providing a BAM file to the participants, Chung said, she was not concerned that they would not understand the results, but rather she was concerned about perpetuating health disparities—by providing only the sequence data to participants, it could put those who could not afford outside services for analysis at a disadvantage. 9 However, Chung did caution against providing participants a VCF 10 file containing a list of their genetic variants because the genetics community is not in consensus about what many variants mean, so providing these files could lead to misunderstanding on the part of the participants. 11 Similarly, Jessica Langbaum of the Banner Alzheimer's Institute described options for returning genetic results, including in-person counseling, telemedicine, and Web modules. She said that the field is still struggling to determine what delivery modalities are available, scalable, and most appropriate and that further work needs to be done. 12

The various practices discussed above ultimately demonstrate that the return of results involves varying types of data, can be done using a wide range of methods, and can be tailored to the nature of the research being conducted. This heterogeneity represents a significant challenge to the design of return-of-results processes, particularly when potentially incorporating participants' varying preferences. There is both value in adjusting the format or language of communication according to participant preferences and evidence that what participants say they want is not always what will maximize their comprehension. Because the trade-offs may be different in different situations, the committee suggests that investigators should consider incorporating participant preferences, but it has not specified exactly how that should be done.

Environmental Health and the Return of Individual Research Results

The return of research results from environmental health biomonitoring 13 studies is well established both in the literature and by guidelines proposed by expert groups ( Brody et al., 2014 ; Dunagan et al., 2013 ; Exley et al., 2015 ; Haines et al., 2011 ; Judge et al., 2016 ; Morello-Frosch et al., 2009 ; Quigley, 2012 ). The return of results in this field is done because the research participants generally have a significant interest in learning their individual research results for their own use and safety ( Brody et al., 2014 ). A key consideration in determining how best to report results in an environmental monitoring study is whether a known clinical range or action level has been established for the analyte being assessed (a point we reinforce later in this chapter). Where a clinical or preclinical effect is known, this knowledge allows better guidance to be provided to participants, particularly in terms of follow-up. As is the case with exposure to lead or arsenic, acceptable blood levels and public health procedures are defined with the goal of mitigating future exposure ( CDC, 2018 ; WHO, 2017 ), although it is not uncommon for such guidance to change over time. For example, the Maternal–Infant Research of Environmental Chemicals study used predetermined guidelines to define its return and communication strategy, specifically, whether a result exceeded normal levels and might be associated with a health risk ( Haines et al., 2011 ). However, it is not uncommon for a chemical, pesticide, or other environmental contaminant to lack reference-range information (i.e., to be an analyte that is not well characterized in a population) or to have differing reference ranges or other biases in datasets that can cause challenges in interpretation ( NRC, 2006 ). Therefore, determining the meaning and clinical interpretation of such test results can be a challenge, and "reference ranges do not provide conclusions on safety or risk. Presenting that fact and other limitations is an essential aspect of communicating reference-range information to individuals, the general public, and organizational decision-makers" ( NRC, 2006 , p. 151).

The return of research results with unknown clinical significance is also practiced in environmental health research. In 1999 the Household Exposure Study, which focused on identifying 89 endocrine-disrupting compounds, grappled with questions of whether the results (both from biomonitoring and environmental samples) should be returned to participants, including those results with unknown clinical meaning. Ultimately, after consideration of ethical guidelines and in consultation with community members, investigators allowed participants to access their individual and household results ( Brody et al., 2007 , 2014 ; Dunagan et al., 2013 ). Similarly, in 2004 the University of Michigan Dioxin Exposure Study, which conducted tests for the presence of 29 dioxins, furans, and polychlorinated biphenyls in participants' blood, household dust, and residential property soil, also gave participants the option to choose whether they would receive the results from each of their samples ( Garabrant et al., 2009 ). This option was provided for two key reasons. First, regulations were not available for the dioxin content of household dust, nor were medical guidelines available for the interpretation of serum dioxin levels at the time. Second, the researchers were aware that the disclosure of soil levels to property owners could cause those participants financial harm by affecting their property values.

In general, this literature concludes that the unknown should not dissuade investigators from returning results with uncertain meaning because “what little evidence we have suggests that a globally uncertainty-averse public is a myth; responses [to receiving uncertain information] vary widely across the population” ( NRC, 2006 , p. 207). This variability does, however, emphasize the need to return information with the input from the community or study population as results can often have community-wide implications or health risks.

As the committee heard in discussions with environmental health researchers, those participating in environmental exposure studies frequently want to know their results because they are the ones carrying the products of these exposures in their bodies. 14 For this reason, investigators in this field may feel a greater need to return such results. Such studies also frequently take place in communities where several households are affected and, therefore, the results of the study will likely be translatable to many in the community. To this end, investigators may use community partnerships in the design of communication plans. In a study by Erin Haynes of the University of Cincinnati, community engagement was used to develop the methods of communication used in the return of results ( Haynes et al., 2016 ). Working together, the study team and community members developed easy-to-read graphics and written materials tailored to the reading level of the recipients as well as a comparison to help in the interpretation of their results (i.e., comparing a recipient's results with those from other studies or for other children). The research team found that including community input in the development of its dissemination plans helped them translate biological data into a format that was usable by the target audience. Haynes et al. concluded that “scientists should include community partners from the target population in the development of research and data disclosure strategies in order to enhance the quality of research, to support the rights of the study participants to know their individual results, and to increase environmental health literacy” ( Haynes et al., 2016 , p. A26). See Box 5-1 for select engagement and communication practices for the return of research results in environmental health.

Select Engagement and Communication Practices for the Return of Research Results in Environmental Health.

CONCLUSION: Current research projects that return research results to individual participants use a variety of practices that have been tailored to reflect differences in study goals, populations, types of results, and other factors.

Applying Principles for Effective Communication to the Return of Research Results

Applying existing principles for clear communication represents a concrete strategy for improving the quality of return-of-results practices. While the body of evidence is still small, these issues have begun to be examined in health communication and environmental health studies. California law, for example, requires that biomonitoring results be made available to participants, and the state has conducted usability testing for content, allowing others to benefit from this work ( Biomonitoring California, 2018 ; Brown-Williams and Morello-Frosch, 2011 ). More empirical testing is needed to guide stakeholders, but there is work already occurring in this arena (as discussed above). IRBs would benefit from using best practices and reviewing the literature outside of their field; e.g., biomedical scientists can benefit from the existing guidance in environmental monitoring in developing their return-of-results communication plans. IRBs do not need to rely on gut opinions when evidence-based guidance exists and can inform participant and community input in plans. The key principles in communication that have been identified include (1) taking audience characteristics and needs into consideration and (2) having a clearly defined communication objective (i.e., what cognitive, emotional, motivational, or behavioral outcomes should ideally result from the communication) ( Haga et al., 2014 ; Nelson et al., 2009 ; Schiavo, 2014 ).

Consideration of audience characteristics and needs includes taking into account how much background knowledge a research participant has (i.e., what he or she knows about a particular disease or condition, about research, etc.) and what kinds of experiences the participant has had in the past. Research studies need to approach all participants and every community with respect and cultural humility. Doing so supports the development of trust between researchers and participants, and such trust is especially important given the known history of exploitation of racial and ethnic minority groups and of individuals with intellectual disabilities ( Carlson, 2013 ; Corbie-Smith et al., 2002 ; Yancey et al., 2006 ). Because different stakeholders will have varied perspectives and preferences, those differences need to be considered and weighed. It may be necessary to design separate return-of-results communication plans for different stakeholder groups, since something designed for one audience is likely to be non-optimal for other audiences. As a result, a one-size-fits-all approach will rarely be effective in results communication.

Research studies are designed to produce generalizable information applicable to a broad population, but the results have meaning for multiple users, from the participants who contributed to the study to the investigators who ran it. An individual result can be interpreted both as a characteristic of a single participant and as relevant or meaningful to family members, a physical community, or a demographic group, which may have implications for the communication approach. For example, the discovery of a genetic variant in a participant provides information about that participant's future disease risk, but, if the variant is heritable, the discovery may also offer information about family members' risks and lead to generalizations about a group's risk. Similarly, an environmental exposure result may be relevant not only to the participant but potentially to others who share that environment (e.g., family members, neighbors, coworkers).

Using layered presentations of information is a key communication approach for meeting different needs. For example, many communications should start with a clear and concise summary of the primary points that is designed to be maximally understandable to all users. However, providing access to more detailed information (which may be more difficult to understand) is often beneficial for users with greater personal interest, literacy, or numeracy skills. When participants have different informational baselines and literacy levels, research teams will need to consider how much background information to provide to each audience. For some people, “less is more,” while for others, “more is more” ( Arcia et al., 2016 ).

When returning results to participants, investigators need a clearly defined communication objective and should consider what specific change in knowledge, beliefs, motivation, or behavior is intended. The objectives of the communication will need to take into account the individuals' needs more than the investigators' needs, and they should be focused, with just one or a few objectives. A general truth is that the more one attempts to convey in a communication, the less effective that communication is likely to be ( Heath and Heath, 2007 ).

Good design practices can significantly improve people's ability to overcome communication barriers. For example, the CDC's Clear Communication Index identifies key characteristics that enhance and aid people's understanding of information. These include the use of materials translated into the recipient's primary language, use of plain language with minimal jargon, use of good visual design principles, and use of evidence-based visual displays of data ( CDC, 2016 ; Kosslyn, 2006 ; Plain Language Action and Information Network, 2018 ; Tufte, 2001 ). These practices represent minimum standards that all results communications (including clinical results) should achieve. As such, they should be included as part of training initiatives for investigators and clinicians as the research enterprise works to build the necessary expertise for effective return of results.

  • SETTING PARTICIPANT EXPECTATIONS IN THE CONSENT PROCESS AND BEYOND

In returning research results to participants, investigators should set participants' expectations up front ( Tarrant et al., 2015 ). This will require investigators to plan for when and how results will be returned early in the study process, both so that participant preferences can be incorporated in the study design and so that participant expectations for the return of results can be addressed during the initial consent process. Addressing expectations during the initial consent process not only helps build trust between the researcher and participants, but it also gives participants the information they need to decide whether to participate in the study.

Consent is more than just telling a participant what he or she should expect and ensuring participant comprehension. Consent design also prepares investigators for the role of administering consent; this requires investigators to establish a strategy for how consent will be administered (including the use of educational materials) ( Nusbaum et al., 2017 ). Consent may be a one-time event or an ongoing process, particularly if results will be returned at intermittent times over the course of the study. In particular, the traditional consent occurring only at the time of enrollment may not always be sufficient ( Appelbaum et al., 2014 ). There are several key issues related to the return of research results that investigators need to convey clearly to participants, regardless of the model of consent used. These include (1) what will be returned to research participants and how it will be returned, (2) the appropriate reference information and communication formats to enable understanding, and (3) the benefits and harms that may occur.

First, during the consent process, investigators will need clarity regarding what individual results will be offered to participants or what individual research results participants can access upon request; 15 when participants can expect results; the conditions under which researchers will alert participants of the availability of results; and how and when results will be communicated to participants ( Fernandez et al., 2012 ; Simon et al., 2011 ). The Multi-Regional Clinical Trials collaborative has developed a toolkit that provides guidance for informed consent documents and processes for the return of general as well as genomic research results ( MRCT Center, 2017b ). In planning the consent process, investigators also will need to consider whether participants have a right to request and receive their results under the Health Insurance Portability and Accountability Act of 1996 (HIPAA) access right (i.e., when research laboratories operate as part of a HIPAA-covered entity, discussed more in Chapter 6 ). It is not clear how many participants (or patients) are aware of their HIPAA access rights, but researchers and institutions have an obligation to disclose participants' right to access research results under HIPAA, when applicable. Regardless, information about access rights should not be buried in the consent form. Rather, this particular pathway for accessing results, when applicable, should be made clear during the consent process. The consent should also explain how results will be returned in response to a request under HIPAA. The HIPAA access right may grant access to raw data, but it does not require that participants receive a tailored message as might be expected in a clinical care setting. Still, while HIPAA does not require the investigator to provide interpretation, in any case where results are to be returned, the goal should be to provide them in a way that is useful.

Second, due to the variability in research results and the frequent lack of clear reference information, participants may need help in determining whether they want the results and, if so, what the results might look like. To further shape participant expectations and guide decision making during the consent process regarding which results, if any, they would like to receive, it may be helpful to provide participants with examples of what results may look like ( NHGRI, 2018 ) and how they may experience possible outcomes of their choice ( Hibbard and Peters, 2003 ). Concrete examples can help people consider how they would feel or what they would do based on specific findings (versus whether they want to know “their results” in general) ( Kim et al., 2017 ), which may be particularly helpful when addressing the risks and benefits of receiving results. While descriptions of possible outcomes are important, case studies or hypothetical narratives may also be useful to enable participants to anticipate not just what the possible results might be but also their potential implications (medical, emotional, or otherwise) ( Shaffer et al., 2013 ). The examples that investigators provide of the types of results that might be returned would not be based on the participants' data, but rather would be derived from previous research that used similar assays; this will give participants a sense of what the information will look like upon return. Having participants engage in a brief values clarification exercise may help them determine what they care about and hence whether the receipt of different types of test results might confer benefits or risks to them ( Fagerlin et al., 2013 ; Holly et al., 2016 ).

Third, receiving either clinical or research test results can result in both benefits and harms, and it is critical to address these during the consent process. The benefits of receiving results may include the identification of treatable disorders, enhanced life planning, or increased knowledge about oneself. Certain types of results may have immediate practical benefits, and participants should be informed of the conditions under which researchers will alert them of the availability of urgent results. However, framing the possible value of information in a purely positive manner (overly focusing on benefits in relation to risks) is ethically inappropriate. The risks associated with the return of results can take the form of participant anxieties and fears or the misuse of research results in a medical context, leading to inappropriate medical or personal actions. In addition, results that are not actionable may cause emotional or other sorts of distress ( Zikmund-Fisher, 2017 ). Investigators will need to consider both the benefits and the risks prospectively, but under certain circumstances they may not even know that tests will be done; therefore, they may not always be able to offer participants a great deal of specificity when describing the potential benefits and risks ( Appelbaum et al., 2014 ). To adequately address the potential harms from return of research results, investigators will need to acknowledge the uncertainty in research and the possibility that non-useful information will be generated. Furthermore, in addition to sometimes lacking usefulness, research results may also sometimes be incorrect. For example, a research test may generate a false-positive 16 or false-negative 17 result, either of which can cause emotional, physical, or financial harm. Alternately, the understanding of the science behind the result may change, thereby affecting the meaning of the result for the participant. The emotional or psychological harms that may be associated with return should be discussed with participants during the consent process and again later in the study process, when investigators are actively returning results.

The design of the consent process should consider that participants' desires and willingness to take on risk may change over time and that the meaning of the results may change over time. As a result, participants ought to be given the opportunity to determine whether they want to receive their research results when they are eventually made available to the participants. Even if a participant consented to receiving results at the start of a study, he or she should have the opportunity to refuse (or accept) results once available. To accomplish this, when planning their studies investigators may need to consider models of flexible consent that will include return of research results. One option outside of a one-time consent is a staged model for consent ( Bunnik et al., 2013 ). Staged consent means that investigators “obtain consent in stages, with brief mention of [incidental findings] at the time of initial consent, but with more detailed consent obtained if and when reportable results are found” ( Appelbaum et al., 2014 , p. 6). The flexibility of staged consent models must, however, be weighed against the fact that participants who are re-contacted for further consent may infer (accurately or inaccurately) the type of result that has been found (i.e., positive or negative, good or bad) simply because of the new contact.

Models of Consent

Current consent processes are not standardized and are frequently inadequate to ensure understanding on the part of all participants. In fact, some research suggests that clinicians rarely meet even the minimum standards for disclosure necessary for the purposes of obtaining true consent ( Hall et al., 2012 ). Unfortunately, many investigators do not have appropriate training in consent practices. Furthermore, they can (much like participants) be susceptible to therapeutic misconception and may, as a result, convey biased messages to participants ( Larson et al., 2009 ).

In selecting a consent model and administering consent, investigators may want to consider how technology can facilitate the consent process. For example, technology can be a particularly helpful way to incorporate the principles of health literacy (as discussed previously). Health literacy has a strong impact on what individuals understand and how they use information related to health care and decision making. As such, investigators would benefit from capitalizing on best practices in health-literate informed consent (see Box 5-2 ). “The challenge is finding practical, non-onerous ways to respect persons' choices that have minimal negative effects on the science. Information technology may provide new opportunities to implement informed consent with minimal intrusion” ( Grady et al., 2017 , p. 857). For example, technology-assisted consent, such as the Apple Research Kit for mobile devices, which includes a layered approach to consent in which the formal consent document is augmented by a visual, animated sequence, helps the user better understand the consent contents ( ResearchKit, 2017 ).

Best Practices for Health-Literate Informed Consent Related to the Return of Individual Research Results.

Additionally, video-aided consent, like that used in the ADAPTABLE trial, can contribute to participant understanding ( ADAPTABLE Aspirin Study, 2018 ; Grady et al., 2017 ). Tele-consent is another method that enables researchers to remotely video-conference with prospective research participants. With tele-consent, investigators create a display that interactively guides participants in real time through a consent form, which they then electronically sign ( Welch et al., 2016 ). However, while the use of electronic methods for consent may offer advantages for the return of research results in terms of convenience as well as providing varied approaches (e.g., use of multimedia interactive formats) for increasing understanding of the information and making possible structured assessments of that understanding, there are also a number of challenges that need to be considered ( Welch et al., 2016 ). These challenges include the fact that many people do not read terms of agreement on computers and mobile devices; that there is a dearth of evidence regarding the advantages and disadvantages of electronic methods in terms of understanding of information; and that, because there are no face-to-face visits, verifying the identity of the individual giving consent may be difficult ( Grady et al., 2017 ; NPR, 2014 ).

In addition to ensuring that investigators meet the communication needs of participants with health-literate consent, investigators and IRBs will need to consider the trade-offs among consent models and formats, whether traditional paper or electronic. See Table 5-1 for an example of how the advantages and disadvantages of consent models were assessed for the return of secondary findings (also referred to as “incidental findings”). The committee considers secondary findings to be results that can be anticipated by the investigator, and considerations similar to those presented in the table can be applied to any anticipated result, whether or not it is the primary aim of the study or test. Fully assessing the models of consent and closing gaps in communication during consent, particularly with the added considerations that accompany returning results, will require training for investigators and clinicians. Such training will take concerted effort, but it has the potential to enhance benefits, minimize harm, and build trust in the research enterprise.

TABLE 5-1. Potential Advantages and Disadvantages of Models of Consent to Return of Secondary Findings.


CONCLUSION: Details regarding the return of individual research results to participants are currently only addressed during the consent processes on an ad hoc basis, creating inconsistency across studies and institutions and inadequately setting participant expectations.

CONCLUSION: How the return of individual research results is, or is not, addressed in the consent process affects participant expectations.

CONCLUSION: The heterogeneity of research study designs and populations means that different consent processes will be appropriate in different situations, but regardless of the type of consent process, clear communication appropriate to varying levels of health literacy is essential.
Recommendation 9: Ensure Transparency Regarding Return of Individual Research Results in the Consent Process. In the consent process, investigators should communicate in clear language to research participants

A. which individual research results participants can access, if requested, including any results participants have a legal right to access under HIPAA, and how to request these results; and

B. which individual research results, if any, will be offered to participants and why, and the participant's option to decline to receive their research results.

C. If results are going to be offered, the following elements should also be communicated during the consent process:

  1. the risks and benefits associated with receiving individual research results;
  2. conditions under which researchers will alert participants of urgent results;
  3. at what time and through what process results will be communicated to participants;
  4. whether the results will be placed in the participant's medical record and whether the results will be communicated to the participant's clinician; and
  5. when relevant to the research protocol, the participant's option to have results shared with family members in the event the participant becomes incapacitated or deceased.
  • EFFECTIVELY COMMUNICATING INDIVIDUAL RESEARCH RESULTS TO PARTICIPANTS

Once test results have been generated and the decision has been made to return these to research participants, investigators and institutions need to ensure that the results are delivered in an appropriate manner that achieves the communication goals and meets participants' needs. Optimal communication methods need to be determined on a study-by-study basis both because the goals for each study are different and because the research team will need to take into account context-dependent considerations, such as the type of the research results (and their associated uncertainty) and the characteristics of the participants. As discussed above, participants with low health literacy, low numeracy, low graph literacy, or limited English proficiency are likely to have more difficulty with interpreting the results and understanding what kinds of actions may be appropriate in response to the result ( Perzynski et al., 2013 ). Consequently, the processes for returning individual research results must either (1) use a “universal precautions” approach ( Brega et al., 2015 ), which assumes that all research participants may have difficulty comprehending the information and promotes communication in ways that anyone can understand, or (2) include tailored approaches to meet the information needs of the research participants who wish to have more detailed information. ( Box 5-3 highlights FDA experience with communication.)

Presenting Laboratory Results to Consumers—Experiences of the Food and Drug Administration.

Facilitating Understanding of the Meaning and Limitations of Results Through Reference Information

Having access to information is not the same as being able to understand and use that information. In particular, studies in both the consumer product marketing and medical decision-making fields have shown that people find it difficult to interpret unfamiliar data in the absence of relevant reference standards ( Hsee, 1996 ; Zikmund-Fisher et al., 2004 ). As a result, hard-to-evaluate information is often ignored or not used in decision making. Many recipients of clinical test results are unable to interpret them because of a lack of familiarity with test characteristics or the possible range of test outcomes. Furthermore, even when recipients know what the result is, they may not understand its practical meaning (in terms of whether concern or action is appropriate) ( O'Kane et al., 2015 ).

In sharing individual research results with participants (especially when results are offered as part of a return-of-results plan), research teams need to communicate not just what research or test was done, but why it was done and how. To improve the meaningfulness of research test results, especially those that are difficult to understand or that are generated from tests that are not commonly used, research teams need to provide clear cues regarding (1) how much participants should trust the result and (2) what the result means or what is not known about the meaning of the result. Such cues are needed because the laboratory tests used in research studies are more likely to produce hard-to-evaluate data: the tests may be novel, their analytic validity may be unknown or still being established, or their clinical validity may be unknown (see Chapter 3 for more details). To make it easier for participants to understand results, investigators need to pay attention to what reference information (e.g., standard reference ranges, comparative risks, or categorization information) is needed or appropriate for each type of result communication. The information provided with the result may dictate recipients' understanding and actions even more than the result itself. In some cases, results may need to be accompanied by multiple types of reference information (when available) to enable participant understanding.

To be clear, providing reference information for a result is not the same as providing personalized interpretation, such as clinical guidance. Clinical guidance requires integrating a research test result into the participant's individual circumstances (e.g., known medical conditions, family history). While such integration is sometimes expected in certain study contexts, investigators may not be clinicians or may not be familiar with the specific health of the participant, in which case providing clinical guidance would not be appropriate. Additionally, clinical guidance may be labor intensive, requiring investigators to tailor the research results and reference information to each individual participant's circumstances. Reference information, however, is a function of the test and the circumstances of the study but not of the individual. Consequently, providing reference information is scalable: investigators can more easily return results to a large number of participants because, in general, the reference information is applicable to all of them or to all similar participants receiving the same test. Emphasizing the identification and communication of appropriate reference standards is hence a cost-effective way of improving return-of-results communications.

Relevant reference information may be well established and standardized or may be unknown. For example, environmental contaminants such as radon and arsenic have established action thresholds or other benchmarks set by the Environmental Protection Agency. Similarly, standard clinical tests have established reference ranges (often interpreted as the range of normal values) and sometimes even pre-defined critical values (i.e., values high or low enough that a laboratory is obligated to immediately notify treating clinicians about the result to minimize associated risks; an example would be an elevated glucose level). In a genetics context, the impact of having a known BRCA1 mutation on lifetime breast cancer and ovarian cancer risk is relatively well established when family history is also known ( Paul and Paul, 2014 ). In other cases, some reference knowledge exists but no standard guidance is available; i.e., the reference information that is available cannot be generalized to a population. Alternately, reference information may not be well understood or may be completely unknown in research contexts. For example, safe or dangerous levels for a particular toxin or novel biomarker may not have been established. Dose–response relationships may be unknown or difficult to estimate for particular populations. Even relative standards, such as percentiles compared to reference distributions, may be unavailable or incorrectly used if no previous studies exist or if previous studies involved different populations, such as different racial or ethnic groups ( Holland and Palaniappan, 2012 ). Genetic variants often have no clear significance or correlations with health outcomes, and many times the prevalence of the variants in different populations is unknown ( Caswell-Jin et al., 2017 ; Saulsberry and Terry, 2013 ).

The more that is unknown about reference standards for a particular result, the more that the participant and either the investigator or the individual performing the communication should have a two-way communication to clarify “what this result means for me.” Clarification of meaning via dialogue is important not merely to improve participant understanding, but also to prevent an inaccurate interpretation or over-interpretation of results. When reference standards for a result are not known, investigators should weigh the benefits and risks of return and consider whether a return of aggregate results only would be more appropriate than a return of individual results. Regardless of whether aggregate or individual results are returned, the fact that reference information does not exist should be explicitly communicated to participants.

When developing a return-of-results plan, one explicit step should be the identification of appropriate reference information to be provided to participants. The reference information varies by the nature and type of results generated and by how informative the result is to the participant. Box 5-4 summarizes the kinds of reference information that may be appropriate to provide to participants, given the types of results that laboratories generate. Laboratory results are of two distinct types—continuous (e.g., biomarker levels that may vary across a continuous range of possible values) or binary (e.g., presence/absence of a genetic variant or marker). In the clinical laboratory, these types of results are referred to as quantitative and qualitative results, respectively.

Types of Reference Information.

Continuous or Quantitative Results

When communicating continuous results, providing relative standards to which an individual result can be compared (e.g., a second data point for comparison or an observed distribution) can provide a certain degree of meaning (i.e., that the current result is higher or lower). However, relative standards may not sufficiently convey whether action should be taken, say, whether a participant should consult a physician. If, for example, a study has measured blood levels of a specific pesticide, then returning the individual result and the range of values obtained for the other study participants will not indicate whether an individual is at risk of harm from exposure to that pesticide. Nor does it indicate whether the investigators know if the pesticide poses a health risk and, if so, at what dose. For instance, if an entire community has been exposed, an exposure level that is average compared to other community members may nonetheless represent a significant risk.

Because relative standards provide only limited and potentially misleading meaning, it is generally preferable to provide absolute reference information (just as absolute risk communication is generally preferred over relative risk communications), though the committee acknowledges that this will not always be possible ( Dunagan et al., 2013 ; Trevena et al., 2013 ). The absolute reference standard commonly provided with clinical test results is a standard or normal range, which in principle allows recipients to determine whether their results are normal when compared to the general population. 18 In practice, however, many people with lower literacy and numeracy skills have significant difficulty determining whether the result is inside or outside of a standard range ( Zikmund-Fisher et al., 2014 ). Furthermore, in many research contexts the substance being measured either should not normally be present or else normal ranges are unknown. The absolute meaning of continuous results can be communicated by binning results into easy-to-evaluate categories (e.g., high, moderate, low risk), noting whether a result falls within or outside of a target range; by mapping a result onto a dose–response curve; or by reporting whether the result falls above or below a harm, alert, or action threshold ( Peters et al., 2009 ). For the latter method, marking the individual result and the harm threshold on a visual display of the range can be an intuitive way to convey this information ( Zikmund-Fisher et al., 2018 ). Care should be taken, however, to ensure that important variations in meaning are not obscured by a categorization process and that people do not interpret below threshold results or those categorized as “low risk” to mean zero risk of harm.
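As a rough illustration of the categorization approach described above, the short Python sketch below maps a single continuous result onto an easy-to-evaluate category with a plain-language message. The analyte unit, the target and action thresholds, and the message wording are hypothetical placeholders, not established clinical or regulatory standards.

    # A minimal sketch of binning a continuous (quantitative) result against
    # reference information before returning it to a participant.
    # All threshold values and messages below are hypothetical placeholders.

    def categorize_result(value_ug_per_l: float,
                          target_upper: float = 5.0,
                          action_threshold: float = 10.0) -> str:
        """Map a measured level onto an easy-to-evaluate category."""
        if value_ug_per_l < target_upper:
            return ("Your level is within the target range. "
                    "A result in this range is not considered elevated.")
        if value_ug_per_l < action_threshold:
            return ("Your level is above the target range but below the action "
                    "threshold. A below-threshold result does not mean zero risk.")
        return ("Your level is at or above the action threshold. "
                "Please contact the study team to discuss next steps.")

    if __name__ == "__main__":
        for level in (3.2, 7.8, 12.5):
            print(f"{level} ug/L -> {categorize_result(level)}")

Note that the middle category deliberately states that a below-threshold result does not mean zero risk, reflecting the caution discussed above about categorization obscuring important variation in meaning.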

Another critical challenge that arises when communicating continuous results involves conveying the degree of imprecision in an estimate and the corresponding uncertainties related to interpretation. Test results that are presented as point estimates without measures of variability and reliability fail to convey the uncertainty of the results ( Pocock and Hughes, 1990 ). Therefore, people tend to assume that the value they receive from a test is both precise and accurate, 19 when in fact the true level may be higher or lower. The degree of uncertainty directly relates to the likelihood of misinterpretation of the meaning of the result. For example, if the value of a result is close to some reference value, people may overinterpret what is actually an unreliable difference because of the inherent error in the estimated value.

The limits of accuracy for point estimates can be communicated through confidence intervals, error bars, or standard errors. Even when such measures are provided, however, people often do not understand their meaning ( Dieckmann et al., 2012 ). People tend to interpret uncertainty in such a way as to be favorable to their preferences or worldviews—the so-called “motivated evaluation” ( Dieckmann et al., 2017 ). The use of plain language can help research participants better understand the limitations related to the validity of the test result and the implications in terms of whether the data should be relied on for decision making. For example, while many people may not be familiar with the term “95 percent confidence intervals,” the extent of uncertainty can be conveyed by discussing minimum and maximum levels or best and worst case scenarios (i.e., “the value might be as high as X or as low as Y”). However, including a description of capture probability (e.g., a 90 percent confidence interval) increases the likelihood that people interpret the distribution of values within that range as more normally distributed rather than uniformly distributed ( Dieckmann et al., 2015 ). Further research is clearly needed to determine optimal language for expressing value uncertainty in different situations.
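To make concrete the plain-language translation of uncertainty described above, the following Python sketch converts a hypothetical point estimate and standard error into a "might be as high as X or as low as Y" statement. The measurement unit and numbers are illustrative assumptions; the multiplier 1.96 corresponds to an approximate 95 percent confidence interval.

    # A minimal sketch of expressing a point estimate's imprecision in plain
    # language rather than as a formal confidence interval.
    # The estimate, standard error, and unit below are illustrative only.

    def plain_language_range(estimate: float, standard_error: float,
                             unit: str = "ng/mL") -> str:
        margin = 1.96 * standard_error  # approximate 95 percent interval
        low, high = estimate - margin, estimate + margin
        return (f"Your measured value is about {estimate:.1f} {unit}. "
                f"Because every test has some imprecision, your true value "
                f"might be as low as {low:.1f} or as high as {high:.1f} {unit}.")

    if __name__ == "__main__":
        print(plain_language_range(estimate=42.0, standard_error=3.5))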

Binary or Qualitative Results

Despite the seemingly simple nature of binary results (i.e., the characteristic is either present or not, and the test result is accurate or not), meaningful communication of this type of test result remains challenging. The prevalence of the characteristic or finding, either in a study population or an external reference population, can be reported with the result. Prevalence rates and pretest probability information are of high value in determining the likelihood that the test result represents a true-positive rather than a false-positive result, or a true-negative rather than a false-negative result. In many research circumstances, the prevalence of the target characteristic may be uncertain, as may be the sensitivity and specificity of the test, all of which are relevant to an estimate of positive and negative predictive values (as discussed previously in Chapter 3 ). Prior knowledge, or lack of knowledge, of prevalence and test sensitivity and specificity will be relevant to a decision about whether results should be returned and to what degree confirmatory testing is recommended.
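The arithmetic linking prevalence (pretest probability), sensitivity, and specificity to predictive values can be made explicit. The Python sketch below works through one hypothetical case; the input values are illustrative assumptions, not properties of any real test.

    # A minimal sketch of computing positive and negative predictive values
    # (PPV and NPV) from prevalence, sensitivity, and specificity.
    # The inputs used in the example are illustrative assumptions.

    def predictive_values(prevalence: float, sensitivity: float,
                          specificity: float) -> tuple[float, float]:
        true_pos = prevalence * sensitivity
        false_pos = (1 - prevalence) * (1 - specificity)
        false_neg = prevalence * (1 - sensitivity)
        true_neg = (1 - prevalence) * specificity
        ppv = true_pos / (true_pos + false_pos)
        npv = true_neg / (true_neg + false_neg)
        return ppv, npv

    if __name__ == "__main__":
        # A fairly accurate test applied where the finding is rare:
        ppv, npv = predictive_values(prevalence=0.01, sensitivity=0.95,
                                     specificity=0.95)
        print(f"PPV = {ppv:.1%}, NPV = {npv:.2%}")
        # Even with 95 percent sensitivity and specificity, a positive result
        # here is a true positive only about 16 percent of the time because
        # the finding is rare in the population tested.

This kind of calculation illustrates why prior knowledge, or lack of knowledge, of prevalence and test performance bears directly on whether confirmatory testing should be recommended alongside a returned binary result.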

In other cases, the question is not so much whether a result is accurate, but whether it is meaningful. An example would be a test that identifies a genetic variant. In such cases, prevalence rates have limited value in guiding recipient perceptions or actions ( Zikmund-Fisher, 2013 ), especially once repeat testing provides confirmation of a finding. For example, how common or uncommon a particular genetic variant is in the population generally should not affect what the individual might want to do about a valid and true result. Prevalence rates should not be used by recipients as a proxy for how serious a finding is or whether action is needed, since common characteristics may sometimes have limited risk impact and rare conditions can sometimes have enormous impact on an individual's risk. For binary results that are indicators of a disease (or other condition), penetrance information (i.e., information about the extent to which a particular gene is expressed in those carrying it) and relative risk statistics (i.e., information about the risk of the disease in people with the characteristic relative to the risk in those without the characteristic) are more useful than prevalence rates for helping recipients understand the meaning of their results. Furthermore, guidance documents for risk communication recommend communicating absolute risk reduction or risk increase whenever possible ( Trevena et al., 2013 ; Zikmund-Fisher, 2013 ).
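As a simple illustration of why absolute risk framing is preferred, the sketch below shows how the same relative risk can correspond to very different absolute changes in risk depending on the baseline. The baseline risks and relative risk used are hypothetical.

    # A minimal sketch contrasting relative and absolute risk framing.
    # Baseline risks and the relative risk below are hypothetical values.

    def risk_summary(baseline_risk: float, relative_risk: float) -> str:
        absolute_risk = baseline_risk * relative_risk
        increase = absolute_risk - baseline_risk
        return (f"People without this finding have about a {baseline_risk:.0%} "
                f"risk; people with it have about a {absolute_risk:.0%} risk "
                f"(an absolute increase of {increase:.0%}, or "
                f"{relative_risk:.1f} times the baseline risk).")

    if __name__ == "__main__":
        # "Twice the risk" can describe very different situations:
        print(risk_summary(baseline_risk=0.02, relative_risk=2.0))  # 2% -> 4%
        print(risk_summary(baseline_risk=0.20, relative_risk=2.0))  # 20% -> 40%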

The meaning of binary results is most clear when they are classified into a specific action category (e.g., someone with a particular biomarker should consider a specific intervention) or at least a risk category (e.g., labeling as normal), although care must be taken to avoid misinterpretation of such labels ( Marteau et al., 2001 ). However, classifying binary results into a specific action category is not always possible, particularly in the research context, both because disease is often multifactorial and because the scientific understanding of how binary risk factors (e.g., genetic markers) are associated with outcomes is often highly incomplete ( Coulehan, 1979 ). For example, it may be difficult to communicate to research participants how much or how little effect a particular genetic marker may have on the incidence or severity of a condition—and, accordingly, whether an intervention or other action is appropriate. In such cases, as discussed below, the areas of uncertainty should be explicitly communicated to the recipient.

With binary results, the primary concern when trying to communicate issues of reliability is false certainty—that is, people often fail to consider the chance that the finding is wrong. The idea that a test may result in false-positive or false-negative results can be hard to understand. Consequently, recipients are likely to act on the assumption that the result they have received is accurate ( Garcia-Retamero and Hoffrage, 2013 ; Kelman et al., 2016 ). Explicit statements that emphasize the potential for inaccuracies of all types (e.g., sample swaps, false positives, or false negatives) can help to offset this tendency, though their effectiveness is likely to be imperfect. Note that once a result is known, it is appropriate to communicate in plain language only the false-negative or the false-positive rate, whichever is relevant, since the other rate does not affect that particular participant and speaking about it is likely to add to confusion. However, concrete visual presentations of risk (e.g., icon array 20 displays) may be needed to support a participant's understanding of how likely it is that the returned binary result is in fact the opposite result ( Garcia-Retamero and Hoffrage, 2013 ; Trevena et al., 2013 ).
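As one possible concrete form of such a visual display, the sketch below prints a simple text-based icon array showing, out of 100 people with the same positive result, how many would be expected to be true positives versus false positives. The count used in the example is hypothetical and would in practice come from the test's estimated positive predictive value.

    # A minimal sketch of a text-based icon array for communicating the chance
    # that a positive binary result is a false positive.
    # The count of 16 true positives per 100 is a hypothetical example.

    def icon_array(true_positives_per_100: int, per_row: int = 10) -> str:
        icons = ["X"] * true_positives_per_100            # result is correct
        icons += ["o"] * (100 - true_positives_per_100)   # false positive
        rows = [" ".join(icons[i:i + per_row]) for i in range(0, 100, per_row)]
        legend = (f"X = the {true_positives_per_100} of 100 people whose positive "
                  f"result is correct; o = the {100 - true_positives_per_100} "
                  f"whose result is a false positive")
        return "\n".join(rows) + "\n" + legend

    if __name__ == "__main__":
        print(icon_array(true_positives_per_100=16))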

CONCLUSION: The meaning of a test result is determined by what the result is compared against. The ability of individual participants to understand and make use of research results depends on the provision of relevant reference information that clarifies what is known or unknown about the meaning of the specific result. For some individuals, a reference range alone may be of little help because of limited health literacy and numeracy.

CONCLUSION: The state of scientific knowledge about a particular test guides the types of reference information that are available and can be provided to research participants when returning individual research results. When the context for a test result is well established and standardized, there is a strong presumption that this reference information will be provided. When the context is unknown or uncertain, however, being clear about how little is known is essential to participant understanding.

Communicating Key Takeaways, Including the Actionability of Individual Research Results

When returning results to participants, a single, clear takeaway message is important. Being given information and not knowing whether or how it should be acted upon can be disconcerting and potentially emotionally harmful to participants ( Shani et al., 2008 ). Consistent with the ethical principles of beneficence and non-maleficence, research teams have some obligation to minimize and mitigate such potential harms. When results are being offered to participants, the most straightforward way of offering a single, clear takeaway message is to provide a concise statement of why the results are being returned and a clear summary of the meaning of the results based on the research team's current knowledge of the test performed at that point in time. Given that scientific knowledge is constantly evolving, especially in terms of understanding research results, investigators should clarify both the date when the message is being generated by the study team and how likely or unlikely it is that the interpretation of the result might change in the future. In addition, given the evidence discussed above of substantial language and literacy barriers to comprehension, the importance of providing action steps (if appropriate) clearly and in plain language cannot be overstated.

The takeaway message can vary depending on the state of knowledge regarding the test result and its implications. When the meaning is uncertain (i.e., the investigators do not know how to interpret the result), this uncertainty and the fact that no action can be recommended is the takeaway message. Such a clear message of no recommended action needs to be stated explicitly to prevent people from making inaccurate assumptions. In some cases, the meaning of the result may be known, but it has no implied action. An example of such a result would be the return of “normal” results from clinical testing that was conducted in the course of a research study. However, determining the appropriate takeaway message is not always so straightforward, such as when a genetic variant of unknown significance is identified in genetic testing. A communication with no recommended action can be particularly difficult because people may not believe that researchers would return a result but not want the participant to take any further action; there is also the issue of the potential “emotional burden, concern, or worry of knowing that there is nothing [the participant] could do about it” ( Hyams et al., 2016 , p. 5). Providing such information can have both positive effects (e.g., by drawing a participant's attention to a particular disease risk) and negative effects (e.g., inducing anxiety or motivation to pursue unnecessary screening tests). In other cases, the result may indicate the need for possible or even highly encouraged action.

Key information is optimally included at the beginning of the consent document and should contain a "concise and focused" description of the research, summarizing the project information that is most important to potential participants in making their decision whether to enroll in the study ( Federal Register , 2017 ). Similar methods (i.e., requiring concise and focused descriptions of the findings and their implications) should be applied in return-of-results communications.

When participants will need to carefully consider a potential action (e.g., because of trade-offs), the more that a communication can identify both why participants should consider actions and why they might not want to do so, the more useful the communication will be. In addition, if a result implies an action that is highly encouraged, acknowledging the potential barriers or challenges to undertaking these actions is beneficial by helping to frame realistic expectations and prepare participants to overcome those barriers, when appropriate.

Guiding principles for the design of return-of-results procedures parallel the best practices for consent procedures and support the importance of providing key takeaway messages. Best practices need not be developed at the level of the individual investigator alone. Changes in community, federal, or industrial practices may be needed to develop better guidance for how the research community needs to approach these situations. To deal with the fact that research participants often struggle to make sense of consent documents, the 2018 proposed revisions to the Common Rule mandate that consent documents provide a "key information" section at the beginning of the consent document that contains a "concise and focused" description of the research and summarizes the project information most important to potential subjects in making their decision whether to participate ( Federal Register , 2017 ). Similar remedies (i.e., requiring concise and focused descriptions of the findings and their implications) should be applied in the context of returning research results.

CONCLUSION: Individual research results need to be communicated with a clear takeaway message that includes a statement of actionability (or lack thereof).

Communicating Caveats and Uncertainties

Previous chapters discussed multiple reasons why research results often have substantial variance or potential for error, which limits interpretation and usability for an individual participant. Even after accounting for the quality of laboratory procedures, research results may vary in their level of certainty and potential to guide personal action. For example, a cholesterol level obtained in a research study is likely to provide a research participant with readily interpreted information about cardiac risk (assuming that appropriate laboratory quality measures were in place), while other research results may reflect evolving knowledge that has substantial uncertainty. For example, a study might discover an association between a biomarker and a particular health risk, with an unknown effect size and no information to guide actions to reduce risk.

Most research participants, however, are unlikely to think about these threats to validity and interpretability. Hence, research results are prone to misinterpretation (e.g., confusing a research result with an established clinical test result) or misuse. As a result, it may be necessary to include a formal caveat or warning statement in return-of-results communications. Depending on the context, such statements may address

  • uncertain standards,
  • uncertain interpretation,
  • an elevated potential for error in the result, and
  • the fact that the result may not be the participant's result (e.g., in the case of a sample swap or mislabeling).

For example, appropriate disclosure to the participant might include the caveats that the level of risk is still unknown and that no actions to reduce risk are known. Researchers might also include information about plans for future research to study these questions.

Investigators are not used to identifying the full list of threats to validity, uncertainties, and caveats that are applicable to their study. In fact, incentives in both the funding application process and the research publication process minimize attention to such threats. Consequently, investigators need both guidance (e.g., a list of key questions that should be asked) and incentives (e.g., explicit consideration in IRB review) to do this task. The Multi-Regional Clinical Trials Center toolkit includes a checklist to guide IRBs and other ethics committees in reviewing plans for the return of research results ( MRCT Center, 2017b ).

Because people tend to assume that any test results they receive are both precise and accurate, providing information that conveys the uncertainty of the result is critical, particularly since the potential for error increases in research contexts. Furthermore, given that understanding and adjusting for uncertainty is psychologically difficult, it is reasonable to believe that, on average, the potential for over-interpretation of results and under-consideration of uncertainties is likely to be greater in practice than the reverse. The committee is already advocating for the return of results in novel circumstances, including (under certain conditions) when reliability is lower than it is for clinical results. As a result, the committee believes it is prudent to err on the side of promoting recipient attention to caveats and uncertainties. An outcome in which participants feel a need to confirm important results before acting on them would be appropriate in many situations. When a significant risk of therapeutic misconception is possible, 21 a disclaimer distinguishing a research result from a clinical result is particularly critical.

Since clarity and concreteness are critical, caveats, cautions, or warnings that accompany the return of results need to be written in plain language. For example, many users will not understand or react to a statement that a test has “low validity.” Instead, statements should describe specific potential risks in simple terms, e.g., by making statements such as “Your result might be wrong,” “Your true results may be higher or lower than what is shown,” and “It is even possible that this result may not be yours.” Similarly, uncertainties about the meaning of the result could be stated as plainly as “We do not know what your results mean” and “We cannot recommend any actions for you to take.”

As caveats and warning statements are developed and used for the first time, they will need to be reviewed by the appropriate individuals (or groups) and tested for understanding and efficacy. Engagement with target populations is essential both for identifying which caveats are most critical to communicate and for determining the optimal methods for communication. Research has demonstrated that warnings can be used successfully to communicate benefits and risks, but only when they are specifically designed for the target audience ( Andrews, 2011 ). Work in the environmental exposure field can offer some useful models and templates to share. The Association of Public Health Laboratories and Biomonitoring California offer models for communicating environmental exposure information to participants ( Association of Public Health Laboratories, 2012 ; Biomonitoring California, 2018 ), and Biomonitoring California prototypes have undergone usability testing by Health Research for Action researchers ( Health Research for Action, 2011 ). Additionally, FDA has explored the issue of whether and how results should be provided directly to consumers many times, using advisory panels and workshops that have asked experts and lay users about their preferences, explored the risks of return, and developed mitigations for those risks ( FDA, 2010 , 2016 ).

Once effective warning statements are developed by investigators in a variety of research fields, the research community would benefit from sharing templates and examples to avoid duplicating effort while still allowing adaptation to a given need or context-specific communication.

CONCLUSION: Research participants may fail to understand the degree to which research results may have substantially greater uncertainties than clinical results. Little evidence exists to guide best practices for communicating warnings and qualifiers that address potential inaccuracies or potential variance in interpretation.

Identifying the Appropriate Communication Modality

Different types of communication may be appropriate in different contexts. The communication methods commonly used for returning results include

  • in-person discussion,
  • phone- or video-conference–based discussion,
  • electronic delivery (e.g., through secure portals, including those tethered to EHRs), and
  • mailing of printed materials.

Other reports have described a number of different factors that go into the selection of an appropriate communication method for returning individual research results ( Fitzpatrick-Lewis et al., 2010 ; MRCT Center, 2017a ), and the committee recommends that study teams use available guidance. For example, the Multi-Regional Clinical Trials collaborative has developed toolkits that support the return of individual as well as aggregate results and provide guidance for investigators, sponsors, and ethics review committees throughout the study life cycle from planning through study completion ( MRCT Center, 2017b ). As discussed above, ideally participants should be queried on their preferred communication method early in studies in which results are to be returned, and investigators should take participants' preferences into account. However, given that the potential cost, required infrastructure, and expertise will vary from study to study, the choice of how results will be communicated reflects a cost–benefit trade-off that needs to be evaluated for each study.

Delivering results in person maximizes the ability of the investigator to provide clarification, answer participant questions, and assess and address potential confusion or emotional reactions from participants. In some cases, resources are needed to support the inclusion of specialized expertise in the return, for example, when genetic counselors assist an investigator in returning results to participants. As a result, this return strategy is the most time- and resource-intensive. Wendy Chung estimated that returning results for a large study using a team of genetic counselors cost approximately $250 per participant. 22 Because of the time and resources required to plan for in-person return, this strategy is not well suited to scenarios where the results to be returned are time sensitive. Additionally, the return of results via a genetic counselor may lead some participants to decline to participate because of the time commitment of counseling sessions, as Chung and colleagues encountered ( Wynn, 2016 ).

The return of results via phone has many of the advantages of in-person return, including opportunities for clarification, participant questions, and addressing emotions, but it is less personal. This method can be carried out quickly if the return is time sensitive and the participant must be reached promptly. The costs associated with return by phone, like in-person delivery, remain high due to the time and expertise required.

Many patients are familiar with using electronic portals, which are commonly used for delivering clinical laboratory or other medical results ( Giardina et al., 2015 ). These portals can be used to provide documents detailing results to participants as well as to provide links to additional educational resources. In some instances, the research results could be tethered to an existing patient portal or EHR, such as in cases where a research participant is also a patient receiving clinical care within the institution. Although such portals typically feature a secure two-way e-mail communication option, there are a number of potential disadvantages, including the lack of opportunity for the synchronous communication of a phone or in-person return and the fact that portals are less likely to be used by racial and ethnic minority populations, rural populations, and those with limited health literacy or technology proficiency ( Sarkar et al., 2010 , 2011 ; Sharit et al., 2014 ). Furthermore, including research results in a patient's EHR may affect what is included in that patient's designated record set. Investigators in environmental health have tested other digital methods to return personalized results and engage participants in the research ( Boronow et al., 2017 ). Establishing and using a portal has some initial and maintenance costs, but it is more easily scalable than in-person delivery, with only a marginal cost for adding many participants.

The return of results by mail is most useful in scenarios where researchers are returning non-urgent, reference communications and may be particularly effective for accessing individuals in remote locations, like some tribal areas where telecommunications access is limited, unreliable, or unavailable. 23 While mail is an inexpensive method for return, communication by mail has a number of shortcomings, especially a lack of opportunity for dialogue and limitations in what can be communicated in a paper-based, visual format. Certified mail can be used to help prevent sensitive participant information from being received by someone other than the participant.

Data visualization is an effective tool for helping people understand their health data (see an example in Box 5-5 ), and many tools have been created to assist with the development of appropriate data visualizations. For example, the Data Viz Project by Ferdio is a website that organizes visualizations by function (e.g., comparison, part-to-whole, correlation) to make it easier to select the right visualization for a particular communication goal ( Data Viz Project, 2018 ). Resources also are available to help in choosing the most effective type of chart (e.g., the Extreme Presentation Method; see Abela, 2018 ). Developed by the Risk Science Center and the Center for Bioethics and Social Sciences in Medicine at the University of Michigan, Icon Array provides open-source icon arrays for communicating risk ( University of Michigan Risk Science Center, 2018 ). Electronic Infographics for Community Engagement, Education, and Empowerment (EnTICE3) is open-source software that allows a user to create tailored messages and visualization outputs that are responsive to overlapping participant characteristics such as language, age, and level of health literacy ( Arcia et al., 2015 ; Unertl et al., 2016 ). This software has been used during participatory design sessions to create a communication style guide tailored to inform and engage the target community. Such communications also can be used to stimulate health-motivating behaviors, for example, by offering comparisons to national rates of depression ( Bevans et al., 2014 ) or by providing dietary standards, associated risks, or recommendations for preventive action ( NASEM, 2017 ). Under the Precision in Symptom Self-Management Center at Columbia University, EnTICE3 is being expanded beyond its original use to support biomarker result reporting, including cytokines, ancestry informative markers, and genetic mutations. 24

BOX 5-5 Examples of Visualizations Relevant to Results Reporting.

As with any tool, visualization for returning research results must be well matched to the communication goal and data type ( Arcia et al., 2013 , 2018 ). No single visualization is ideal for all situations ( Torsvik et al., 2013 ). Visual simplicity is also valuable, as visual embellishments (e.g., three-dimensional charts) tend to inhibit user comprehension ( Tufte, 2001 ). A variety of authors have argued against three-dimensional graphs on both conceptual grounds (e.g., three-dimensional bars are more difficult to align visually with an axis to determine the level shown) and empirical grounds. In particular, while three-dimensional graphics may attract attention, they tend to perform worse on accuracy, which is perhaps the most critical dimension when returning research test results ( Fausset et al., 2008 ). Nor are more technologically advanced displays necessarily better: in at least some situations, interactive or animated data visualizations can be counterproductive, actually hurting an individual's ability to process the underlying data ( Torsvik et al., 2013 ; Trevena et al., 2013 ; Zikmund-Fisher et al., 2012 ). Additionally, the Health Level Seven standard for infobuttons supports context-aware retrieval ( Health Level Seven, 2018 ); it is increasingly used in clinical research and can be added to a variety of electronic communication methods (portal, designated website, e-mail, etc.) to link to additional context-specific explanatory content and resources, including those that are visual or are related to the participant's specific research result ( Cook et al., 2016 ; Torsvik et al., 2013 ; Trevena et al., 2013 ).
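For readers who want a concrete starting point, the sketch below draws a simple two-dimensional icon array of the kind described above, consistent with the guidance favoring visual simplicity over embellishment. It is illustrative only, assumes matplotlib is available, and is not the University of Michigan Icon Array tool; the risk figure shown is hypothetical.

```python
# Illustrative sketch: a simple 10 x 10 icon array for communicating a risk
# such as "8 out of 100 people are affected." Kept deliberately flat and
# two-dimensional, per the guidance above against 3-D embellishment.

import matplotlib.pyplot as plt

def icon_array(affected, total=100, cols=10, title=""):
    """Draw an icon array: filled squares = affected, open squares = unaffected."""
    rows = total // cols
    fig, ax = plt.subplots(figsize=(4, 4))
    for i in range(total):
        row, col = divmod(i, cols)
        ax.add_patch(plt.Rectangle(
            (col, rows - 1 - row), 0.9, 0.9,
            facecolor="#c0392b" if i < affected else "white",
            edgecolor="gray",
        ))
    ax.set_xlim(0, cols)
    ax.set_ylim(0, rows)
    ax.set_aspect("equal")
    ax.axis("off")
    ax.set_title(title or "{} out of {} people".format(affected, total))
    return fig

if __name__ == "__main__":
    icon_array(8, title="8 out of 100 people are affected")  # hypothetical risk
    plt.savefig("icon_array.png", dpi=150, bbox_inches="tight")
```

Whether such a display is appropriate for a given result would still depend on the communication goal, the data type, and the preferences of the target population.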

In many situations, a multimodal approach to returning individual results will be beneficial (e.g., delivering results via mail or electronic portal and then following up with a phone discussion or in-person meeting to offer participants a chance to ask questions and seek clarification). Consequently, health care standards that support the integration of additional sources of information into EHRs and tethered patient portals provide a foundation for multimodal approaches. Beyond infobuttons, a National Academy of Medicine Genomics and Precision Health Roundtable Action Collaborative, DIGITizE: Displaying and Integrating Genetic Information Through the EHR, has specified a set of standards including Fast Healthcare Interoperability Resources (FHIR), Substitutable Medical Applications and Reusable Technologies (SMART) on FHIR, SMART on FHIR Genomics, and Clinical Decision Support (CDS) Hooks (see Box 5-6 ).
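As a hedged illustration of how such standards could support portal-based return at scale, the sketch below packages a single research result as a FHIR (R4) Observation and posts it to a FHIR server. The server URL, participant identifier, result value, and LOINC code are assumptions made for demonstration; the note field is used here to carry the research-specific caveats discussed earlier in this chapter.

```python
# Illustrative sketch: sending one research result to an EHR-tethered portal
# as a FHIR R4 Observation. The endpoint, patient reference, LOINC code, and
# value are hypothetical; a real study would follow its own data standards.

import requests

FHIR_BASE = "https://example.org/fhir"  # hypothetical FHIR server

observation = {
    "resourceType": "Observation",
    "status": "final",
    "category": [{
        "coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/observation-category",
            "code": "laboratory",
        }]
    }],
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "2093-3",  # total cholesterol, shown only as an example
            "display": "Cholesterol [Mass/volume] in Serum or Plasma",
        }]
    },
    "subject": {"reference": "Patient/example-participant"},
    "effectiveDateTime": "2018-05-01",
    "valueQuantity": {
        "value": 187,
        "unit": "mg/dL",
        "system": "http://unitsofmeasure.org",
        "code": "mg/dL",
    },
    "note": [{
        "text": ("Research result generated in a research laboratory; it may be "
                 "less reliable than a clinical test. Please confirm with your "
                 "doctor before acting on it.")
    }],
}

response = requests.post(
    FHIR_BASE + "/Observation",
    json=observation,
    headers={"Content-Type": "application/fhir+json"},
)
print(response.status_code)
```

A SMART on FHIR app or a CDS Hooks service could then surface such an Observation, with its caveats, inside the participant's portal view.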

BOX 5-6 Standards That Support Integration of External Resources with Electronic Health Records and Patient Portals and Are of Relevance to the Return of Research Results.

CONCLUSION: Research results can be returned through a variety of communication methods that are matched to participants' needs and the context of the research results.

CONCLUSION: The appropriate use of visualizations can help achieve the communication goal for the return of research results.

CONCLUSION: Existing and emerging technical standards for the exchange of health data are available and relevant to support the return of research results at scale through electronic systems such as EHRs and secure portals.
Recommendation 10: Enable Understanding of Individual Research Results by Research Participants. Whenever individual research results are communicated to participants, investigators and institutions should facilitate understanding of both the meaning and the limitations of the results by

  A. ensuring that there is a clear takeaway message and necessary reference information to convey what is known and not known about both the meaning of the result and potential clinical implications;
  B. communicating effectively the level of uncertainty in the result validity;
  C. providing mechanisms for participants to obtain additional information and answers to questions when appropriate and feasible;
  D. providing guidance for follow-up actions/consultations when appropriate;
  E. aligning the communication approaches to the particular needs and preferences of the participants and the context of the study;
  F. providing a written summary of the results and other information communicated to participants for future reference by participants and investigators; and
  G. leveraging existing and emerging health information technologies to enable tailored, layered, and large-scale communications when appropriate.
DEVELOPING A LEARNING PROCESS TO IMPROVE THE RETURN OF RESEARCH RESULTS

The return of individual research results is a relatively new process for the research enterprise. To communicate effectively, the research community will need to develop a learning system in which processes for returning research results are continuously evaluated for benefits and harms in order to support the development of best practices over time. The committee notes that research to study the impact of returning individual research results is already under way, but more work will be required to generate best practices ( Genomes 2 People, 2018 ; Miller et al., 2008 ; MRCT Center, 2017a ; Wynn et al., 2017 ). As best practices are identified, systems for translating that knowledge into practice will be needed. Given that most investigators are not currently trained in communication and may not be able to contextualize the meaning of a result, training will be critical if the return of results is expected for research on human biospecimens. Communication is a skill that needs to be developed over time, and what matters is the communicator's ability to contextualize information and respond to participants' questions. In fact, the individual tasked with addressing participant expectations of return and communicating the results may not be the person with the most advanced expertise in the test itself (i.e., the principal investigator or someone on the research team) but rather may be a trained community member, a communication expert at an institution, or another individual adept in communication.

In developing training for current and future investigators, stakeholders will need to consider different methods of communication. Specifically, guidance is needed regarding what training should be expected for face-to-face interactions, phone interactions, or communication through patient portals, e-mail, or mail. Communicating the meaning of data in plain language will likely require different approaches, depending on the method used to communicate. Investigators will need assistance in determining which methods are most appropriate for their study.

These new communication tasks will, of course, have financial implications. The more context and interpretation a specific result requires (perhaps because of the potential harms associated with returning it), the higher the likely cost. To this end, future research into communicating results will need to address whether additional expertise should be included and factored into grant applications, under what circumstances face-to-face communication is needed and by whom, and which possible methods for return are appropriate for different types of research and groups of participants. As discussed in Chapter 3 , institutions may be able to assist research teams by developing the required infrastructure for the return of results, and this could include infrastructure that gives investigators access to core communication expertise. As the return of individual research results becomes more widely practiced, including research communication cores in institutional development grants may be considered; such cores would provide investigators with access to experts and a standardized mechanism for communication while avoiding the potential costs of study-by-study assessments.

CONCLUSION: Ensuring effective return of research results requires developing skills and expertise among research teams as well as access to the resources, training, and relevant expertise needed to achieve good quality communication outcomes.
Recommendation 11: Expand the Empirical Evidence Base Relevant to the Return of Individual Research Results. To expand the empirical evidence base relevant to the return of individual research results, sponsors and funding agencies should support additional research to better understand the benefits and harms of the return of results as well as participant needs, preferences, and values and to enable the development of best practices and guidance.

When it comes to funding empirical research on the return of individual research results, the National Institutes of Health (NIH) is the obvious, and likely primary, sponsor for such an endeavor. However, this should not be an NIH task alone. The return of research results will soon become part of the research enterprise; it is a global endeavor, and all sponsors of research using human biospecimens should put resources into addressing the needs of investigators and participants by funding empirical research on the practice. More unified guidance on the practice of return will help prevent dramatic variability in practice between institutions and aid IRBs in making informed decisions. Funding agencies have a responsibility to ensure that the processes for return are both feasible and implemented appropriately.

  • Abela A. Charts. 2018. [February 8, 2018]. https: ​//extremepresentation ​.com/design/7-charts .
  • ADAPTABLE Aspirin Study. Learn about the study. 2018. [February 8, 2018]. https: ​//adaptablepatient ​.com/en/prescreen/watch-thanks .
  • AHA (American Hospital Association). Individuals' ability to electronically access their hospital medical records, perform key tasks is growing. Washington, DC: American Hospital Association; 2016. [March 8, 2018]. https://www ​.aha.org/guidesreports ​/2016-07-14-individuals-ability-electronically-access-their-hospital-medical-records .
  • Aldoory L, Barrett Ryan KE, Rouhani AM. Informed consent and health literacy: Workshop summary. Washington, DC: The National Academies Press; 2014. Best practices and new models of health literacy for informed consent: Review of the impact of informed consent regulations on health-literate communications; pp. 119–174.
  • American Association for Clinical Chemistry. Reference ranges and what they mean. 2017. [March 8, 2018]. https: ​//labtestsonline ​.org/articles/laboratory-test-reference-ranges .
  • Andrews JC. Warnings and disclosures. In: Fischhoff B, Brewer NT, Downs JS, editors. Communicating risks and benefits: An evidence-based user's guide. Silver Spring, MD: Food and Drug Administration; 2011. pp. 149–162.
  • Appelbaum PS, Anatchkova M, Albert K, Dunn LB, Lidz CW. Therapeutic misconception in research subjects: Development and validation of a measure. Clinical Trials (London, England). 2012; 9 (6):748–761. [ PMC free article : PMC3690536 ] [ PubMed : 22942217 ]
  • Appelbaum PS, Parens E, Waldman CR, Klitzman R, Fyer A, Martinez J, Price WN, Chung WK. Models of consent to return of incidental findings in genomic research. The Hastings Center Report. 2014; 44 (4):22–32. [ PMC free article : PMC4107028 ] [ PubMed : 24919982 ]
  • Arcia A, Bales ME, Brown W 3rd, Co MC Jr., Gilmore M, Lee YJ, Park CS, Prey J, Velez M, Woollen J, Yoon S, Kukafka R, Merrill JA, Bakken S. Method for the development of data visualizations for community members with varying levels of health literacy. AMIA Symposium. 2013; 2013 :51–60. [ PMC free article : PMC3900122 ] [ PubMed : 24551322 ]
  • Arcia A, Velez M, Bakken S. Style guide: An interdisciplinary communication tool to support the process of generating tailored infographics from electronic health data using EnTICE 3 . eGEMs. 2015; 3 (1):1120. [ PMC free article : PMC4371489 ] [ PubMed : 25848634 ]
  • Arcia A, Suero-Tejeda N, Bales ME, Merrill JA, Yoon S, Woollen J, Bakken S. Sometimes more is more: Iterative participatory design of infographics for engagement of community members with varying levels of health literacy. Journal of the American Medical Informatics Association. 2016; 23 (1):174–183. [ PMC free article : PMC5009940 ] [ PubMed : 26174865 ]
  • Arcia A, Woollen J, Bakken S. A systematic method for exploring data attributes in preparation for designing tailored infographics of patient reported outcomes. eGEMs. 2018; 6 (1):1–9. [ PMC free article : PMC5983055 ] [ PubMed : 29881760 ]
  • Association of Public Health Laboratories. Guidance for laboratory biomonitoring. Silver Spring, MD: Association of Public Health Laboratories; 2012.
  • Baratloo A, Hosseini M, Negida A, El Ashal G. Part 1: Simple definition and calculation of accuracy, sensitivity and specificity. Emergency. 2015; 3 (2):48–49. [ PMC free article : PMC4614595 ] [ PubMed : 26495380 ]
  • Bevans M, Ross A, Cella D. Patient-Reported Outcomes Measurement Information System (PROMIS®): Efficient, standardized tools to measure self-reported health and quality of life. Nursing Outlook. 2014; 62 (5):339–345. [ PMC free article : PMC4179871 ] [ PubMed : 25015409 ]
  • Biesecker BB, Boehnke M, Calzone K, Markel DS, Garber JE, Collins FS, Weber BL. Genetic counseling for families with inherited susceptibility to breast and ovarian cancer. JAMA. 1993; 269 (15):1970–1974. [ PubMed : 8352830 ]
  • Biomonitoring California. Communicating results. 2018. [May 7, 2018]. https: ​//biomonitoring ​.ca.gov/results/communicating-results .
  • Boronow KE, Susmann HP, Gajos KZ, Rudel RA, Arnold KC, Brown P, Morello-Frosch R, Havas L, Brody JG. DERBI: A digital method to help researchers offer “right-to-know” personal exposure results. Environmental Health Perspectives. 2017; 125 (2):A27–A33. [ PMC free article : PMC5289917 ] [ PubMed : 28145870 ]
  • Bosl W, Mandel J, Jonikas M, Ramoni RB, Kohane IS, Mandl KD. Scalable decision support at the point of care: A substitutable electronic health record app for monitoring medication adherence. Journal of Medical Internet Research. 2013; 2 (2):e13. [ PMC free article : PMC3815431 ] [ PubMed : 23876796 ]
  • Boyd JC. Defining laboratory reference values and decision limits: Populations, intervals, and interpretations. Asian Journal of Andrology. 2010; 12 (1):83–90. [ PMC free article : PMC3739683 ] [ PubMed : 20111086 ]
  • Brega AG, Barnard J, Mabachi P, Mabachi NM, Weiss BD, DeWalt DA, Brach C, Cifuentes M, Albright K, West DR. AHRQ health literacy universal precautions toolkit. 2015. [May 23, 2018]. https://www ​.ahrq.gov ​/professionals/quality-patient-safety ​/quality-resources ​/tools ​/literacy-toolkit/index.html .
  • Brody JG, Morello-Frosch R, Brown P, Rudel RA, Altman RG, Frye M, Osimo CA, Pérez C, Seryak LM. “Is it safe?”: New ethics for reporting personal exposures to environmental chemicals. American Journal of Public Health. 2007; 97 (9):1547–1554. [ PMC free article : PMC1963285 ] [ PubMed : 17666695 ]
  • Brody JG, Dunagan SC, Morello-Frosch R, Brown P, Patton S, Rudel RA. Reporting individual results for biomonitoring and environmental exposures: Lessons learned from environmental communication case studies. Environmental Health. 2014; 13 (40) [ PMC free article : PMC4098947 ] [ PubMed : 24886515 ]
  • Brown-Williams H, Morello-Frosch R. “Biomonitoring literacy” in the MIEEP/Chemicals in Our Bodies Project: Developing report-back materials with input from study participants. 2011. [May 22, 2018]. https: ​//biomonitoring ​.ca.gov/sites/default ​/files/downloads/031611SGP ​_BiomonLiteracy.pdf .
  • Bunnik EM, Janssens ACJW, Schermer MHN. A tiered-layered-staged model for informed consent in personal genome testing. European Journal of Human Genetics. 2013; 21 (6):596–601. [ PMC free article : PMC3658183 ] [ PubMed : 23169494 ]
  • Carlson L. Research ethics and intellectual disability: Broadening the debates. Yale Journal of Biology and Medicine. 2013; 86 (3):303–314. [ PMC free article : PMC3767215 ] [ PubMed : 24058305 ]
  • Caswell-Jin JL, Gupta T, Hall E, Petrovchich IM, Mills MA, Kingham KE, Koff R, Chun NM, Levonian P, Lebensohn AP, Ford JM, Kurian AW. Racial/ethnic differences in multiple-gene sequencing results for hereditary cancer risk. Genetics in Medicine. 2017; 20 :234–239. [ PubMed : 28749474 ]
  • CDC (Centers for Disease Control and Prevention). Third national report on human exposure to environmental chemicals. Atlanta, GA: Centers for Disease Control and Prevention; 2005.
  • CDC. Everyday words for public health communication. 2016. [May 23, 2018]. https://www ​.cdc.gov/other ​/pdf/everyday-words-060216-final ​.pdf .
  • CDC. What do parents need to know to protect their children? 2018. [March 29, 2018]. https://www ​.cdc.gov/nceh ​/lead/acclpp/blood_lead_levels.htm .
  • Cook DA, Teixeira MT, Heale BSE, Cimino JJ, Del Fiol G. Context-sensitive decision support (infobuttons) in electronic health records: A systematic review. Journal of the American Medical Informatics Association. 2016; 24 (2):460–468. [ PMC free article : PMC6080678 ] [ PubMed : 27497794 ]
  • Corbie-Smith G, Thomas SB, St. George DM. Distrust, race, and research. Archives of Internal Medicine. 2002; 162 (21):2458–2463. [ PubMed : 12437405 ]
  • Coulehan JL. Multifactorial etiology of disease. JAMA. 1979; 242 (5):416. [ PubMed : 448956 ]
  • Danecek P, Auton A, Abecasis G, Albers CA, Banks E, DePristo MA, Handsaker RE, Lunter G, Marth GT, Sherry ST, McVean G, Durbin R., 1000 Genomes Project Analysis Group. The variant call format and VCFtools. Bioinformatics. 2011; 27 (15):2156–2158. [ PMC free article : PMC3137218 ] [ PubMed : 21653522 ]
  • Data Viz Project. About. 2018. [March 9, 2018]. http: ​//datavizproject.com/about .
  • Dieckmann NF, Gregory R, Peters E, Tusler M. Making sense of uncertainty: Advantages and disadvantages of providing an evaluative structure. Journal of Risk Research. 2012; 15 (7):717–735.
  • Dieckmann NF, Peters E, Gregory R. At home on the range? Lay interpretations of numerical uncertainty ranges. Risk Analysis. 2015; 35 (7):1281–1295. [ PubMed : 25808952 ]
  • Dieckmann NF, Gregory R, Peters E, Hartman R. Seeing what you want to see: How imprecise uncertainty ranges enhance motivated reasoning. Risk Analysis. 2017; 37 (3):471–486. [ PubMed : 27667776 ]
  • Doyle DL, Awwad RI, Austin JC, Baty BJ, Bergner AL, Brewster SJ, Erby LAH, Franklin CR, Greb AE, Grubs RE, Hooker GW, Noblin SJ, Ormond KE, Palmer CG, Petty EM, Singletary CN, Thomas MJ, Toriello H, Walton CS, Uhlmann WR. 2013 review and update of the genetic counseling practice based competencies by a task force of the Accreditation Council for Genetic Counseling. Journal of Genetic Counseling. 2016; 25 (5):868–879. [ PubMed : 27333894 ]
  • Dunagan SC, Brody JG, Morello-Frosch R, Brown P, Goho S, Tovar J, Patton S, Danford R. When pollution is personal: Handbook for reporting results to participants in biomonitoring and personal exposure studies. Newton, MA: Silent Spring Institute; 2013.
  • Exley K, Cano N, Aerts D, Biot P, Casteleyn L, Kolossa-Gehring M, Schwedler G, Castano A, Angerer J, Koch HM, Esteban M, Schoeters G, Den Hond E, Horvat M, Bloemen L, Knudsen LE, Joas R, Joas A, Dewolf MC, Van de Mieroop E, Katsonouri A, Hadjipanayis A, Cerna M, Krskova A, Becker K, Fiddicke U, Seiwert M, Morck TA, Rudnai P, Kozepesy S, Cullen E, Kellegher A, Gutleb AC, Fischer ME, Ligocka D, Kaminska J, Namorado S, Reis MF, Lupsa IR, Gurzau AE, Halzlova K, Jajcaj M, Mazej D, Tratnik JS, Huetos O, Lopez A, Berglund M, Larsson K, Sepai O. Communication in a human biomonitoring study: Focus group work, public engagement and lessons learnt in 17 European countries. Environmental Research. 2015; 141 :31–41. [ PubMed : 25499539 ]
  • Fagerlin A, Pignone M, Abhyankar P, Col N, Feldman-Stewart D, Gavaruzzi T, Kryworuchko J, Levin CA, Pieterse AH, Reyna V, Stiggelbout A, Scherer LD, Wills C, Witteman HO. Clarifying values: An updated review. BMC Medical Informatics and Decision Making. 2013; 13 (2):S8. [ PMC free article : PMC4044232 ] [ PubMed : 24625261 ]
  • Fast Healthcare Interoperability Resources. FHIR overview. 2017. [March 9, 2018]. https://www ​.hl7.org/fhir/overview.html .
  • Fausset CB, Rogers WA, Fisk AD. Visual graph display guidelines. Atlanta, GA: Georgia Institute of Technology; 2008.
  • FDA (Food and Drug Administration). Labeling: Regulatory requirements for medical devices. 1989. [May 23, 2018]. https://www ​.fda.gov/downloads ​/medicaldevices ​/deviceregulationandguidance ​/guidancedocuments ​/ucm095308.pdf . FDA 89-4203.
  • FDA. Guidance for 510(k)s on cholesterol tests for clinical laboratory, physicians' office laboratory and home use. 1995. [May 23, 2018]. https://www ​.fda.gov/RegulatoryInformation ​/Guidances/ucm094140.htm#toc_16 .
  • FDA. Guidance on medical device patient labeling: Final guidance for industry and FDA reviewers. 2001. [May 23, 2018]. https://www ​.fda.gov/downloads ​/MedicalDevices ​/DeviceRegulationandGuidance ​/GuidanceDocuments ​/ucm070801.pdf .
  • FDA. Guidance for industry: Presenting risk information in prescription drug and medical device promotion. 2009. [May 23, 2018]. https://www ​.fda.gov/downloads ​/drugs/guidances/ucm155480.pdf .
  • FDA. FDA/CDRH public meeting: Oversight of laboratory developed tests (LDTs), July 19-20, 2010. 2010. [May 21, 2018]. https://web ​.archive.org ​/web/20110101182031/http://www ​.fda.gov ​/MedicalDevices/NewsEvents ​/WorkshopsConferences/ucm212830 ​.htm .
  • FDA. Summary of safety and effectiveness: Oraquick in home HIV test. 2012. [May 23, 2018]. https://www ​.fda.gov/downloads ​/BiologicsBloodVaccines ​/BloodBloodProducts ​/ApprovedProducts ​/PremarketApprovalsPMAs ​/UCM312534.pdf .
  • FDA. Evaluation of automatic class III designation for the 23andme personal genome service carrier screening test for Bloom syndrome: Decision summary. 2015. [May 23, 2018]. https://www ​.accessdata ​.fda.gov/cdrh_docs/reviews/DEN140044 ​.pdf .
  • FDA. Public workshop—Patient and medical professional perspectives on the return of genetic test results, March 2, 2016. 2016. [May 21, 2018]. http://wayback ​.archive-it ​.org/7993/20171115050724 ​/https://www ​.fda.gov/MedicalDevices ​/NewsEvents/WorkshopsConferences ​/ucm478841.htm .
  • FDA. Evaluation of automatic class III designation for the 23andme personal genome service (PGS) genetic health risk report for BRCA1/BRCA2 (selected variants): Decision summary. 2017a. [May 23, 2018]. https://www ​.accessdata ​.fda.gov/cdrh_docs/reviews/DEN170046 ​.pdf .
  • FDA. Evaluation of automatic class III designation for the 23andme personal genome service carrier screening test for Bloom syndrome: Decision summary correction. 2017b. [May 23, 2018]. https://www ​.accessdata ​.fda.gov/cdrh_docs/reviews/DEN160026 ​.pdf .
  • FDA. Bringing an over-the-counter (OTC) drug to market: Label comprehension. 2018a. [May 21, 2018]. https://www ​.accessdata ​.fda.gov/scripts/cder ​/training/OTC/topic3 ​/topic3/da_01_03_0170.htm .
  • FDA. Device labeling: Introduction to medical device labeling. 2018b. [May 21, 2018]. https://www ​.fda.gov/MedicalDevices ​/DeviceRegulationandGuidance ​/Overview/DeviceLabeling/default.htm .
  • FDA. General controls for medical devices. 2018c. [May 21, 2018]. https://www ​.fda.gov/MedicalDevices ​/DeviceRegulationandGuidance ​/Overview/GeneralandSpecialControls ​/ucm055910.htm .
  • Federal Register. Federal policy for the protection of human subjects. 82 FR. Vol. 7149. 2017. pp. 7149–7274. [ PubMed : 28106360 ]
  • Fernandez CV, Ruccione K, Wells RJ, Long JB, Pelletier W, Hooke MC, Pentz RD, Noll RB, Baker JN, O'Leary M, Reaman G, Adamson PC, Joffe S. Recommendations for the return of research results to study participants and guardians: A report from the Children's Oncology Group. Journal of Clinical Oncology. 2012; 30 (36):4573–4579. [ PMC free article : PMC3518731 ] [ PubMed : 23109703 ]
  • Fitzpatrick-Lewis D, Yost J, Ciliska D, Krishnaratne S. Communication about environmental health risks: A systematic review. Environmental Health. 2010; 9 (1):67. [ PMC free article : PMC2988771 ] [ PubMed : 21040529 ]
  • Galesic M, Garcia-Retamero R, Gigerenzer G. Using icon arrays to communicate medical risks: Overcoming low numeracy. Health Psychology. 2009; 28 (2):210–216. [ PubMed : 19290713 ]
  • Garabrant DH, Franzblau A, Lepkowski J, Gillespie BW, Adriaens P, Demond A, Ward B, LaDronka K, Hedgeman E, Knutson K, Zwica L, Olson K, Towey T, Chen Q, Hong B. The University of Michigan Dioxin Exposure Study: Methods for an environmental exposure study of polychlorinated dioxins, furans, and biphenyls. Environmental Health Perspectives. 2009; 117 (5):803–810. [ PMC free article : PMC2685845 ] [ PubMed : 19479025 ]
  • Garcia-Retamero R, Hoffrage U. Visual representation of statistical information improves diagnostic inferences in doctors and their patients. SSM Social Science & Medicine. 2013; 83 :27–33. [ PubMed : 23465201 ]
  • Genomes 2 People. The BabySeq project. 2018. [March 29, 2018]. http://www ​.genomes2people ​.org/babyseqproject .
  • Giardina TD, Modi V, Parrish DE, Singh H. The patient portal and abnormal test results: An exploratory study of patient experiences. Patient Experience Journal. 2015; 2 (1):148–154. [ PMC free article : PMC5363705 ] [ PubMed : 28345018 ]
  • Golbeck AL, Ahlers-Schmidt CR, Paschal AM, Dismuke SE. A definition and operational framework for health numeracy. American Journal of Preventive Medicine. 2005; 29 (4):375–376. [ PubMed : 16242604 ]
  • Grady C, Cummings SR, Rowbotham MC, McConnell MV, Ashley EA, Kang G. Informed consent. New England Journal of Medicine. 2017; 376 (9):856–867. [ PubMed : 28249147 ]
  • Haga SB, Mills R, Pollak KI, Rehder C, Buchanan AH, Lipkus IM, Crow JH, Datto M. Developing patient-friendly genetic and genomic test reports: Formats to promote patient engagement and understanding. Genome Medicine. 2014; 6 (7):58. [ PMC free article : PMC4254435 ] [ PubMed : 25473429 ]
  • Haines DA, Arbuckle TE, Lye E, Legrand M, Fisher M, Langlois R, Fraser W. Reporting results of human biomonitoring of environmental chemicals to study participants: A comparison of approaches followed in two Canadian studies. Journal of Epidemiology and Community Health. 2011; 65 (3):191–198. [ PubMed : 20628082 ]
  • Hall DE, Prochazka AV, Fink AS. Informed consent for clinical treatment. Canadian Medical Association Journal. 2012; 184 (5):533–540. [ PMC free article : PMC3307558 ] [ PubMed : 22392947 ]
  • Han PKJ, Klein WMP, Arora NK. Varieties of uncertainty in health care. Medical Decision Making. 2011; 31 (6):828–838. [ PMC free article : PMC3146626 ] [ PubMed : 22067431 ]
  • Haynes EN, Elam S, Burns R, Spencer A, Yancey E, Kuhnell P, Alden J, Walton M, Reynolds V, Newman N, Wright RO, Parsons PJ, Praamsma ML, Palmer CD, Dietrich KN. Community engagement and data disclosure in environmental health research. Environmental Health Perspectives. 2016; 124 (2):A24–A27. [ PMC free article : PMC4749085 ] [ PubMed : 26829152 ]
  • Health Level Seven. Hl7 version 3 standard: Context aware knowledge retrieval application (“infobutton”), knowledge request, release 2. 2018. [February 8, 2018]. http://www ​.hl7.org/implement ​/standards/product_brief ​.cfm?product_id=208 .
  • Health Research for Action. Biomonitoring communications. 2011. [May 21, 2018]. http: ​//healthresearchforaction ​.org/biomonitoring-communications .
  • Heaney C, Tindall G, Lucas J, Haga SB. Researcher practices on returning genetic research results. Genetic Testing & Molecular Biomarkers. 2010; 14 (6):821–827. [ PMC free article : PMC3001830 ] [ PubMed : 20939736 ]
  • Heath C, Heath D. Made to stick: Why some ideas survive and others die. New York: Random House; 2007.
  • Hernick AD, Kathryn Brown M, Pinney SM, Biro FM, Ball KM, Bornschein RL. Sharing unexpected biomarker results with study participants. Environmental Health Perspectives. 2011; 119 (1):1–5. [ PMC free article : PMC3018486 ] [ PubMed : 20876037 ]
  • Hibbard JH, Peters E. Supporting informed consumer health care decisions: Data presentation approaches that facilitate the use of information in choice. Annual Review of Public Health. 2003; 24 (1):413–433. [ PubMed : 12428034 ]
  • HL7 and Boston Children's Hospital. Overview. 2018. [March 9, 2018]. http://cds-hooks ​.org .
  • Holland AT, Palaniappan LP. Problems with the collection and interpretation of Asian-American health data: Omission, aggregation, and extrapolation. Annals of Epidemiology. 2012; 22 (6):397–405. [ PMC free article : PMC4324759 ] [ PubMed : 22625997 ]
  • Witteman HO, Scherer LD, Gavaruzzi T, Pieterse AH, Fuhrel-Forbis A, Chipenda Dansokho S, Exe N, Kahn VC, Feldman-Stewart D, Col NF, Turgeon AF, Fagerlin A. Design features of explicit values clarification methods: A systematic review. Medical Decision Making. 2016; 36 (4):453–471. [ PubMed : 26826032 ]
  • Hsee CK. The evaluability hypothesis: An explanation for preference reversals between joint and separate evaluations of alternatives. Organizational Behavior and Human Decision Processes. 1996; 67 (3):247–257.
  • Hyams T, Bowen D, Condit C, Grossman J, Fitzmaurice M, Goodman D, Wenzel L, Edwards KL. Views of cohort study participants about returning research results in the context of precision medicine. Public Health Genomics. 2016; 19 (5):269–275. [ PMC free article : PMC5053808 ] [ PubMed : 27553645 ]
  • IOM (Institute of Medicine). Health literacy: A prescription to end confusion. Washington, DC: The National Academies Press; 2004. [ PubMed : 25009856 ]
  • IOM. Health literacy and numeracy: Workshop summary. Washington, DC: The National Academies Press; 2014. [ PubMed : 25077183 ]
  • IOM. Informed consent and health literacy: Workshop summary. Washington, DC: The National Academies Press; 2015.
  • Johns AL, Miller DK, Simpson SH, Gill AJ, Kassahn KS, Humphris JL, Samra JS, Tucker K, Andrews L, Chang DK, Waddell N, Pajic M, Pearson JV, Grimmond SM, Biankin AV, Zeps N, Martyn-Smith M, Tang H, Papangelis V, Beilin M. Returning individual research results for genome sequences of pancreatic cancer. Genome Medicine. 2014; 6 (5):42. [ PMC free article : PMC4067993 ] [ PubMed : 24963353 ]
  • Joint Commission. “What did the doctor say?”: Improving health literacy to protect patient safety. Oakbrook Terrace, IL: Joint Commission; 2007.
  • Judge JM, Brown P, Brody JG, Ryan S. The exposure experience: Ohio River Valley residents respond to local perfluorooctanoic acid (PFOA) contamination. Journal of Health and Social Behavior. 2016; 57 (3):333–350. [ PubMed : 27601409 ]
  • Kelman A, Robinson CO, Cochin E, Ahluwalia NJ, Braverman J, Chiauzzi E, Simacek K. Communicating laboratory test results for rheumatoid factor: What do patients and physicians want? Patient Preference and Adherence. 2016; 10 :2501–2517. [ PMC free article : PMC5171200 ] [ PubMed : 28008236 ]
  • Kim NS, Johnson SGB, Ahn WK, Knobe J. The effect of abstract versus concrete framing on judgments of biological and psychological bases of behavior. Cognitive Research. 2017; 2 (1):17. [ PMC free article : PMC5357666 ] [ PubMed : 28367497 ]
  • Kosslyn SM. Graph design for the eye and mind. New York: Oxford University Press; 2006.
  • Lab Tests Online-AU. Accuracy, precision, specificity & sensitivity. 2018. [March 29, 2018]. https://www ​.labtestsonline ​.org.au/understanding ​/test-accuracy-and-reliability ​/how-reliable-is-pathology-testing .
  • Larson EL, Cohn EG, Meyer DD, Boden-Albala B. Consent administrator training to reduce disparities in research participation. Journal of Nursing Scholarship. 2009; 41 (1):95–103. [ PubMed : 19335683 ]
  • Mandl KD, Mandel JC, Murphy SN, Bernstam EV, Ramoni RL, Kreda DA, Michael McCoy J, Adida B, Kohane IS. The SMART platform: Early experience enabling substitutable applications for electronic health records. Journal of the American Medical Informatics Association. 2012; 19 (4):597–603. [ PMC free article : PMC3384120 ] [ PubMed : 22427539 ]
  • Marteau TM, Senior V, Sasieni P. Women's understanding of a “normal smear test result”: Experimental questionnaire based study. BMJ (Clinical research ed.). 2001; 322 (7285):526–528. [ PMC free article : PMC26558 ] [ PubMed : 11230068 ]
  • Medscape. Laboratory reference ranges in healthy adults. 2014. [March 8, 2018]. https://emedicine ​.medscape ​.com/article/2172316-overview .
  • Miller CE, Krautscheid P, Baldwin EE, Tvrdik T, Openshaw AS, Hart K, LaGrave D. Genetic counselor review of genetic test orders in a reference laboratory reduces unnecessary testing. American Journal of Medical Genetics Part A. 2014; 164 (5):1094–1101. [ PubMed : 24665052 ]
  • Miller FA, Giacomini M, Ahern C, Robert JS, de Laat S. When research seems like clinical care: A qualitative study of the communication of individual cancer genetic research results. BMC Medical Ethics. 2008; 9 :4. [ PMC free article : PMC2267198 ] [ PubMed : 18294373 ]
  • Morello-Frosch R, Brody JG, Brown P, Altman RG, Rudel RA, Pérez C. Toxic ignorance and right-to-know in biomonitoring results communication: A survey of scientists and study participants. Environmental Health: A Global Access Science Source. 2009; 8 (1):6. [ PMC free article : PMC2654440 ] [ PubMed : 19250551 ]
  • MRCT (Multi-Regional Clinical Trials) Center. Return of individual results to participants recommendations document. Boston, MA: MRCT Center; 2017a.
  • MRCT Center. Return of individual results to participants toolkit. Boston, MA: MRCT Center; 2017b.
  • NASEM (National Academies of Sciences, Engineering, and Medicine). The challenge of treating obesity and overweight: Proceedings of a workshop. Washington, DC: The National Academies Press; 2017. [ PubMed : 29341559 ]
  • National Center for Education Statistics. The health literacy of America's adults: Results from the 2003 National Assessment of Adult Literacy. Washington, DC: U.S. Department of Education, National Center for Education Statistics; 2006.
  • National Library of Medicine. Help me understand genetics. 2018. [March 29, 2018]. https://ghr ​.nlm.nih.gov/primer .
  • Nature.com. Genetic linkage study. 2018. [March 29, 2018]. https://www ​.nature.com ​/subjects/genetic-linkage-study .
  • Nelson DE, Hesse BW, Croyle RT. Making data talk: Communicating public health data to the public, policy makers, and the press. New York: Oxford University Press; 2009.
  • NHGRI (National Human Genome Research Institute). Special considerations for genome research. 2018. [February 8, 2018]. https://www ​.genome.gov ​/27559024/informed-consent-special-considerations-for-genome-research .
  • NPR. Why do we blindly sign terms of service agreements? 2014. [March 29, 2018]. https://www.npr.org/2014/09/01/345044359/why-do-we-blindly-sign-terms-of-service-agreements .
  • NRC (National Research Council). Human biomonitoring for environmental chemicals. Washington, DC: The National Academies Press; 2006.
  • Nusbaum L, Douglas B, Damus K, Paasche-Orlow M, Estrella-Luna N. Communicating risks and benefits in informed consent for research: A qualitative study. Global Qualitative Nursing Research. 2017; 4 [ PMC free article : PMC5613795 ] [ PubMed : 28975139 ]
  • O'Connor A. The New York Times. Jun 6, 2016. Direct-to-consumer lab tests, no doctor visit required.
  • O'Kane M, Freedman D, Zikmund-Fisher BJ. Can patients use test results effectively if they have direct access? British Medical Journal. 2015; 350 (h673) [ PubMed : 25673132 ]
  • Ostergren JE, Gornick MC, Carere DA, Kalia SS, Uhlmann WR, Ruffin MT, Mountain JL, Green RC, Roberts JS. How well do customers of direct-to-consumer personal genomic testing services comprehend genetic test results? Findings from the impact of personal genomics study. Public Health Genomics. 2015; 18 (4):216–224. [ PMC free article : PMC4926310 ] [ PubMed : 26087778 ]
  • Parker MB, Bakken S, Wolf MS. Getting it right with the Precision Medicine Initiative: The role of health literacy. NAM Perspectives. 2016 [May 23, 2018]; http://nam ​.edu/wp-content ​/uploads/2016/02 ​/Getting-it-Right-with-the-Precision-Medicine-Initiative-the-Role-of-Health-Literacy.pdf .
  • Patch C, Middleton A. Genetic counselling in the era of genomic medicine. British Medical Bulletin. 2018 April 2; Epub ahead of print. [ PMC free article : PMC5998955 ] [ PubMed : 29617718 ]
  • Paul A, Paul S. The breast cancer susceptibility genes (BRCA) in breast and ovarian cancers. Frontiers in Bioscience (Landmark edition). 2014; 19 :605–618. [ PMC free article : PMC4307936 ] [ PubMed : 24389207 ]
  • Perzynski AT, Terchek JJ, Blixen CE, Dawson NV. Playing the numbers: How hepatitis C patients create meaning and make healthcare decisions from medical test results. Sociology of Health & Illness. 2013; 35 (4):610–627. [ PubMed : 23009649 ]
  • Peters E, Dieckmann NF, Västfjäll D, Mertz CK, Slovic P, Hibbard JH. Bringing meaning to numbers: The impact of evaluative categories on decisions. Journal of Experimental Psychology. 2009; 15 (3):213–227. [ PubMed : 19751072 ]
  • Plain Language Action and Information Network. Plainlanguage.gov. 2018. [March 29, 2018]. https: ​//plainlanguage.gov .
  • Pocock SJ, Hughes MD. Estimation issues in clinical trials and overviews. Statistics in Medicine. 1990; 9 (6):657–671. [ PubMed : 2145623 ]
  • Quandt SA, Doran AM, Rao P, Hoppin JA, Snively BM, Arcury TA. Reporting pesticide assessment results to farmworker families: Development, implementation, and evaluation of a risk communication strategy. Environmental Health Perspectives. 2004; 112 (5):636–642. [ PMC free article : PMC1241934 ] [ PubMed : 15064174 ]
  • Quigley D. Applying bioethical principles to place-based communities and cultural group protections: The case of biomonitoring results communication. The Journal of Law, Medicine & Ethics. 2012; 40 (2):348–358. [ PubMed : 22789050 ]
  • ResearchKit. Obtaining consent. 2017. [February 8, 2018]. http://researchkit.org/docs/docs/InformedConsent/Informed-Consent.html .
  • Roberts JS, Ostergren J. Direct-to-consumer genetic testing and personal genomics services: A review of recent empirical studies. Current Genetic Medicine Reports. 2013; 1 (3):182–200. [ PMC free article : PMC3777821 ] [ PubMed : 24058877 ]
  • Rodríguez V, Andrade AD, García-Retamero R, Anam R, Rodríguez R, Lisigurski M, Sharit J, Ruiz J. Health literacy, numeracy, and graphical literacy among veterans in primary care and their effect on shared decision making and trust in physicians. Journal of Health Communications. 2013; 18 (Suppl 1):273–289. [ PMC free article : PMC3815195 ] [ PubMed : 24093361 ]
  • Sarkar U, Karter AJ, Liu JY, Adler NE, Nguyen R, Lopez A, Schillinger D. The literacy divide: Health literacy and the use of an Internet-based patient portal in an integrated health system-results from the Diabetes Study of Northern California (DISTANCE). Journal of Health Communication. 2010; 15 :183–196. [ PMC free article : PMC3014858 ] [ PubMed : 20845203 ]
  • Sarkar U, Karter AJ, Liu JY, Adler NE, Nguyen R, López A, Schillinger D. Social disparities in internet patient portal use in diabetes: Evidence that the digital divide extends beyond access. Journal of the American Medical Informatics Association. 2011; 18 (3):318–321. [ PMC free article : PMC3078675 ] [ PubMed : 21262921 ]
  • Saulsberry K, Terry SF. The need to build trust: A perspective on disparities in genetic testing. Genetic Testing and Molecular Biomarkers. 2013; 17 (9):647–648. [ PMC free article : PMC3761437 ] [ PubMed : 24000888 ]
  • Schiavo R. Health communication: From theory to practice. 2nd ed. San Francisco, CA: John Wiley & Sons, Inc.; 2014.
  • Shaffer VA, Owens J, Zikmund-Fisher BJ. The effect of patient narratives on information search in a Web-based breast cancer decision aid: An eye-tracking study. Journal of Medical Internet Research. 2013; 15 (12):e273. [ PMC free article : PMC3875892 ] [ PubMed : 24345424 ]
  • Shani Y, Tykocinski OE, Zeelenberg M. When ignorance is not bliss: How feelings of discomfort promote the search for negative information. Journal of Economic Psychology. 2008; 29 (5):643–653.
  • Sharit J, Lisigurski M, Andrade AD, Karanam C, Nazi KM, Lewis JR, Ruiz JG. The roles of health literacy, numeracy, and graph literacy on the usability of the VA's personal health record by veterans. Journal of Usability Studies. 2014; 9 (4):173–193.
  • Simon CM, Williams JK, Shinkunas L, Brandt D, Daack-Hirsch S, Driessnack M. Informed consent and genomic incidental findings: IRB chair perspectives. Journal of Empirical Research on Human Research Ethics. 2011; 6 (4):53–67. [ PMC free article : PMC3616513 ] [ PubMed : 22228060 ]
  • Tarrant C, Jackson C, Dixon-Woods M, McNicol S, Kenyon S, Armstrong N. Consent revisited: The impact of return of results on participants' views and expectations about trial participation. Health Expectations. 2015; 18 (6):2042–2053. [ PMC free article : PMC4737222 ] [ PubMed : 25929296 ]
  • Torsvik T, Lillebo B, Mikkelsen G. Presentation of clinical laboratory results: An experimental comparison of four visualization techniques. Journal of the American Medical Informatics Association. 2013; 20 (2):325–331. [ PMC free article : PMC3638193 ] [ PubMed : 23043123 ]
  • Trevena LJ, Zikmund-Fisher BJ, Edwards A, Gaissmaier W, Galesic M, Han PKJ, King J, Lawson ML, Linder SK, Lipkus I, Ozanne E, Peters E, Timmermans D, Woloshin S. Presenting quantitative information about decision outcomes: A risk communication primer for patient decision aid developers. BMC Medical Informatics and Decision Making. 2013; 13 (Suppl 2):S7. [ PMC free article : PMC4045391 ] [ PubMed : 24625237 ]
  • Tufte ER. The visual display of quantitative information. Cheshire, CT: Graphics Press; 2001.
  • Unertl KM, Schaefbauer CL, Campbell TR, Senteio C, Siek KA, Bakken S, Veinot TC. Integrating community-based participatory research and informatics approaches to improve the engagement and health of underserved populations. Journal of the American Medical Informatics Association. 2016; 23 (1):60–73. [ PMC free article : PMC4713901 ] [ PubMed : 26228766 ]
  • University of Michigan Center for Statistical Genetics. BAM. 2013. [March 29, 2018]. https://genome ​.sph.umich.edu/wiki/BAM .
  • University of Michigan Risk Science Center. Icon array. 2018. [February 8, 2018]. http://www ​.iconarray.com .
  • Warren NS. Introduction to the special issue: Toward diversity and cultural competence in genetic counseling. Journal of Genetic Counseling. 2011; 20 (6):543–546. [ PubMed : 21870209 ]
  • Weil J. Multicultural education and genetic counseling. Clinical Genetics. 2001; 59 (3):143–149. [ PubMed : 11260220 ]
  • Welch BM, Marshall E, Qanungo S, Aziz A, Laken M, Lenert L, Obeid J. Teleconsent: A novel approach to obtain informed consent for research. Contemporary Clinical Trials Communications. 2016; 3 :74–79. [ PMC free article : PMC5096381 ] [ PubMed : 27822565 ]
  • WHO (World Health Organization). Arsenic. 2017. [March 29, 2018]. http://www ​.who.int/mediacentre ​/factsheets/fs372/en .
  • Wynn J. Genomic testing: A genetic counselor's personal reflection on three years of consenting and testing. Journal of Genetic Counseling. 2016; 25 (4):691–697. [ PMC free article : PMC4744148 ] [ PubMed : 26242468 ]
  • Wynn J, Martinez J, Bulafka J, Duong J, Zhang Y, Chiuzan C, Preti J, Cremona ML, Jobanputra V, Fyer AJ, Klitzman RL, Appelbaum PS, Chung WK. Impact of receiving secondary results from genomic research: A 12-month longitudinal study. Journal of Genetic Counseling. 2017; 27 (3):709–722. [ PMC free article : PMC5945295 ] [ PubMed : 29168042 ]
  • Yancey AK, Ortega AN, Kumanyika SK. Effective recruitment and retention of minority research participants. Annual Review of Public Health. 2006; 27 (1):1–28. [ PubMed : 16533107 ]
  • Zikmund-Fisher BJ. The right tool is what they need, not what we have: A taxonomy of appropriate levels of precision in patient risk communication. Medical Care Research and Review. 2013; 70 (1 Suppl):37S–49S. [ PubMed : 22955699 ]
  • Zikmund-Fisher BJ. When “actionable” genomic sequencing results cannot be acted upon. JAMA Oncology. 2017; 3 (7):891–892. [ PMC free article : PMC6200406 ] [ PubMed : 27657856 ]
  • Zikmund-Fisher BJ, Fagerlin A, Ubel PA. “Is 28% good or bad?” Evaluability and preference reversals in health care decisions. Medical Decision Making. 2004; 24 (2):142–148. [ PubMed : 15090100 ]
  • Zikmund-Fisher BJ, Witteman HO, Fuhrel-Forbis A, Exe NL, Kahn VC, Dickson M. Animated graphics for comparing two risks: A cautionary tale. Journal of Medical Internet Research. 2012; 14 (4):e106. [ PMC free article : PMC3409597 ] [ PubMed : 22832208 ]
  • Zikmund-Fisher BJ, Exe NL, Witteman HO. Numeracy and literacy independently predict patients' ability to identify out-of-range test results. Journal of Medical Internet Research. 2014; 16 (8):e187. [ PMC free article : PMC4137189 ] [ PubMed : 25135688 ]
  • Zikmund-Fisher BJ, Scherer AM, Solomon JB, Exe NL, Scherer AM, Witteman HO, Tarini BA, Fagerlin A. Graphics help patients distinguish between urgent and non-urgent deviations in laboratory test results. Journal of the American Medical Informatics Association. 2017; 24 (3):520–528. [ PMC free article : PMC5565988 ] [ PubMed : 28040686 ]
  • Zikmund-Fisher BJ, Scherer AM, Witteman HO, Solomon JB, Exe NL, Fagerlin A. Effect of harm anchors in visual displays of test results on patient perceptions of urgency about near-normal values: Experimental study. Journal of Medical Internet Research. 2018; 20 (3):e98. [ PMC free article : PMC5891666 ] [ PubMed : 29581088 ]

Health literacy is “the degree to which individuals have the capacity to obtain, process, and understand basic health information and services needed to make appropriate health decisions” ( IOM, 2004 , p. 20).

Health numeracy is “the degree to which individuals have the capacity to access, process, interpret, communicate, and act on numerical, quantitative, graphical, biostatistical, and probabilistic health information needed to make effective health decisions” ( Golbeck et al., 2005 , p. 375).

“A reference range is a set of values that includes upper and lower limits of a lab test based on a group of otherwise healthy people. The values in between those limits may depend on such factors as age, sex, and specimen type (blood, urine, spinal fluid, etc.) and can also be influenced by circumstantial situations such as fasting and exercise. These intervals are thought of as normal ranges or limits” ( American Association for Clinical Chemistry, 2017 ).

“Genetic linkage study: A genetic linkage study is a family-based method used to map a trait to a genomic location by demonstrating co-segregation of the disease with genetic markers of known chromosomal location; locations identified are more likely to contain a causal genetic variant. This technique is particularly useful for the identification of genes that are inherited in a Mendelian fashion” ( Nature.com, 2018 ).

Testimony of Joanne Murabito of the Framingham Heart Study at the public meeting of the Committee on the Return of Individual-Specific Research Results Generated in Research Laboratories on September 6, 2017.

Testimony of Adam Buchanan of Geisinger Health System at the public meeting of the Committee on the Return of Individual-Specific Research Results Generated in Research Laboratories on September 6, 2017.

Testimony of Wendy Chung of Columbia University at the public meeting of the Committee on the Return of Individual-Specific Research Results Generated in Research Laboratories on September 6, 2017.

The BAM format is a binary format for storing sequence data ( University of Michigan Center for Statistical Genetics, 2013 ).

The variant call format (VCF) is a generic format for storing DNA polymorphism data such as single nucleotide polymorphisms, insertions, deletions, and structural variants, together with rich annotations ( Danecek et al., 2011 , p. 2156).

Testimony of Jessica B. Langbaum of the Banner Alzheimer's Institute at the public meeting of the Committee on the Return of Individual-Specific Research Results Generated in Research Laboratories on September 6, 2017.

Biomonitoring is “the assessment of human exposure to chemicals by measuring the chemicals or their metabolites in human specimens such as blood or urine” ( CDC, 2005 , p. 1).

Testimony of Nicholas Newman of the University of Cincinnati at the public session of the Committee on the Return of Individual-Specific Research Results Generated in Research Laboratories on October 24, 2017.

As discussed in Chapter 3 , what research results will be offered will depend on the analytical and clinical validity, the value to the participant, and how feasible it is for the investigators to return the results. These considerations will be weighed in determining what to return and the timing for return. Timing will be especially relevant in longitudinal studies or trials where information may need to be withheld to support study design objectives. Additionally, if blinding is required in a clinical trial, results may not be able to be returned as they are generated because it may jeopardize the scientific integrity of the study ( MRCT Center, 2017a ).

False positive is when an individual is incorrectly identified as having a disease or condition ( Baratloo et al., 2015 ).

False negative is when an individual is incorrectly identified as healthy and not having the disease or condition ( Baratloo et al., 2015 ).

“Typically, reference values or reference intervals are established for each laboratory test to delineate the range of values that would usually be encountered in a ‘healthy' population” ( Boyd, 2010 , p. 84).

“A test method is said to be accurate when it measures what it is supposed to measure. This means it is able to measure the true amount or concentration of a substance in a sample. . . . A test method is said to be precise when repeated determinations (analyses) on the same sample give similar results. When a test method is precise, the amount of random variation is small. The test method can be trusted because results are reliably reproduced time after time. . . . A test method can be precise (reliably reproducible in what it measures) without being accurate (actually measuring what it is supposed to measure), or vice versa” ( Lab Tests Online–AU, 2018 ).
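To make the accuracy/precision distinction concrete, here is a small, purely illustrative sketch (the numbers are hypothetical and not drawn from any cited source), assuming Python with NumPy:

```python
import numpy as np

# Purely illustrative: repeated measurements of a sample whose true
# concentration is taken to be 5.0 mmol/L.
true_value = 5.0
precise_but_inaccurate = np.array([6.21, 6.19, 6.20, 6.22, 6.18])  # tight spread, wrong centre
accurate_but_imprecise = np.array([4.1, 5.9, 5.2, 4.6, 5.4])       # scattered, centred on truth

for name, m in [("precise but inaccurate", precise_but_inaccurate),
                ("accurate but imprecise", accurate_but_imprecise)]:
    bias = m.mean() - true_value   # accuracy: how far the average sits from the true value
    spread = m.std(ddof=1)         # precision: random variation between repeat measurements
    print(f"{name}: bias = {bias:+.2f}, spread (SD) = {spread:.2f}")
```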

“Icon arrays are graphical representations consisting of a number of stick figures, faces, circles, or other icons symbolizing individuals who are affected by some risk” ( Galesic et al., 2009 ).
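As a purely illustrative sketch of the idea (the risk figure below is hypothetical, not from the cited source), a text-based icon array for an 8-in-100 risk can be rendered in a few lines of Python:

```python
# Render a 10x10 icon array: "x" marks individuals affected by the risk, "o" those unaffected.
affected, total = 8, 100
icons = ["x" if i < affected else "o" for i in range(total)]
for row in range(10):
    print(" ".join(icons[row * 10:(row + 1) * 10]))
```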

“Therapeutic misconception (TM) was first described in the 1980s, when it was noticed that some research subjects ‘fail[ed] to appreciate the distinction between the imperatives of clinical research and of ordinary treatment.' People who manifest TM often express incorrect beliefs about the degree to which their treatment will be individualized to meet their specific needs; the likelihood of benefit from participation in the study; and the goals of the researchers in conducting the project” ( Appelbaum et al., 2012 , p. 2).

Testimony of John Molina of Native Health at the public session of the Committee on the Return of Individual-Specific Research Results Generated in Research Laboratories on December 11, 2017.

Personal communication with Suzanne Bakken of Columbia University.

Source: National Academies of Sciences, Engineering, and Medicine; Health and Medicine Division; Board on Health Sciences Policy; Committee on the Return of Individual-Specific Research Results Generated in Research Laboratories; Downey AS, Busta ER, Mancher M, et al., editors. Returning Individual Research Results to Participants: Guidance for a New Research Paradigm. Washington (DC): National Academies Press (US); 2018 Jul 10. Chapter 5, Advancing Practices for Returning Individual Research Results.

MIT News | Massachusetts Institute of Technology

Researchers detect a new molecule in space

New research from the group of MIT Professor Brett McGuire has revealed the presence of a previously unknown molecule in space. The team's open-access paper, “Rotational Spectrum and First Interstellar Detection of 2-Methoxyethanol Using ALMA Observations of NGC 6334I,” appears in the April 12 issue of The Astrophysical Journal Letters.

Zachary T.P. Fried, a graduate student in the McGuire group and the lead author of the publication, worked to assemble a puzzle composed of pieces collected from across the globe, extending beyond MIT to France, Florida, Virginia, and Copenhagen, to achieve this exciting discovery.

“Our group tries to understand what molecules are present in regions of space where stars and solar systems will eventually take shape,” explains Fried. “This allows us to piece together how chemistry evolves alongside the process of star and planet formation. We do this by looking at the rotational spectra of molecules, the unique patterns of light they give off as they tumble end-over-end in space. These patterns are fingerprints (barcodes) for molecules. To detect new molecules in space, we first must have an idea of what molecule we want to look for, then we can record its spectrum in the lab here on Earth, and then finally we look for that spectrum in space using telescopes.”

Searching for molecules in space

The McGuire Group has recently begun to utilize machine learning to suggest good target molecules to search for. In 2023, one of these machine learning models suggested the researchers target a molecule known as 2-methoxyethanol. 

“There are a number of 'methoxy' molecules in space, like dimethyl ether, methoxymethanol, ethyl methyl ether, and methyl formate, but 2-methoxyethanol would be the largest and most complex ever seen,” says Fried. To detect this molecule using radiotelescope observations, the group first needed to measure and analyze its rotational spectrum on Earth. The researchers combined experiments from the University of Lille (Lille, France), the New College of Florida (Sarasota, Florida), and the McGuire lab at MIT to measure this spectrum over a broadband region of frequencies ranging from the microwave to sub-millimeter wave regimes (approximately 8 to 500 gigahertz). 

The data gleaned from these measurements permitted a search for the molecule using Atacama Large Millimeter/submillimeter Array (ALMA) observations toward two separate star-forming regions: NGC 6334I and IRAS 16293-2422B. Members of the McGuire group analyzed these telescope observations alongside researchers at the National Radio Astronomy Observatory (Charlottesville, Virginia) and the University of Copenhagen, Denmark. 

“Ultimately, we observed 25 rotational lines of 2-methoxyethanol that lined up with the molecular signal observed toward NGC 6334I (the barcode matched!), thus resulting in a secure detection of 2-methoxyethanol in this source,” says Fried. “This allowed us to then derive physical parameters of the molecule toward NGC 6334I, such as its abundance and excitation temperature. It also enabled an investigation of the possible chemical formation pathways from known interstellar precursors.”

Looking forward

Molecular discoveries like this one help the researchers better understand the development of molecular complexity in space during the star-formation process. 2-Methoxyethanol, which contains 13 atoms, is quite large by interstellar standards: as of 2021, only six species larger than 13 atoms had been detected outside the solar system, many by McGuire's group, and all of them existing as ringed structures.

“Continued observations of large molecules and subsequent derivations of their abundances allows us to advance our knowledge of how efficiently large molecules can form and by which specific reactions they may be produced,” says Fried. “Additionally, since we detected this molecule in NGC 6334I but not in IRAS 16293-2422B, we were presented with a unique opportunity to look into how the differing physical conditions of these two sources may be affecting the chemistry that can occur.”

Most Americans think U.S. K-12 STEM education isn’t above average, but test results paint a mixed picture

Most Americans believe K-12 STEM education in the United States is either average or below average compared with other wealthy nations, according to a new Pew Research Center survey.

Recent global standardized test scores show that students in the U.S. are, in fact, lagging behind their peers in other wealthy nations when it comes to math. But America’s students are doing better than average in science compared with pupils in these other countries.

Pew Research Center conducted this study to understand Americans’ ratings of K-12 STEM education in the United States. For this analysis, we surveyed 10,133 U.S. adults from Feb. 7 to 11, 2024.

Everyone who took part in the survey is a member of the Center’s American Trends Panel (ATP), an online survey panel that is recruited through national, random sampling of residential addresses. This way, nearly all U.S. adults have a chance of selection. The survey is weighted to be representative of the U.S. adult population by gender, race, ethnicity, partisan affiliation, education and other categories. Read more about the ATP’s methodology .

Here are the questions used for this analysis, along with responses, and its methodology.

We also analyzed the latest data from the Program for International Student Assessment (PISA), which tests 15-year-old students in math, reading and science in member and partner countries of the Organization for Economic Cooperation and Development (OECD). This analysis only includes scores from students in the 37 OECD countries that took the 2022 PISA.

How do Americans think U.S. STEM education compares with other wealthy countries?

A horizontal stacked bar chart showing that about two-thirds of Americans see K-12 STEM education in the U.S. as average or below average.

Just 28% of U.S. adults say America is the best in the world or above average in K-12 science, technology, engineering and math education compared with other wealthy nations. A third say the U.S. is average, while another 32% think the U.S. is below average or the worst in K-12 STEM education.

Some demographic groups are more pessimistic than others about the state of U.S. STEM education. White Americans (24%) are less likely than Black (31%), Hispanic (37%) or English-speaking Asian (43%) Americans to say U.S. K-12 STEM education is the best in the world or above average. And fewer women (25%) than men (32%) say K-12 STEM education is at least above average.

Republicans and Democrats give similar ratings to K-12 STEM education: 31% of Democrats and Democratic-leaning independents say it is at least above average, as do 27% of Republicans and GOP leaners.

Americans’ views today are similar to those in a 2019 telephone survey by the Center, which was conducted before the coronavirus pandemic caused major disruptions in the country’s schools. In that survey, 31% of Americans said U.S. K-12 STEM education is the best in the world or above average compared with other nations.

How does the U.S. compare with other countries in STEM test scores?

A dot plot showing that U.S. ranks below average in math, above average in science compared with other OECD countries.

The latest figures from the Program for International Student Assessment (PISA) show a mixed picture in U.S. math and science scores.

As of 2022, the U.S. was below average in math but above average in science compared with other member countries in the Organization for Economic Cooperation and Development (OECD), a group of mostly highly developed, democratic nations:

  • U.S. students ranked 28th out of 37 OECD member countries in math. Among OECD countries, Japanese students had the highest math scores and Colombian students scored lowest. The U.S. ranking was similar in 2018, the last time the test was administered. The U.S. average math score fell by 13 points between 2018 and 2022, but the U.S. was far from alone in experiencing a decline in scores. In fact, 25 of the 37 OECD countries saw at least a 10-point drop in average math scores from 2018 to 2022.
  • In science, the U.S. ranked 12th out of 37 OECD countries. Japanese students ranked highest and Mexican students ranked lowest. The U.S. average science score was virtually unchanged since 2018. Across OECD countries, far fewer countries experienced a large decline in science scores than in math scores; seven OECD countries saw their mean science scores decline by 10 points or more.

PISA is taken by 15-year-old students about every three years. Students in 37 OECD countries took the 2022 PISA.

Note: Here are the questions used for this analysis, along with responses, and its methodology.

  • STEM Education & Workforce

Brian Kennedy's photo

Brian Kennedy is a senior researcher focusing on science and society research at Pew Research Center

About 1 in 4 U.S. teachers say their school went into a gun-related lockdown in the last school year

About half of americans say public k-12 education is going in the wrong direction, what public k-12 teachers want americans to know about teaching, what’s it like to be a teacher in america today, race and lgbtq issues in k-12 schools, most popular.

1615 L St. NW, Suite 800 Washington, DC 20036 USA (+1) 202-419-4300 | Main (+1) 202-857-8562 | Fax (+1) 202-419-4372 |  Media Inquiries

Research Topics

  • Age & Generations
  • Coronavirus (COVID-19)
  • Economy & Work
  • Family & Relationships
  • Gender & LGBTQ
  • Immigration & Migration
  • International Affairs
  • Internet & Technology
  • Methodological Research
  • News Habits & Media
  • Non-U.S. Governments
  • Other Topics
  • Politics & Policy
  • Race & Ethnicity
  • Email Newsletters

ABOUT PEW RESEARCH CENTER  Pew Research Center is a nonpartisan fact tank that informs the public about the issues, attitudes and trends shaping the world. It conducts public opinion polling, demographic research, media content analysis and other empirical social science research. Pew Research Center does not take policy positions. It is a subsidiary of  The Pew Charitable Trusts .

Copyright 2024 Pew Research Center

Terms & Conditions

Privacy Policy

Cookie Settings

Reprints, Permissions & Use Policy

  • Visit the University of Nebraska–Lincoln
  • Apply to the University of Nebraska–Lincoln
  • Give to the University of Nebraska–Lincoln

Search Form

Nebraska On-Farm Research Network releases 2023 Research Results publication

The Nebraska On-Farm Research Network (NOFRN) is placing research results into producers’ hands through its 2023 Research Results book — a publication that highlights findings from approximately 80 on-farm research studies conducted in Nebraska during the 2023 growing season.

"The research results in this book equip producers with the tools to harness local insights, enabling them to make well-informed decisions that optimize both productivity and profitability on their own operation" said Taylor Lexow, NOFRN Project Coordinator.

Studies in the 2023 Research Results book cover various topics, including crop production, fertility and soil management, non-traditional products, cover crops, crop protection and equipment. The 2023 publication, along with publications from previous years, is now available on the NOFRN’s website.

With planting season upon us, now is the time to dig deeper into agricultural practices and determine what best fits the needs of every operation. Download a copy of the 2023 Research Results book today from the NOFRN site.

For more information about the 2023 Research Results book or the NOFRN, please contact Taylor Lexow at 402-245-2222.

About the NOFRN

The Nebraska On-Farm Research Network (NOFRN) is a program of Nebraska Extension that partners with farmers to evaluate agricultural practices and provide innovative solutions that impact farm productivity, profitability and sustainability. It is supported by the Nebraska Corn Board, the Nebraska Corn Growers Association, the Nebraska Soybean Checkoff and the Nebraska Dry Bean Commission. To learn more about the NOFRN, visit its website .

Educational damage caused by the pandemic will mean poorer GCSE results for pupils well into the 2030s

Without a raft of equalising policies, the damaging legacy from COVID-19 school closures will be felt by generations of pupils.

The educational damage wrought by the COVID-19 pandemic will impact on children well into the 2030s, with generations of pupils set for the biggest declines in GCSE results for decades.

These are the devastating conclusions of a major new study from LSE, the University of Exeter and the University of Strathclyde. The report predicts that fewer than four in ten pupils in England in 2030 will achieve a grade 5 or above in English and Mathematics GCSEs – lower than the 45.3 per cent of pupils who achieved this benchmark in 2022/23.

The research, funded by the Nuffield Foundation, is the first to chart how school closures during COVID-19 hindered children’s socio-emotional and cognitive skills at age 5, 11, and 14, and predict how these will impact on future GCSE prospects and later life outcomes.

Socio-emotional skills include the ability to engage in positive social interactions, regulate emotions and maintain attention. Cognitive skills are measured by how well children perform in academic tests, reflecting maths, reading and writing skills.

The research finds that socio-emotional skills are just as important as cognitive skills for young people's GCSE results. For example, 20 per cent of pupils who were among the best performers in cognitive tests at age 14 but who had only average socio-emotional skills failed to go on to attain five good GCSEs including English and Maths. Teenagers with strong socio-emotional skills were much more likely to achieve basic GCSEs.

A gender divide in the importance of different skills emerges in the teenage years. For boys, cognitive skills at age 14 are twice as important as socio-emotional skills in determining future GCSE prospects; for girls the opposite is true, with socio-emotional skills 50 per cent more impactful than cognitive skills.

The analysis uses the latest econometric techniques to develop a model of skill formation, based on just under 19,000 pupils in the Millennium Cohort Study. This was applied to later pupil cohorts to predict how GCSE results will be impacted by disruption from school closures during the pandemic.

Alongside an overall fall in GCSE results, the model points to a significant widening in socio-economic inequalities in GCSE results. The researchers use these results to estimate that the UK’s relative income mobility levels will decline by 12-15 per cent for generations of pupils leaving school over the next decade, a significant drop by international standards.

An international review as part of the work concludes that COVID-19 amplified long-term persistent education gaps across a range of OECD countries including the UK. Compared with most other nations, England’s pandemic response was heavily focused on academic catch-up with less emphasis on socio-emotional skills, extracurricular support, and wellbeing.

The report “A generation at risk: Rebalancing education in the post-pandemic era” was produced by Lee Elliot Major, Professor of Social Mobility at the University of Exeter; Andy Eyles; Professor Steve Machin from the Centre for Economic Performance (CEP) at the London School of Economics; and Esme Lillywhite from the University of Strathclyde. It proposes several low-cost policies with the potential to improve children’s outcomes, including:

  • A national programme of trained undergraduate student tutors helping to boost the foundational skills of pupils, and enabling undergraduates to consider a career in teaching.
  • Rebalancing Ofsted inspections to focus explicitly on how schools are performing for pupils from under-resourced backgrounds, and to credit schools that excel in serving under-resourced communities.
  • Rebalancing the school calendar to improve teacher wellbeing, prevent holiday hunger, improve pupil prospects and help parents with child-care during the long summer break.

Lee Elliot Major, Professor of Social Mobility at the University of Exeter and an LSE CEP associate, said: “Without a raft of equalising policies, the damaging legacy from COVID-19 school closures will be felt by generations of pupils well into the next decade. Our review shows that COVID amplified long-term persistent education gaps in England and other countries.

“The policies we propose would rebalance the school system so that it supports all children irrespective of their backgrounds. A particular worry is a group of pupils who are falling significantly behind, likely to be absent from the classroom and to leave school without the basic skills needed to function and flourish in life. The decline in social mobility levels threatens to cast a long shadow over our society.”

LSE CEP Associate Andy Eyles added: “To our knowledge, this is the first time this type of analysis has been used in this way to assess the consequences of the pandemic in England. Our results suggest that to improve child outcomes, much greater emphasis is needed in schools on activities that improve both socio-emotional and cognitive skills.”

Esme Lillywhite from the University of Strathclyde and a research assistant at LSE CEP said: “Compared with most other nations, England’s pandemic response was heavily focused on academic catch-up with less emphasis on socio-emotional skills, extracurricular support, and wellbeing. Much more could be gained by closer international collaboration to learn what approaches have been promising elsewhere.”

Dr Emily Tanner, Programme Head at the Nuffield Foundation, said: "The mounting evidence on the long-term impact of learning loss on young people's development shows how important it is for students to develop socio-emotional skills alongside academic learning. The insights from this report on timing and gender provide a useful basis for targeting effective interventions."

Behind the article

The Nuffield Foundation is an independent charitable trust with a mission to advance social well-being. It funds research that informs social policy, primarily in Education, Welfare, and Justice. The Nuffield Foundation is the founder and co-funder of the Nuffield Council on Bioethics, the Ada Lovelace Institute and the Nuffield Family Justice Observatory. The Foundation has funded this project, but the views expressed are those of the authors and not necessarily the Foundation.

GE Aerospace Reports Robust 1Q24 Results; Raise Target Price

On April 23, 2024, GE Aerospace (NYSE: GE, $161.26, Market Capitalization: $176.5 billion) reported robust 1Q24 results, with a strong beat on EPS versus consensus. Note that GE Vernova was spun off from GE on April 2, so Vernova remained part of the consolidated company for 1Q24. The company released consolidated 1Q24 results for General Electric on 4/23, while GE Vernova's results were declared on 4/25. As for GE Aerospace, the company reported strong revenue growth of 15.5% YoY to $8.1 billion in 1Q24, while profit increased 15.3% YoY to $1.5 billion. Orders of $11.0 billion increased by 34% YoY, with strength in both Commercial Engines & Services and Defense & Propulsion Technologies. GE Aerospace's adjusted EPS (on a consolidated basis) increased to $0.92 per share in 1Q24, up 46% YoY from $0.63 per share in 1Q23. On the back of the upbeat results, the company raised its FY24 outlook: it now expects revenue growth in the low double digits and operating profit in the range of $6.2 billion to $6.6 billion for the year (earlier $6.0 billion to $6.5 billion). Overall, GE's results showed strong operating margin strength in the commercial spares market business, which is likely to persist throughout the year. Following the results, GE stock rose by 8.3% to close at $162.62 on 4/23, indicating a positive reaction from investors.

GE Price Performance and Spin-Off Details

On April 2, General Electric spun off GE Vernova, and the remaining company was renamed GE Aerospace. GE Vernova is now a standalone company that includes Renewable Energy and Power.

Valuation and Recommendation

We value GE Aerospace using the 2025e EV/EBITDA methodology. Our intrinsic value of $175.00 (previously $142.00) per share for GE Aerospace is based on a 2025e EV/EBITDA multiple of 21.5x (at a ~9% premium to the multiple of TransDigm Group and a ~16% discount to the multiple of HEICO Corp). Our valuation for GE Aerospace also includes a 6.7% stake in GE Healthcare. We maintain our 'Hold' rating on GE Aerospace with an implied upside of 8.5% from the current market price of $161.26 on 4/25. Risks to our target price include slower-than-expected growth in the aviation industry, supply chain shortages, decline in quality, and lower-than-expected time-on-wing improvements on LEAP engines.
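To illustrate the mechanics behind an EV/EBITDA-based per-share value (this is not the analysts' model; every input below is a hypothetical placeholder, since the note does not disclose its 2025e EBITDA, net debt, or share count), here is a minimal Python sketch:

```python
# Illustrative only: how a forward EV/EBITDA multiple maps to a per-share value.
# All inputs are hypothetical placeholders, NOT figures from the report.
ev_ebitda_multiple = 21.5        # 2025e EV/EBITDA multiple cited in the note
ebitda_2025e       = 9.0e9       # hypothetical 2025e EBITDA, in dollars
net_debt           = 15.0e9      # hypothetical net debt
other_assets       = 2.5e9       # hypothetical value of minority stakes (e.g., GE Healthcare)
shares_outstanding = 1.09e9      # hypothetical diluted share count

enterprise_value = ev_ebitda_multiple * ebitda_2025e
equity_value     = enterprise_value - net_debt + other_assets
value_per_share  = equity_value / shares_outstanding
print(f"Implied value per share: ${value_per_share:,.2f}")
```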

GE Aerospace's profit and revenue gains may persist on strong maintenance and new engine demand, though margins might edge lower as spare demand rises and new engine builds increase. Furthermore, Boeing's production problems are expected to lead to a higher mix of spares, as Boeing is GE's largest customer for margin-dilutive new engines. However, supply chain problems continued in the quarter and remained challenging for GE and the industry. GE is the market leader in narrow-body and wide-body engines, with the largest installed base. Reduced durability of new-technology narrow-body engines should keep demand for spares and overhauls on its largest fleet (CFM56) robust as the older fleet is used longer. GE has a strong portfolio in the defense business, powering US destroyers and critical combat helicopters like the Black Hawk and Apache, providing stability to the commercial business.

Optimizing Sampling Schedules in Diffusion Models

Diffusion models (DMs) have proven themselves to be extremely reliable probabilistic generative models that can produce high-quality data. They have been successfully applied to applications such as image synthesis, image super-resolution, image-to-image translation, image editing, inpainting, video synthesis, text-to-3d generation, and even planning. However, sampling from DMs corresponds to solving a generative Stochastic or Ordinary Differential Equation (SDE/ODE) in reverse time and requires multiple sequential forward passes through a large neural network, limiting their real-time applicability.

Solving SDEs/ODEs within the interval \([t_{min}, t_{max}]\) works by discretizing it into \(n\) smaller sub-intervals \(t_{min} = t_0 < t_1 < \dots < t_{n}=t_{max}\), called a sampling schedule, and numerically solving the differential equation between consecutive \(t_i\) values. Currently, most prior works adopt one of a handful of heuristic schedules, such as simple polynomials and cosine functions, and little effort has gone into optimizing this schedule. We attempt to fill this gap by introducing a principled approach for optimizing the schedule in a dataset- and model-specific manner, resulting in improved outputs given the same compute budget.
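As a concrete illustration of such a heuristic schedule (an example for orientation, not the authors' code), the widely used EDM-style polynomial spacing generates the \(n+1\) points from a single power law; a minimal sketch, assuming NumPy is available:

```python
import numpy as np

def edm_schedule(n, t_min=0.002, t_max=80.0, rho=7.0):
    """Polynomial (EDM-style) sampling schedule: n+1 increasing time points
    t_min = t_0 < t_1 < ... < t_n = t_max, spaced so that steps are denser
    near t_min, where the reverse ODE/SDE changes most rapidly."""
    frac = np.arange(n + 1) / n   # fractions in [0, 1]
    return (t_min ** (1 / rho) + frac * (t_max ** (1 / rho) - t_min ** (1 / rho))) ** rho

print(edm_schedule(8))  # e.g., an 8-step schedule of the kind many samplers use by default
```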

Assuming that \( P_{true} \) represents the distribution obtained by running the reverse-time SDE (defined by the learnt model) exactly, and \( P_{disc} \) represents the distribution obtained by solving it with Stochastic-DDIM and a sampling schedule, an upper bound on the Kullback-Leibler divergence between these two distributions can be derived using the Girsanov theorem (simplified; see the paper for details): \[ D_{KL}(P_{true} || P_{disc}) \leq \underbrace{ \sum_{i=1}^{n} \int_{t_{i-1}}^{t_{i}} \frac{1}{t^3} \mathbb{E}_{x_t \sim p_t, x_{t_i} \sim p_{t_i | t}} || D_{\theta}(x_t, t) - D_{\theta}(x_{t_i}, t_i) ||_2^2 \ dt }_{= KLUB(t_0, t_1, \dots, t_n)} + \text{constant} \] A similar Kullback-Leibler Upper Bound (KLUB) can be found for other stochastic SDE solvers. Given this, we formulate the problem of optimizing the sampling schedule as minimizing the KLUB term with respect to its time discretization, i.e., the sampling schedule itself. Monte-Carlo integration with importance sampling is used to estimate the expectation values, and the schedule is optimized iteratively. We showcase the benefits of optimizing schedules on a 2D toy distribution (see the visualization below).
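To make the optimization loop more concrete, the sketch below shows one way the per-interval KLUB terms could be estimated by Monte Carlo for a given denoiser and schedule. This is a deliberate simplification, not the paper's implementation: the expectation over \(x_{t_i} \sim p_{t_i \mid t}\) is crudely approximated by reusing the same state at both times, and the importance sampling used in the paper is omitted.

```python
import numpy as np

def estimate_klub_terms(denoiser, sample_x, schedule, n_mc=256, seed=0):
    """Crude Monte-Carlo estimate of the per-interval KLUB contributions.

    denoiser(x, t) -- the learnt denoiser D_theta (any callable returning an array like x)
    sample_x(t)    -- draws one sample x_t ~ p_t (e.g., forward-noised training data)
    schedule       -- increasing array of times t_0 < t_1 < ... < t_n
    Returns an array of n contributions; their sum (plus a schedule-independent
    constant) upper-bounds D_KL(P_true || P_disc).
    """
    rng = np.random.default_rng(seed)
    terms = np.zeros(len(schedule) - 1)
    for i in range(1, len(schedule)):
        t_lo, t_hi = schedule[i - 1], schedule[i]
        acc = 0.0
        for _ in range(n_mc):
            t = rng.uniform(t_lo, t_hi)                  # uniform MC over the sub-interval
            x = sample_x(t)
            diff = denoiser(x, t) - denoiser(x, t_hi)    # simplification: reuse x at t_i
            acc += np.sum(diff ** 2) / t ** 3
        terms[i - 1] = (t_hi - t_lo) * acc / n_mc        # MC estimate of the time integral
    return terms

# The schedule can then be refined by nudging the interior knots t_1, ..., t_{n-1}
# (e.g., by coordinate descent) so as to reduce the sum of the estimated terms.
```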

Modeling a 2D toy distribution: Samples in (b), (c), and (d) are generated using 8 steps of SDE-DPM-Solver++(2M) with EDM, LogSNR, and AYS schedules, respectively. Each image consists of 100,000 sampled points.

Experimental Results

To evaluate the usefulness of optimized schedules, we performed rigorous quantitative experiments on standard image generation benchmarks (CIFAR10, FFHQ, ImageNet), and found that these schedules result in consistent improvements across the board in image quality (measured by FID) for a large variety of popular samplers. We also performed a user study for text-to-image models (specifically Stable Diffusion 1.5), and found that on average images generated with these schedules are preferred twice as much. Please see the paper for these results and evaluations.

Below, we showcase some text-to-image examples that illustrate how using an optimized schedule can generate images with more visual details and better text alignment given the same number of forward evaluations (NFEs). We provide side-by-side comparisons between our optimized schedules and two of the most popular schedules used in practice (EDM and Time-Uniform). All images are generated with either a stochastic or a deterministic version of DPM-Solver++(2M) with 10 steps, as noted for each prompt below.

Stable Diffusion 1.5

  • Text prompt (stochastic sampler): "1girl, blue dress, blue hair, ponytail, studying at the library, focused" (Model: Dreamshaper 8)
  • Text prompt (stochastic sampler): "An enchanting forest path with sunlight filtering through the dense canopy, highlighting the vibrant greens and the soft, mossy floor"
  • Text prompt (stochastic sampler): "A digital Illustration of the Babel tower, 4k, detailed, trending in artstation, fantasy vivid colors"
  • Text prompt (stochastic sampler): "A glass-blown vase with a complex swirl of colors, illuminated by sunlight, casting a mosaic of shadows on a white table"
  • Text prompt (stochastic sampler): "A delicate glass pendant holding a single, luminous firefly, its glow casting warm, dancing shadows on the wearer's neck"
  • Text prompt (stochastic sampler): "A wise old owl wearing a velvet smoking jacket and spectacles, with a pipe in its beak, seated in a vintage leather armchair"
  • Text prompt (stochastic sampler): "A close-up portrait of a baby wearing a tiny spider-man costume, trending on artstation" (Model: Dreamshaper 8)

DeepFloyd-IF

  • Text prompt (stochastic sampler): "Capybara podcast neon sign"
  • Text prompt (stochastic sampler): "Long-exposure night photography of a starry sky over a mountain range, with light trails"
  • Text prompt (stochastic sampler): "A tranquil village nestled in a lush valley, with small, cozy houses dotting the landscape, surrounded by towering, snow-capped mountains under a clear blue sky. A gentle river meanders through the village, reflecting the warm glow of the sunrise"
  • Text prompt (stochastic sampler): "An ancient library buried beneath the earth, its halls lit by glowing crystals, with scrolls and tomes stacked in endless rows"
  • Text prompt (stochastic sampler): "A bustling spaceport on a distant planet, with ships of various designs taking off against a backdrop of twin moons"
  • Text prompt (stochastic sampler): "A set of ancient armor, standing as if worn by an invisible warrior, in front of a backdrop of medieval banners and weaponry."
  • Text prompt (stochastic sampler): "An elephant painting a colorful abstract masterpiece with its trunk, in a studio surrounded by amused onlookers."
  • Text prompt (stochastic sampler): "Tiger in construction gear, perched on aged wooden docks, formidable, curious, tiger on the waterfront, textured, vibrant, atmospheric, sharp focus, lifelike, professional lighting, cinematic, 8K"
  • Text prompt (stochastic sampler): "Cluttered house in the woods, anime, oil painting, high resolution, cottagecore, ghibli inspired, 4k"
  • Text prompt (stochastic sampler): "An old, creepy dollhouse in a dusty attic, with dolls posed in unsettling positions. Cobwebs, dim lighting, and the shadows of unseen presences create a chilling scene"
  • Text prompt (deterministic sampler): "A stunning, intricately detailed painting of a sunset in a forest valley, blending the rich, symmetrical styles of Dan Mumford and Marc Simonetti with astrophotography elements"
  • Text prompt (deterministic sampler): "Create a photorealistic scene of a powerful storm with swirling, dark clouds and fierce winds approaching a coastal village. Show villagers preparing for the storm, with detailed architecture reflecting a fantasy world"
  • Text prompt (deterministic sampler): "Cyberpunk cityscape with towering skyscrapers, neon signs, and flying cars"

Stable Video Diffusion

We also studied the effect of optimized schedules in video generation using the open-source image-to-video model Stable Video Diffusion. We find that using optimized schedules leads to more stable videos with less color distortions as the video progresses. Below we show side-by-side comparisons of videos generated with 10 DDIM steps using the two different schedules.

Amirmojtaba Sabour, Sanja Fidler, Karsten Kreis

IMAGES

  1. Types of Research Report

  2. Best Way to Analyze and Present Survey Results Effectively

  3. The quantitative research sample

  4. Understanding Qualitative Research: An In-Depth Study Guide

  5. How to write Result and discussion in Research

  6. 5 Steps to Present Your Research in an Infographic

VIDEO

  1. Your research can change the world

  2. Returning Individual Research Results and Data to Participants: Experience from the Field

  3. Importance of Research Methodology in Tamil

  4. HOW TO READ and ANALYZE A RESEARCH STUDY

  5. Results Section #1: What to Include_Shorts

  6. Patient-Centered Outcomes Research at Johns Hopkins

COMMENTS

  1. How to Write a Results Section

    Checklist: Research results. I have completed my data collection and analyzed the results. I have included all results that are relevant to my research questions. I have concisely and objectively reported each result, including relevant descriptive statistics and inferential statistics. I have stated whether each hypothesis was supported ...

  2. Research Results Section

    Research Results. Research results refer to the findings and conclusions derived from a systematic investigation or study conducted to answer a specific question or hypothesis. These results are typically presented in a written report or paper and can include various forms of data such as numerical data, qualitative data, statistics, charts, graphs, and visual aids.

  3. Reporting Research Results in APA Style

    Reporting Research Results in APA Style | Tips & Examples. Published on December 21, 2020 by Pritha Bhandari. Revised on January 17, 2024. The results section of a quantitative research paper is where you summarize your data and report the findings of any relevant statistical analyses. The APA manual provides rigorous guidelines for what to report in quantitative research papers in the fields ...

  4. PDF Results Section for Research Papers

    The results section of a research paper tells the reader what you found, while the discussion section tells the reader what your findings mean. The results section should present the facts in an academic and unbiased manner, avoiding any attempt at analyzing or interpreting the data. Think of the results section as setting the stage for the ...

  5. How to write a "results section" in biomedical scientific research

    The "Results section" is the third most important anatomical structure of IMRAD (Introduction, Method and Material, Result, And Discussion) frameworks, the almost universally accepted framework in many journals in the late nineteenth century. 3 Before using a structured IMRAD format, research findings in scientific papers were presented in ...

  6. The Principles of Biomedical Scientific Writing: Results

    1. Context. The "results section" is the heart of the paper, around which the other sections are organized. Research is about results, and the reader comes to the paper to discover the results. In this section, authors contribute to the development of scientific literature by providing novel, hitherto unknown knowledge. In addition to the results, this section contains data and ...

  7. How to Write the Results/Findings Section in Research

    Step 1: Consult the guidelines or instructions that the target journal or publisher provides authors and read research papers it has published, especially those with similar topics, methods, or results to your study. The guidelines will generally outline specific requirements for the results or findings section, and the published articles will ...

  8. 7. The Results

    For most research papers in the social and behavioral sciences, there are two possible ways of organizing the results. Both approaches are appropriate in how you report your findings, but use only one approach. Present a synopsis of the results followed by an explanation of key findings. This approach can be used to highlight important findings.

  9. How to Present Results in a Research Paper

    The "Results" section is arguably the most important section in a research manuscript as the findings of a study, obtained diligently and painstakingly, are presented in this section. A well-written results section reflects a well-conducted study. This chapter provides helpful pointers for writing an effective, organized results section.

  10. Research Guides: Writing a Scientific Paper: RESULTS

    Present the results of the paper, in logical order, using tables and graphs as necessary. Explain the results and show how they help to answer the research questions posed in the Introduction. Evidence does not explain itself; the results must be presented and then explained. Avoid: presenting results that are never discussed; presenting ...

  11. Dissertation Results/Findings Chapter (Quantitative)

    The results chapter (also referred to as the findings or analysis chapter) is one of the most important chapters of your dissertation or thesis because it shows the reader what you've found in terms of the quantitative data you've collected. It presents the data using a clear text narrative, supported by tables, graphs and charts.

  12. Writing up a Research Report

    It condenses your research design, results, arguments, and conclusions into a brief stand-alone one (or at most two) pager. Purpose. The management summary, also called the executive summary or abstract, summarizes the entire report. It enables any reader to read this summary alone without reading through the complete research report, thesis ...

  13. Organizing Academic Research Papers: 7. The Results

    The results section of the research paper is where you report the findings of your study based upon the information gathered as a result of the methodology [or methodologies] you applied. The results section should simply state the findings, without bias or interpretation, and arranged in a logical sequence. The results section should always be ...

  14. Reporting Statistics in APA Style

    The APA Publication Manual is commonly used for reporting research results in the social and natural sciences. This article walks you through APA Style standards for reporting statistics in academic writing. Statistical analysis involves gathering and testing quantitative data to make inferences about the world.

  15. PDF Results/Findings Sections for Empirical Research Papers

    The Results (also sometimes called Findings) section in an empirical research paper describes what the researcher(s) found when they analyzed their data. Its primary purpose is to use the data collected to answer the research question(s) posed in the introduction, even if the findings challenge the hypothesis.

  16. How to Write a Results Section: Definition, Tips & Examples

    The easiest way to write a quantitative dissertation results section is to build it around a sub-question or hypothesis of your research. For each sub-question, provide relevant results and include statistical analysis. Then briefly evaluate their importance and reliability.

  17. A Practical Guide to Writing Quantitative and Qualitative Research

    INTRODUCTION. Scientific research is usually initiated by posing evidence-based research questions which are then explicitly restated as hypotheses.1,2 The hypotheses provide directions to guide the study, solutions, explanations, and expected results.3,4 Both research questions and hypotheses are essentially formulated based on conventional theories and real-world processes, which allow the ...

  18. Dissertation Results & Findings Chapter (Qualitative)

    The results chapter in a dissertation or thesis (or any formal academic research piece) is where you objectively and neutrally present the findings of your qualitative analysis (or analyses if you used multiple qualitative analysis methods ). This chapter can sometimes be combined with the discussion chapter (where you interpret the data and ...

  19. How to Visualize Your Qualitative User Research Results for Maximum

    The Take Away. Information visualization is a powerful technique to communicate the results from qualitative user research to your fellow designers or the client. There are three types of visualizations you could use. Affinity diagrams resemble your data analysis outcomes most, but you must rework them to provide more clarity to the people who ...

  20. (PDF) Reporting research results effectively

    ... in IR (institutional research) circles is that research results can be accurate, timely, and audience friendly, but not all three. In the haste of preparation, reports can suffer from inadequate ...

  21. Market Research Report Examples For Your Analysis Results

    1. Market Research Report: Brand Analysis. Our first example shares the results of a brand study. To do so, a survey has been performed on a sample of 1333 people, information that we can see in detail on the left side of the board, summarizing the gender, age groups, and geolocation.

  22. Advancing Practices for Returning Individual Research Results

    In the previous chapters, the committee addresses why returning results provides value to participants and scientific stakeholders, what research results could be returned, and the timing of returning individual research results. This chapter focuses on the "how." As discussed earlier in this report, the return of individual research results is a natural progression in the push for ...

  23. Americans' Top Foreign Policy Priorities in 2024

    Pew Research Center conducted this analysis to better understand Americans' long-range foreign policy priorities. For this analysis, we surveyed 3,600 U.S. adults from April 1 to April 7, 2024. Everyone who took part in this survey is a member of the Center's American Trends Panel (ATP), an online survey panel that is recruited through ...

  24. Researchers detect a new molecule in space

    New research from the group of MIT Professor Brett McGuire has revealed the presence of a previously unknown molecule in space. The team's open-access paper, "Rotational Spectrum and First Interstellar Detection of 2-Methoxyethanol Using ALMA Observations of NGC 6334I," appears in April 12 issue of The Astrophysical Journal Letters. Zachary T.P. Fried, a graduate student in the McGuire ...

  25. How US K-12 STEM education stacks up globally ...

    Pew Research Center conducted this study to understand Americans' ratings of K-12 STEM education in the United States. For this analysis, we surveyed 10,133 U.S. adults from Feb. 7 to 11, 2024. Everyone who took part in the survey is a member of the Center's American Trends Panel (ATP), an online survey panel that is recruited through ...

  26. Nebraska On-Farm Research Network Releases 2023 Research Results

    With planting season upon us, now is the time to dig deeper into agricultural practices and determine what best fits the needs of every operation. Download a copy of the 2023 Research Results book today from the NOFRN site. For more information about the 2023 Research Results book or the NOFRN, please contact Taylor Lexow at 402-245-2222.

  27. Educational damage caused by the pandemic will mean poorer GCSE results

    The research, funded by the Nuffield Foundation, is the first to chart how school closures during COVID-19 hindered children's socio-emotional and cognitive skills at age 5, 11, and 14, and predict how these will impact on future GCSE prospects and later life outcomes. ... Our results suggest that to improve child outcomes, much greater ...

  28. GE Aerospace Reports Robust 1Q24 Results; Raise Target Price

    On April 23, 2024, GE Aerospace (NYSE: GE, $161.26, Market Capitalization: $176.5 billion) reported robust 1Q24 results, with a strong beat on EPS versus consensus.

  29. Intel Reports First-Quarter 2024 Financial Results

    About Intel. Intel (Nasdaq: INTC) is an industry leader, creating world-changing technology that enables global progress and enriches lives. Inspired by Moore's Law, we continuously work to advance the design and manufacturing of semiconductors to help address our customers' greatest challenges.

  30. Align Your Steps

    Please see the paper for these results and evaluations. Below, we showcase some text-to-image examples that illustrate how using an optimized schedule can generate images with more visual details and better text-alignment given the same number of forward evaluations (NFEs). We provide side-by-side comparisons between our optimized schedules ...