An Effective Guide to Comparative Research Questions

Comparative research questions are a type of quantitative research question. They aim to gather information on the differences between two or more research subjects across one or more variables.

These questions help the researcher identify the characteristics that set one research subject apart from another.

A systematic investigation is built around research questions. Therefore, asking the right quantitative questions is key to gathering relevant and valuable information that will positively impact your work.

This article discusses the types of quantitative research questions with a particular focus on comparative questions.

What Are Quantitative Research Questions?

Quantitative research questions are objective queries that yield measurable information about a study topic. The numerical data they produce can be analyzed statistically.

This type of research question aids understanding of the research issue by revealing trends and patterns. The data collected can be generalized to the wider population and used to make informed decisions.


Types of Quantitative Research Questions

Quantitative research questions can be divided into three types, which are explained below:

Descriptive Research Questions

Researchers use descriptive research questions to collect numerical data about the traits and characteristics of study subjects. These questions look for responses that bring to light the characteristic patterns of the research subjects.

Note, however, that descriptive questions are not concerned with the causes of the observed traits and features. They focus on the “what,” describing the research topic without accounting for its reasons.

Examples of descriptive research questions:

  • How often do you use our keto diet app?
  • What price range are you ready to accept for this product?

Comparative Research Questions

Comparative research questions seek to identify differences between two or more distinct groups based on one or more dependent variables. They aim to identify the features that distinguish one research subject from another despite their apparent similarities.

In market research surveys, asking comparative questions can reveal how your product or service compares to its competitors. It can also help you determine your product’s benefits and drawbacks to gain a competitive edge.

The steps in formulating comparative questions are as follows:

  • Choose the right starting phrase
  • Specify the dependent variable
  • Choose the groups that interest you
  • Identify the relevant adjoining text
  • Compose the comparative research question

Relationship-Based Research Questions

A relationship-based research question examines the nature of the association between research subjects of the same category. These kinds of research questions help you learn more about the relationship between two study variables.

Because they aim to distinctly define the connection between two variables, relationship-based research questions are also known as correlational research questions.
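
To make this concrete, a correlational question such as “Is daily screen time associated with sleep duration?” is typically answered by estimating the strength of the association between the two variables. The short sketch below is only an illustration: the question, the numbers, and the use of Python's scipy library are assumptions, not part of the original article.

    # Minimal sketch: quantifying the association behind a relationship-based
    # (correlational) research question. All values are hypothetical.
    from scipy.stats import pearsonr

    screen_time_hours = [1.5, 2.0, 3.5, 4.0, 5.5, 6.0, 7.5]   # hypothetical daily screen time
    sleep_hours = [8.0, 7.8, 7.2, 7.0, 6.4, 6.1, 5.5]         # hypothetical sleep duration

    r, p_value = pearsonr(screen_time_hours, sleep_hours)
    print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")
    # A strong negative r would suggest that more screen time goes with less
    # sleep; on its own it would not establish causation.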

Examples of Comparative Research Questions

  • What is the difference between men’s and women’s daily caloric intake in London?
  • What is the difference in the shopping attitude of millennial adults and those born in 1980?
  • What is the difference in time spent on video games between people of the age group 15-17 and 18-21?
  • What is the difference in political views of Mexicans and Americans in the US?
  • What are the differences between Snapchat usage of American male and female university students?
  • What is the difference in views towards the security of online banking between the youth and the seniors?
  • What is the difference in attitude between Gen Z and Millennials toward rock music?
  • What are the differences between online and offline classes?
  • What are the differences between on-site and remote work?
  • What is the difference between weekly Facebook photo uploads between American male and female college students?
  • What are the differences between an Android and an Apple phone?

Comparative research questions are a great way to identify the difference between two study subjects of the same group.
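
In practice, the data gathered for a comparative question (the caloric-intake example above, say) are analyzed by summarizing each group on the dependent variable and testing whether the observed difference is larger than chance would explain. The sketch below is a hypothetical illustration using Python's scipy library; the numbers are invented and are not findings from this article.

    # Minimal sketch: comparing two groups on one dependent variable,
    # e.g., daily caloric intake of men versus women. Values are hypothetical.
    from statistics import mean
    from scipy.stats import ttest_ind

    intake_men = [2500, 2650, 2400, 2800, 2550, 2700]
    intake_women = [2100, 2300, 2000, 2250, 2150, 2200]

    print(f"Mean (men):   {mean(intake_men):.0f} kcal")
    print(f"Mean (women): {mean(intake_women):.0f} kcal")

    # Welch's t-test: is the difference between the group means statistically meaningful?
    t_stat, p_value = ttest_ind(intake_men, intake_women, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")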

Asking the right questions will help you gather effective and insightful data and conduct your research better. This article discussed the various aspects of quantitative research questions and their types to help you make data-driven and informed decisions when needed.


Abir Ghenaiet

Abir is a data analyst and researcher. Among her interests are artificial intelligence, machine learning, and natural language processing. As a humanitarian and educator, she actively supports women in tech and promotes diversity.



10 Research Question Examples to Guide your Research Project

Published on October 30, 2022 by Shona McCombes. Revised on October 19, 2023.

The research question is one of the most important parts of your research paper, thesis, or dissertation. It's important to spend some time assessing and refining your question before you get started.

The exact form of your question will depend on a few things, such as the length of your project, the type of research you're conducting, the topic, and the research problem. However, all research questions should be focused, specific, and relevant to a timely social or scholarly issue.

Once you've read our guide on how to write a research question, you can use these examples to craft your own.

Note that the design of your research question can depend on the method you are pursuing; qualitative, quantitative, and statistical research questions each take a somewhat different form.


NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Velentgas P, Dreyer NA, Nourjah P, et al., editors. Developing a Protocol for Observational Comparative Effectiveness Research: A User's Guide. Rockville (MD): Agency for Healthcare Research and Quality (US); 2013 Jan.

Chapter 1 Study Objectives and Questions

Scott R. Smith, PhD.

The steps involved in the process of developing research questions and study objectives for conducting observational comparative effectiveness research (CER) are described in this chapter. It is important to begin with identifying decisions under consideration, determining who the decisionmakers and stakeholders in the specific area of research under study are, and understanding the context in which decisions are being made. Synthesizing the current knowledge base and identifying evidence gaps is the next important step in the process, followed by conceptualizing the research problem, which includes developing questions that address the gaps in existing evidence. Understanding the stage of knowledge that the study is designed to address will come from developing these initial questions. Identifying which questions are critical to reduce decisional uncertainty and minimize gaps in the current knowledge base is an important part of developing a successful framework. In particular, it is beneficial to look at what study populations, interventions, comparisons, outcomes, timeframe, and settings (PICOTS framework) are most important to decisionmakers in weighing the balance of harms and benefits of action. Some research questions are easier to operationalize than others, and study limitations should be recognized and accepted from an early stage. The level of new scientific evidence that is required by the decisionmaker to make a decision or to take action must be recognized. Lastly, the magnitude of effect must be specified. This can mean defining what is a clinically meaningful difference in the study endpoints from the perspective of the decisionmaker and/or defining what is a meaningful difference from the patient's perspective.

The foundation for designing a new research protocol is the study's objectives and the questions that will be investigated through its implementation. All aspects of study design and analysis are based on the objectives and questions articulated in a study's protocol. Consequently, it is exceedingly important that a study's objectives and questions be formulated meticulously and written precisely in order for the research to be successful in generating new knowledge that can be used to inform health care decisions and actions.

An important aspect of CER 1 and other forms of translational research is the potential for early involvement and inclusion of patients and other stakeholders to collaborate with researchers in identifying study objectives, key questions, major study endpoints, and the evidentiary standards that are needed to inform decisionmaking. The involvement of stakeholders in formulating the research questions increases the applicability of the study to the end-users and facilitates appropriate translation of the results into health care practice and use by patient communities. While stakeholders may be defined in multiple ways, for the purposes of this User's Guide , a broad definition will be used. Hence, stakeholders are defined as individuals or organizations that use scientific evidence for decisionmaking and therefore have an interest in the results of new research. Implicit in this definition of stakeholders is the importance for stakeholders to understand the scientific process, including considerations of bioethics and the limitations of research, particularly with regard to studies involving human subjects. Ideally, stakeholders also should express commitment to using objective scientific evidence to inform their decisionmaking and recognize that disregarding sound scientific methods often will undermine decisionmaking. For stakeholder organizations, it is also advantageous if the organization has well-established processes for transparently reviewing and incorporating research findings into decisions as well as organized channels for disseminating research results.

There are at least seven essential steps in the conceptualization and development of a research question or set of questions for an observational CER protocol. These steps are presented as a general framework in Table 1.1 below and elaborated upon in the subsequent sections of this chapter. The framework is based on the principle that researchers and stakeholders will work together to objectively lay out the research problems, research questions, study objectives, and key parameters for which scientific evidence is needed to inform decisionmaking or health care actions. The intent of this framework is to facilitate communication between researchers and stakeholders in conceptualizing the research problem and the design of a study (or a program of research involving a series of studies) in order to maximize the potential that new knowledge will be created from the research with results that can inform decisionmaking. To do this, research results must be relevant, applicable, unbiased and sufficient to meet the evidentiary threshold for decisionmaking or action by stakeholders. In order for the results to be valid and credible, all persons involved must be committed to protecting the integrity of the research from bias and conflicts of interest. Most importantly, the study must be designed to protect the rights, welfare, and well-being of subjects involved in the research.

Table 1.1. Framework for developing and conceptualizing a CER protocol.

  • Identifying Decisions, Decisionmakers, Actions, and Context

In order for research findings to be useful for decisionmaking, the study protocol should clearly articulate the decisions or actions for which stakeholders seek new scientific evidence. While only some studies may be sufficiently robust for making decisions or taking action, statements that describe the stakeholders' decisions will help those who read the protocol understand the rationale for the study and its potential for informing decisions or for translating the findings into changes in health care practices. This information also improves the ability of protocol readers to understand the purpose of the study so they can critically review its design and provide recommendations for ways it may be potentially improved. If stakeholders have a need to make decisions within a critical time frame for regulatory, ethical, or other reasons, this interval should be expressed to researchers and described in the protocol. In some cases, the time frame for decisionmaking may influence the choice of outcomes that can be studied and the study designs that can be used. For some stakeholders' questions, research and decisionmaking may need to be divided into stages, since it may take years for outcomes with long lag times to occur, and research findings will be delayed until they do.

In writing this section of the protocol, investigators should ask stakeholders to describe the context in which the decision will be made or actions will be taken. This context includes the background and rationale for the decision, key areas of uncertainty and controversies surrounding the decision, ways scientific evidence will be used to inform the decision, the process stakeholders will use to reach decisions based on scientific evidence, and a description of the key stakeholders who will use or potentially be affected by the decision. By explaining these contextual factors that surround the decision, investigators will be able to work with stakeholders to determine the study objectives and other major parameters of the study. This work also provides the opportunity to discuss how the tools of science can be applied to generate new evidence for informing stakeholder decisions and what limits may exist in those tools. In addition, this initial step begins to clarify the number of analyses necessary to generate the evidence that stakeholders need to make a decision or take other actions with sufficient certainty about the outcomes of interest. Finally, the contextual information facilitates advance planning and discussions by researchers and stakeholders about approaches to translation and implementation of the study findings once the research is completed.

  • Synthesizing the Current Knowledge Base

In designing a new study, investigators should conduct a comprehensive review of the literature, critically appraise published studies, and synthesize what is known related to the research objectives. Specifically, investigators should summarize in the protocol what is known about the efficacy, effectiveness, and safety of the interventions and about the outcomes being studied. Furthermore, investigators should discuss measures used in prior research and whether these measures have changed over time. These descriptions will provide background on the knowledge base for the current protocol. It is equally important to identify which elements of the research problem are unknown because evidence is absent, insufficient, or conflicting.

For some research problems, systematic reviews of the literature may be available and can be useful resources to guide the study design. The AHRQ Evidence-based Practice Centers 2 and the Cochrane Collaboration 3 are examples of established programs that conduct thorough systematic reviews, technology assessments, and specialized comparative effectiveness reviews using standardized methods. When available, systematic reviews and technology assessments should be consulted as resources for investigators to assess the current knowledge base when designing new studies and working with stakeholders.

When reviewing the literature, investigators and stakeholders should identify the most relevant studies and guidelines about the interventions that will be studied. This will allow readers to understand how new research will add to the existing knowledge base. If guidelines are a source of information, then investigators should examine whether these guidelines have been updated to incorporate recent literature. In addition, investigators should assess the health sciences literature to determine what is known about expected effects of the interventions based on current understanding of the pathophysiology of the target condition. Furthermore, clinical experts should be consulted to help identify gaps in current knowledge based on their expertise and interactions with patients. Relevant questions to ask to assess the current knowledge base for development of an observational CER study protocol are:

  • What are the most relevant studies and guidelines about the interventions, and why are these studies relevant to the protocol (e.g., because of the study findings, time period conducted, populations studied, etc.)?
  • Are there differences in recommendations from clinical guidelines that would indicate clinical equipoise?
  • What else is known about the expected effects of the interventions based on current understanding of the pathophysiology of the targeted condition?
  • What do clinical experts say about gaps in current knowledge?

  • Conceptualizing the Research Problem

In designing studies for addressing stakeholder questions, investigators should engage multiple stakeholders in discussions about how the research problem is conceptualized from the stakeholders' perspectives. These discussions will aid in designing a study that can be used to inform decisionmaking. Together, investigators and stakeholders should work collaboratively to determine the major objectives of the study based on the health care decisions facing stakeholders. As pointed out by Heckman, 4 research objectives should be formalized outside considerations of available data and the inferences that can be made from various statistical estimation approaches. Doing so will allow the study objectives to be determined by stakeholder needs rather than the availability of existing data. A thorough discussion of these considerations is beyond the scope of this chapter, but some important considerations are summarized in supplement 1 of this User's Guide.

In order to conceptualize the problem, stakeholders and other experts should be asked to describe the potential relationships between the intervention and important health outcomes. This description will help researchers develop preliminary hypotheses about the stated relationships. Likewise, stakeholders, researchers, and other experts should be asked to enumerate all major assumptions that affect the conceptualization of the research problem, but will not be directly examined in the study. These assumptions should be described in the study protocol and in reporting final study results. By clearly stating the assumptions, protocol reviewers will be better able to assess how the assumptions may influence the study results.

Based on the conceptualization of the research problem, investigators and stakeholders should make use of applicable scientific theory in designing the study protocol and developing the analytic plan. Research that is designed using a validated theory has a higher potential to reach valid conclusions and improve the overall understanding of a phenomenon. In addition, theory will aid in the interpretation of the study findings, since these results can be put in context with the theory and with past research. Depending on the nature of the inquiry, theory from specific disciplines such as health behavior, sociology, or biology could be the basis for designing the study. In addition, the research team should work with stakeholders to develop a conceptual model or framework to guide the implementation of the study. The protocol should also contain one or more figures that summarize the conceptual model or framework as it applies to the study. These figures will allow readers to understand the theoretical or conceptual basis for the study and how the theory is operationalized for the specific study. The figures should diagram relationships between study variables and outcomes to help readers of the protocol visualize relationships that will be examined in the study.

For research questions about causal associations between exposures and outcomes, causal models such as directed acyclic graphs (DAGs) may be useful tools in designing the conceptual framework for the study and developing the analytic plan. The value of DAGs in the context of refining study questions is that they make assumptions explicit in ways that can clarify gaps in knowledge. Free software such as DAGitty is available for creating, editing, and analyzing causal models. A thorough discussion of DAGs is beyond the scope of this chapter, but more information about DAGs is available in supplement 2 of this User's Guide.
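
To make the idea concrete, the sketch below encodes a simple hypothetical causal structure (a shared cause of both treatment and outcome) as a directed graph. It uses Python's networkx library rather than the DAGitty tool mentioned above, and the variable names and edges are assumptions made purely for illustration.

    # Minimal sketch of a causal DAG for a hypothetical CER question.
    # Assumed structure: age influences both treatment choice and the outcome,
    # so it is a potential confounder the analysis would need to address.
    import networkx as nx

    dag = nx.DiGraph()
    dag.add_edges_from([
        ("age", "treatment"),      # assumption: age affects who receives the treatment
        ("age", "outcome"),        # assumption: age also affects the outcome directly
        ("treatment", "outcome"),  # the causal effect of interest
    ])

    assert nx.is_directed_acyclic_graph(dag)  # a causal diagram must contain no cycles

    # Nodes that directly influence both the exposure and the outcome are
    # candidate confounders (a full analysis would also consider indirect ancestors).
    confounders = set(dag.predecessors("treatment")) & set(dag.predecessors("outcome"))
    print("Candidate confounders:", confounders)  # -> {'age'}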

The following list of questions may be useful for defining and describing a study's conceptual framework in a CER protocol:

  • What are the main objectives of the study, as related to specific decisions to be made?
  • What are the major assumptions of decisionmakers, investigators, and other experts about the problem or phenomenon being studied?
  • What relationships, if any, do experts hypothesize exist between interventions and outcomes?
  • What is known about each element of the model?
  • Can relationships be expressed by causal diagrams?

  • Determining the Stage of Knowledge Development for the Study Design

The scientific method is a process of observation and experimentation through which the evidence base is expanded as new knowledge is developed. Therefore, stakeholders and investigators should consider whether a program of research comprising a sequential or concurrent series of studies, rather than a single study, is needed to adequately make a decision. Staging the research into multiple studies and making interim decisions may improve the final decision and make judicious use of scarce research resources. In some cases, the results of preliminary studies, descriptive epidemiology, or pilot work may be helpful in making interim decisions and designing further research. Overall, a planned series of related studies or a program of research may be needed to adequately address stakeholders' decisions.

An example of a structured program of research is the four phases of clinical studies used by the Food and Drug Administration (FDA) to reach a decision about whether or not a new drug is safe and efficacious for market approval in the United States. Using this analogy, the final decision about whether a drug is efficacious and safe to be marketed for specific medical indications is based upon the accumulation of scientific evidence from a series of studies (i.e., not from any individual study), which are conducted in multiple sequential phases. The evidence generated in each phase is reviewed to make interim decisions about the safety and efficacy of a new pharmaceutical until ultimately all the evidence is reviewed to make a final decision about drug approval.

Under the FDA model for decisionmaking, initial research involves laboratory and animal tests. If the evidence generated in these studies indicates that the drug is active and not toxic, the sponsor submits an application to the FDA for an “investigational new drug.” If the FDA approves, human testing for safety and efficacy can begin. The first phase of human testing is usually conducted in a limited number of healthy volunteers (phase 1). If these trials show evidence that the product is safe in healthy volunteers, then the drug is further studied in a small number of volunteers who have the targeted condition (phase 2). If phase 2 studies show that the drug has a therapeutic effect and lacks significant adverse effects, trials with large numbers of people are conducted to determine the drug's safety and efficacy (phase 3). Following these trials, all relevant scientific studies are submitted to the FDA for a decision about whether the drug should be approved for marketing. If there are additional considerations like special safety issues, observational studies may be required to assess the safety of the drug in routine clinical care after the drug is approved for marketing (phase 4). Overall, the decisionmaking and research are staged so that the cumulative findings from all studies are used by the FDA to make interim decisions until the final decision is made about whether a medical product will be approved for marketing.

While most decisions about the comparative effectiveness of interventions will not need such extensive testing, it still may be prudent to stage research in a way that allows for interim decisions and sequentially more rigorous studies. On the other hand, conditional approval or interim decisions may risk confusing patients and other stakeholders about the extent to which current evidence indicates that a treatment is effective and safe for all individuals with a health condition. For instance, under this staged approach, new treatments could rapidly diffuse into a market even when there is limited evidence of long-term effectiveness and safety for all potential users. An illustrative example of this is the case of lung-volume reduction surgery, which was increasingly being used to treat severe emphysema despite limited evidence supporting its safety and efficacy until new research raised questions about the safety of the procedure. 6

Below is one potential categorization for the stages of knowledge development as related to informing decisions about questions of comparative effectiveness:

  • Descriptive analysis
  • Hypothesis generation
  • Feasibility studies/proof of concept
  • Hypothesis supporting
  • Hypothesis testing

The first stages (i.e., descriptive analysis, hypothesis generation, and feasibility studies) are not mutually exclusive and usually are not intended to provide conclusive results for most decisions. Instead, these stages provide preliminary evidence or feasibility testing before larger, more resource-intensive studies are launched. Results from these categories of studies may allow for interim decisionmaking (e.g., conditional approval for reimbursement of a treatment while further research is conducted). While a phased approach to research may postpone the time when a conclusive decision can be reached, it does help to conserve resources, such as those that may be consumed in launching a large multicenter study when a smaller study may be sufficient. Investigators will need to engage stakeholders to prioritize what stage of research may be most useful for the practical range of decisions that will be made.

Investigators should discuss in the protocol what stage of knowledge the current study will fulfill in light of the actions available to different stakeholders. This will allow reviewers of the protocol to assess the degree to which the evidence generated in the study holds the potential to fill specific knowledge gaps. For studies that are described in the protocol as preliminary, this may also help readers understand other tradeoffs that were made in the design of the study, in terms of methodological limitations that were accepted a priori in order to gather preliminary information about the research questions.

  • Defining and Refining Study Questions Using PICOTS Framework

As recommended in other AHRQ methods guides, 7 investigators should engage stakeholders in a dialogue in order to understand the objectives of the research in practical terms, particularly so that investigators know the types of decisions that the research may affect. In working with stakeholders to develop research questions that can be studied with scientific methods, investigators may ask stakeholders to identify six key components of the research questions that will form the basis for designing the study. These components are reflected in the PICOTS typology and are shown below in Table 1.2. These components represent the critical elements that will help investigators design a study that will be able to address the stakeholders' needs. Additional references that expand upon how to frame research questions can be found in the literature. 8-9

Table 1.2. PICOTS typology for developing research questions.

The PICOTS typology outlines the key parts of the research questions that the study will be designed to address. 10 As a new research protocol is developed, these questions can be presented in preliminary form and refined as other steps in the process are implemented. After the preliminary questions are refined, investigators should examine the questions to make sure that they will meet the needs of the stakeholders. In addition, they should assess whether the questions can be answered within the timeframe allotted and with the resources that are available for the study.
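
As a rough illustration of how the PICOTS elements (population, intervention, comparator, outcomes, timing, and setting) might be recorded alongside a draft question while it is being refined, the sketch below uses a small Python data structure. The example question and its field values are hypothetical and are not drawn from the guide.

    # Minimal sketch: capturing the PICOTS elements of a draft CER question
    # so researchers and stakeholders can revise them iteratively.
    from dataclasses import dataclass

    @dataclass
    class PicotsQuestion:
        population: str
        intervention: str
        comparator: str
        outcomes: list
        timing: str
        setting: str

        def summary(self) -> str:
            return (f"In {self.population}, does {self.intervention} compared with "
                    f"{self.comparator} change {', '.join(self.outcomes)} "
                    f"over {self.timing} in {self.setting}?")

    # Hypothetical draft question, for illustration only.
    draft = PicotsQuestion(
        population="adults with newly diagnosed type 2 diabetes",
        intervention="drug A plus lifestyle counseling",
        comparator="drug A alone",
        outcomes=["HbA1c control", "hospitalization"],
        timing="12 months",
        setting="routine primary care",
    )
    print(draft.summary())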

Since stakeholders ultimately determine effectiveness, it is important for investigators to ensure that the study endpoints and outcomes will meet their needs. Stakeholders need to articulate to investigators the health outcomes that are most important for a particular stakeholder to make decisions about treatment or take other health care actions. The endpoints that stakeholders will use to determine effectiveness may vary considerably. Unlike efficacy trials, in which clinical endpoints and surrogate measures are frequently used to determine efficacy, effectiveness may need to be determined based on several measures, many of which are not biological. These endpoints may be categorized as clinical endpoints, patient-reported outcomes and quality of life, health resource utilization, and utility measures. Types of measures that could be used are mortality, morbidity and adverse effects, quality of life, costs, or multiple outcomes. Chapter 6 gives a more extensive discussion of potential outcome measures of effectiveness.

The reliability, validity, and accuracy of study instruments to validly measure the concepts they purport to measure will also need to be acceptable to stakeholders. For instance, if stakeholders are interested in quality of life as an outcome, but do not believe there is an adequate measure of quality of life, then measurement development may need to be done prior to study initiation or other measures will need to be identified by stakeholders.

  • Discussing Evidentiary Need and Uncertainty

Investigators and stakeholders should discuss the tradeoffs of different study designs that may be used for addressing the research questions. This dialogue will help researchers design a study that will be relevant and useful to the needs of stakeholders. All study designs have strengths and weaknesses, the latter of which may limit the conclusiveness of the final study results. Likewise, some decisions may require evidence that cannot be obtained from certain designs. In addition to design weaknesses, there are also practical tradeoffs that need to be considered in terms of research resources, like the time needed to complete the study, the availability of data, investigator expertise, subject recruitment, human subjects protection, research budget, difference to be detected, and lost-opportunity costs of doing the research instead of other studies that have priority for stakeholders. An important decision that will need to be made is whether or not randomization is needed for the questions being studied. There are several reasons why randomization might be needed, such as determining whether an FDA-approved drug can be used for a new use or indication that was not studied as part of the original drug approval process. A paper by Concato includes a thorough discussion of issues to consider when deciding whether randomization is necessary. 11

In discussing the tradeoffs of different study designs, researchers and stakeholders may wish to discuss the principal goals of research and ensure that researchers and stakeholders are aligned in their understanding of what is meant by scientific evidence. Fundamentally, research is a systematic investigation that uses scientific methods to measure, collect, and analyze data for the advancement of knowledge. This advancement is through the independent peer review and publication of study results, which are collectively referred to as scientific evidence. One definition of scientific evidence has been proposed by Normand and McNeil 12 as:

… the accumulation of information to support or refute a theory or hypothesis. … The idea is that assembling all the available information may reduce uncertainty about the effectiveness of the new technology compared to existing technologies in a setting where we believe particular relationships exist but are uncertain about their relevance …

While the primary aim of research is to produce new knowledge, the Normand and McNeil concept of evidence emphasizes that research helps create knowledge by reducing uncertainty about outcomes. However, rarely, if ever, does research eliminate all uncertainty around most decisions. In some cases, successful research will answer an important question and reduce uncertainty related to that question, but it may also increase uncertainty by leading to more, better-informed questions regarding unknowns. As a result, nearly all decisions face some level of uncertainty even in a field where a body of research has been completed. This distinction is also critical because it helps to separate the research from the subsequent actions that decisionmakers may take based on their assessment of the research results. Those subsequent actions may be informed by the research findings but will also be based on stakeholders' values and resources. Hence, as the definition by Normand and McNeil implies, research generates evidence but stakeholders decide whether to act on the evidence. Scientific evidence informs decisions to the extent it can adequately reduce the uncertainty about the problem for the stakeholder. Ultimately, treatment decisions are guided by an assessment of the certainty that a course of therapy will lead to the outcomes of interest and the likelihood that this conclusion will be affected by the results of future studies.

In conceptualizing a study design, it is important for investigators to understand what constitutes sufficient and valid evidence from the stakeholder's perspective. In other words, what is the type of evidence that will be required to inform the stakeholder's decision to act or make a conscious decision not to take action? Evidence needed for action may vary by type of stakeholder and the scope of decisions that the stakeholder is making. For instance, a stakeholder who is making a population-based decision, such as whether to provide insurance coverage for a new medical device with many alternatives, may need substantially robust research findings in order to take action and provide that insurance coverage. In this example, the stakeholder may only accept as evidence a study with strong internal validity and generalizability (i.e., one conducted in a nationally representative sample of patients with the disease). On the other hand, a patient who has a health condition where there are few treatments may be willing to accept lower-quality evidence in order to make a decision about whether to proceed with treatment despite a higher level of uncertainty about the outcome.

In many cases, there may exist a gradient of actions that can be taken based on available evidence. Quanstrum and Hayward 13 have discussed this gradient and argued that health care decisionmaking is changing, partly because more information is available to patients and other stakeholders about treatment options. As shown in the upper panel (A) in Figure 1.1, many people may currently believe that health care treatment decisions are basically uniform for most people and under most circumstances. Panel A represents a hypothetical treatment whereby there is an evidentiary threshold, or a point at which treatment is always beneficial and should be recommended. Below this threshold, on the other hand, care provides no benefits and treatment should be discouraged. Quanstrum and Hayward argue that health care decisions are increasingly more like the lower panel (B). This panel portrays health care treatments as having a large zone of discretion where benefits may be low or modest for most people. While treatment above this zone may always be recommended, individuals who fall within the zone may have questionable health benefits from treatment. As a result, different decisionmakers may take different actions based on their individual preferences.

Figure 1.1. Conceptualization of clinical decisionmaking. See Quanstrum KH, Hayward RA (Reference #). This figure is copyrighted by the Massachusetts Medical Society and reprinted with permission.

In light of this illustration, the following questions are suggested for discussion with stakeholders to help elicit the amount of uncertainty that is acceptable so that the study design can reach an appropriate level of evidence for the decision at hand:

  • What level of new scientific evidence does the decisionmaker need to make a decision or take action?
  • What quality of evidence is needed for the decisionmaker to act?
  • What level of certainty of the outcome is needed by the decisionmaker(s)?
  • How specific does the evidence need to be?
  • Will decisions require consensus of multiple parties?

Additional Considerations When Considering Evidentiary Needs

As mentioned earlier, different stakeholders may disagree on the usefulness of different research designs, but it should be pointed out that this disagreement may be because stakeholders have different scopes of decisions to make. For example, high-quality research that is conclusive may be needed to make a decision that will affect the entire nation. On the other hand, results with more uncertainty as to the magnitude of the effect estimate(s) may be acceptable in making some decisions such as those affecting fewer people or where the risks to health are low. Often this disagreement occurs when different stakeholders debate whether evidence is needed from a new randomized controlled trial or whether evidence can be obtained from an analysis of an existing database. In this debate, both sides need to clarify whether they are facing the same decision or the decisions are different, particularly in terms of their scope.

Groups committed to evidence-based decisionmaking recognize that scientific evidence is only one component of the process of making decisions. Evidence generation is the goal of research, but evidence alone is not the only facet of evidence-based decisionmaking. In addition to scientific evidence, decisionmaking involves the consideration of (a) values, particularly the values placed on benefits and harms, and (b) resources. 14 Stakeholder differences in values and resources may mean that different decisions are made based on the same scientific evidence. Moreover, differences in values may create conflict in the decisionmaking process. One stakeholder may believe a particular study outcome is most important from their perspective, while another stakeholder may believe a different outcome is the most important for determining effectiveness.

Likewise, there may be inherent conflicts in values between individual decisionmaking and population decisionmaking, even though these decisions are often interrelated. For example, an individual may have a higher tolerance for treatment risk in light of the expected treatment benefits for him or her. On the other hand, a regulatory health authority may determine that the population risk is too great without sufficient evidence that treatment provides benefits to the population. An example of this difference in perspective can be seen in how different decisionmakers responded to evidence about the drug Avastin® (bevacizumab) for the treatment of metastatic breast cancer. In this case, the FDA revoked its approval of the breast cancer indication for Avastin after concluding that the drug had not been shown to be safe and effective for that use. Nonetheless, Medicare, the public insurance program for the elderly and disabled, continued to allow coverage when a physician prescribes the drug, even for breast cancer. Likewise, some patient groups were reported to be concerned by the decision since it presumably would deny some women access to Avastin treatment. For a more thorough discussion of these issues around differences in perspective, the reader is referred to an article by Atkins 15 and the examples in Table 1.3 below.

Table 1.3. Examples of individual versus population decisions (adapted from Atkins, 2007).

  • Specifying Magnitude of Effect

In order for decisions to be objective, it is important to hold an a priori discussion with stakeholders about the magnitude of effect that they believe represents a meaningful difference between treatment options. Researchers will be familiar with the basic tenet that statistically significant differences do not always represent clinically meaningful differences. Hence, researchers and stakeholders will need to have knowledge of the instruments that are used to measure differences and the accuracy, limitations, and properties of those instruments. Three key questions are recommended for eliciting from stakeholders the effect sizes that are important to them for making a decision or taking action (a brief illustrative sketch follows the list):

  • How do patients and other stakeholders define a meaningful difference between interventions?
  • How do previous studies and reviews define a meaningful difference?
  • Are patients and other stakeholders interested in superiority or noninferiority as it relates to decisionmaking?
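
For instance, once stakeholders agree on the smallest difference they would consider meaningful, that value can be translated into the approximate sample size a study would need to detect it. The sketch below is a hypothetical illustration using Python's statsmodels library; the effect size and design parameters are assumptions, not recommendations from this guide.

    # Minimal sketch: translating a stakeholder-defined meaningful difference
    # (expressed as a standardized effect size) into a required sample size.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    n_per_group = analysis.solve_power(
        effect_size=0.3,          # assumed minimally meaningful standardized difference
        alpha=0.05,               # two-sided significance level
        power=0.80,               # desired probability of detecting the difference
        alternative="two-sided",
    )
    print(f"Approximate participants needed per group: {n_per_group:.0f}")
    # A smaller meaningful difference would require a substantially larger study,
    # which is one of the practical tradeoffs to discuss with stakeholders.
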
  • Challenges to Developing Study Questions and Initial Solutions

In developing CER study objectives and questions, there are some potential challenges that face researchers and stakeholders. The involvement of patients and other stakeholders in determining study objectives and questions is a relatively new paradigm, but one that is consistent with established principles of translational research. A key principle of translational research is that users need to be involved in research at the earliest stages for the research to be adopted. 16 In addition, most research is currently initiated by an investigator, and traditionally there have been few incentives (and some disincentives) to involving others in designing a new research study. Although the research paradigm is rapidly shifting, 17 there is little information about how to structure, process, and evaluate outcomes from initiatives that attempt to engage stakeholders in developing study questions and objectives with researchers. As different approaches are taken to involve stakeholders in the research process, researchers will learn how to optimize the process of stakeholder involvement and improve the applicability of research to the end-users.

Bringing stakeholders together may create some general challenges for the research team. For instance, it may be difficult to identify, engage, or manage all stakeholders who are interested in developing and using scientific evidence for addressing a problem. A process that allows for public commenting on research protocols through Internet postings may be helpful in reaching the widest network of interested stakeholders. Nevertheless, finding stakeholders who can represent all perspectives may not always be practical or possible for the study team. In addition, competing interests and differing needs among stakeholders may make prioritization of research questions challenging. Nonetheless, as the science of translational research evolves, the collaboration of researchers with stakeholders will likely become the standard of practice in designing new research.

To assist researchers and stakeholders with working together, AHRQ has published several online resources to facilitate the involvement of stakeholders in the research process. These include a brief guide for stakeholders that highlights opportunities for taking part in AHRQ's Effective Health Care Program, a facilitation primer with strategies for working with diverse stakeholder groups, a table of suggested tasks for researchers to involve stakeholders in the identification and prioritization of future research, and learning modules with slide presentations on engaging stakeholders in the Effective Health Care Program. 18 - 19 In addition, AHRQ supports the Evidence-based Practice Centers in working with various stakeholders to further develop and prioritize decisionmakers' future research needs, which are published in a series of reports on AHRQ's Web site and on the National Library of Medicine's open-access Bookshelf. 20

Likewise, AHRQ supports the active involvement of patients and other stakeholders in the AHRQ DEcIDE program, in which different models of engagement have been used. These models include hosting in-person meetings with stakeholders to create research agendas; 21 - 22 developing research based on questions posed by public payers such as Centers for Medicare and Medicaid Services; addressing knowledge gaps that have been identified in AHRQ systematic reviews through new research; and supporting five research consortia, each of which involves researchers, patients, and other stakeholders working together to develop, prioritize, and implement research studies.

  • Summary and Conclusion

This chapter provides a framework for formulating study objectives and questions for a research protocol on a CER topic. Implementation of the framework involves collaboration between researchers and stakeholders in conceptualizing the research objectives and questions and the design of the study. In this process, there is a shared commitment to protect the integrity of the research results from bias and conflicts of interest, so that the results are valid for informing decisions and health care actions. Due to the complexity of some health care decisions, the evidence needed for decisionmaking or action may need to be developed from multiple studies, including preliminary research that becomes the underpinning for larger studies. The principles described in this chapter are intended to strengthen the writing of research protocols and enhance the findings of the studies that follow, informing the important decisions facing patients, providers, and other stakeholders about health care treatments and new technologies. Subsequent chapters in this User's Guide provide specific principles for operationalizing the study objectives and research questions in writing a complete study protocol that can be executed as new research.

Checklist: Guidance and key considerations for developing study objectives and questions for observational CER protocols


Developing a Protocol for Observational Comparative Effectiveness Research: A User’s Guide is copyrighted by the Agency for Healthcare Research and Quality (AHRQ). The product and its contents may be used and incorporated into other materials on the following three conditions: (1) the contents are not changed in any way (including covers and front matter), (2) no fee is charged by the reproducer of the product or its contents for its use, and (3) the user obtains permission from the copyright holders identified therein for materials noted as copyrighted by others. The product may not be sold for profit or incorporated into any profitmaking venture without the expressed written permission of AHRQ.


Enago Academy

How to Develop a Good Research Question? — Types & Examples


Cecilia is going through a tough stretch in her research life. Figuring out where to begin, how to start her research study, and how to pose the right question for her research quest is driving her insane. Questions, if not asked correctly, have a tendency to send us into a spiral!


Questions lead everyone to answers, and research is a quest to find answers. Not the vague questions Cecilia is struggling with, but focused questions that define your research. Therefore, asking the appropriate question becomes an important matter of discussion.

A well-begun research process requires a strong research question. It directs the research investigation and provides a clear goal to focus on. Understanding what makes a good research question will generate new ideas and help you discover new methods in research.

In this article, we aim to help researchers understand what a research question is and how to write one, with examples.


What Is a Research Question?

A good research question defines your study and helps you seek an answer to your research problem. Moreover, a clear research question guides the research paper or thesis to define exactly what you want to find out, giving your work its objective. Learning to write a research question is the beginning of any thesis, dissertation, or research paper. Furthermore, the question addresses issues or problems that are answered through analysis and interpretation of data.

Why Is a Research Question Important?

A strong research question guides the design of a study. Moreover, it helps determine the type of research and identify specific objectives. Research questions state the specific issue you are addressing and focus the research on outcomes that readers can learn from. A clear question therefore helps break the study into manageable steps for completing the objectives and answering the initial question.

Types of Research Questions

Research questions can be categorized into different types, depending on the kind of research you want to undertake. Knowing the type of research will help you determine the best type of research question to use.

1. Qualitative Research Question

Qualitative questions concern broad or more specific areas of research. However, unlike quantitative questions, qualitative research questions are adaptable, non-directional, and more flexible. Qualitative research questions focus on discovering, explaining, elucidating, and exploring.

i. Exploratory Questions

This form of question looks to understand something without influencing the results. The objective of exploratory questions is to learn more about a topic without attributing bias or preconceived notions to it.

Research Question Example: Asking how a chemical is used, or how a certain topic is perceived.

ii. Predictive Questions

Predictive research questions seek to understand the intent or likely future outcome surrounding a topic.

Research Question Example: Asking why a consumer behaves in a certain way or chooses a certain option over another.

iii. Interpretive Questions

This type of research question allows the study of people in the natural setting. The questions help understand how a group makes sense of shared experiences with regards to various phenomena. These studies gather feedback on a group’s behavior without affecting the outcome.

Research Question Example: How do you feel about AI assisting publishing process in your research?

2. Quantitative Research Question

Quantitative questions prove or disprove a researcher’s hypothesis through descriptions, comparisons, and relationships. These questions are beneficial when choosing a research topic or when posing follow-up questions that garner more information.

i. Descriptive Questions

It is the most basic type of quantitative research question and it seeks to explain when, where, why, or how something occurred. Moreover, they use data and statistics to describe an event or phenomenon.

Research Question Example: How many generations of genes influence a future generation?

ii. Comparative Questions

Sometimes it’s beneficial to compare one occurrence with another. Comparative questions are therefore helpful when studying differences between groups on one or more dependent variables.

Example: Do men and women have comparable metabolisms?

iii. Relationship-Based Questions

This type of research question examines the influence of one variable on another; experimental studies mainly use this type of research question.

Example: How do drought conditions affect a region’s probability of wildfires?

How to Write a Good Research Question?


1. Select a Topic

The first step towards writing a good research question is to choose a broad topic of research. Pick a topic that interests you, because the entire study will develop from the research question. Choosing a topic you are passionate about will also make the research more enjoyable.

2. Conduct Preliminary Research

After finalizing the topic, review the studies that have already been conducted in the field. This will help you identify articles that point to topics not yet explored, which you can then pursue.

3. Consider Your Audience

The most important aspect of writing a good research question is to find out whether there is an audience interested in the answer to the question you are proposing. Determining your audience will also help you refine your research question and focus on aspects that relate to defined groups.

4. Generate Potential Questions

The best way to generate potential questions is to ask open-ended questions. Questioning broader topics will allow you to narrow down to specific questions. Identifying gaps in the literature can also suggest topics for your research question. You could also challenge existing assumptions or use personal experiences to redefine issues in research.

5. Review Your Questions

Once you have listed a few questions, evaluate them to find out whether they are effective research questions. While reviewing, go through the finer details of each question and its probable outcome, and check whether the question meets the criteria for a good research question.

6. Construct Your Research Question

There are two frameworks you can use to construct your research question; a small template sketch based on both follows the two lists below. The first is the PICOT framework, which stands for:

  • Population or problem
  • Intervention or indicator being studied
  • Comparison group
  • Outcome of interest
  • Time frame of the study.

The second framework is PEO, which stands for:

  • Population being studied
  • Exposure to preexisting conditions
  • Outcome of interest.
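To see how these frameworks translate into an actual question, here is a small, hypothetical Python sketch that fills in PICOT and PEO components to produce a draft question. The function names and the example study details are illustrative assumptions, not part of either framework’s formal definition.

```python
# Illustrative sketch: drafting research questions from PICOT and PEO components.
# The example values are hypothetical; adapt the wording to your own study.

def picot_question(population, intervention, comparison, outcome, time_frame):
    """Assemble a draft PICOT-style research question."""
    return (f"In {population}, does {intervention}, compared with {comparison}, "
            f"improve {outcome} over {time_frame}?")

def peo_question(population, exposure, outcome):
    """Assemble a draft PEO-style research question."""
    return f"In {population}, how is {exposure} associated with {outcome}?"

if __name__ == "__main__":
    print(picot_question(
        population="adults with type 2 diabetes",
        intervention="weekly telephone coaching",
        comparison="usual care",
        outcome="medication adherence",
        time_frame="12 months",
    ))
    print(peo_question(
        population="first-year PhD students",
        exposure="structured mentoring programs",
        outcome="reported research productivity",
    ))
```

A template like this is only a drafting aid; you would still refine the wording so the final question is focused, clear, and feasible.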

Research Question Examples

  • How might the discovery of a genetic basis for alcoholism impact triage processes in medical facilities?
  • How do ecological systems respond to chronic anthropogenic disturbance?
  • What are the demographic consequences of ecological interactions?
  • What roles do fungi play in wildfire recovery?
  • How do feedbacks reinforce patterns of genetic divergence on the landscape?
  • What educational strategies help encourage safe driving in young adults?
  • What makes a grocery store easy for shoppers to navigate?
  • What genetic factors predict if someone will develop hypothyroidism?
  • Does contemporary evolution along the gradients of global change alter ecosystem function?

How did you write your first research question? What were the steps you followed to create a strong research question? Do write to us or comment below.

Frequently Asked Questions


How do you write qualitative research questions?

Qualitative research questions aim to explore the richness and depth of participants' experiences and perspectives. They should guide your research and allow for in-depth exploration of the phenomenon under investigation. After identifying the research topic and the purpose of your research:

  • Begin with Broad Inquiry: Start with a general research question that captures the main focus of your study. This question should be open-ended and allow for exploration.
  • Break Down the Main Question: Identify specific aspects or dimensions related to the main research question that you want to investigate.
  • Formulate Sub-questions: Create sub-questions that delve deeper into each specific aspect or dimension identified in the previous step.
  • Ensure Open-endedness: Make sure your research questions are open-ended and allow for varied responses and perspectives. Avoid questions that can be answered with a simple "yes" or "no." Encourage participants to share their experiences, opinions, and perceptions in their own words.
  • Refine and Review: Review your research questions to ensure they align with your research purpose, topic, and objectives. Seek feedback from your research advisor or peers to refine and improve your research questions.

How do you develop a research question?

Developing research questions requires careful consideration of the research topic, objectives, and the type of study you intend to conduct. Here are the steps to help you develop effective research questions:

  1. Select a Topic
  2. Conduct Preliminary Research
  3. Consider Your Audience
  4. Generate Potential Questions
  5. Review Your Questions
  6. Construct Your Research Question Based on the PICOT or PEO Framework



Grad Coach

Research Question 101 📖

Everything you need to know to write a high-quality research question

By: Derek Jansen (MBA) | Reviewed By: Dr. Eunice Rautenbach | October 2023

If you’ve landed on this page, you’re probably asking yourself, “What is a research question?”. Well, you’ve come to the right place. In this post, we’ll explain what a research question is, how it’s different from a research aim, and how to craft a high-quality research question that sets you up for success.

Research Question 101

  • What is a research question?

  • Research questions vs research aims
  • The 4 types of research questions
  • How to write a research question
  • Frequently asked questions
  • Examples of research questions

As the name suggests, the research question is the core question (or set of questions) that your study will (attempt to) answer .

In many ways, a research question is akin to a target in archery . Without a clear target, you won’t know where to concentrate your efforts and focus. Essentially, your research question acts as the guiding light throughout your project and informs every choice you make along the way.

Let’s look at some examples:

What impact does social media usage have on the mental health of teenagers in New York?
How does the introduction of a minimum wage affect employment levels in small businesses in outer London?
How does the portrayal of women in 19th-century American literature reflect the societal attitudes of the time?
What are the long-term effects of intermittent fasting on heart health in adults?

As you can see in these examples, research questions are clear, specific questions that can be feasibly answered within a study. These are important attributes and we’ll discuss each of them in more detail a little later . If you’d like to see more examples of research questions, you can find our RQ mega-list here .


Research Questions vs Research Aims

At this point, you might be asking yourself, “ How is a research question different from a research aim? ”. Within any given study, the research aim and research question (or questions) are tightly intertwined , but they are separate things . Let’s unpack that a little.

A research aim is typically broader in nature and outlines what you hope to achieve with your research. It doesn’t ask a specific question but rather gives a summary of what you intend to explore.

The research question, on the other hand, is much more focused . It’s the specific query you’re setting out to answer. It narrows down the research aim into a detailed, researchable question that will guide your study’s methods and analysis.

Let’s look at an example:

Research Aim: To explore the effects of climate change on marine life in Southern Africa.
Research Question: How does ocean acidification caused by climate change affect the reproduction rates of coral reefs?

As you can see, the research aim gives you a general focus , while the research question details exactly what you want to find out.


Types of research questions

Now that we’ve defined what a research question is, let’s look at the different types of research questions that you might come across. Broadly speaking, there are (at least) four different types of research questions – descriptive , comparative , relational , and explanatory . 

Descriptive questions ask what is happening. In other words, they seek to describe a phenomenon or situation. An example of a descriptive research question could be something like “What types of exercise do high-performing UK executives engage in?”. This would likely be a bit too basic to form an interesting study, but as you can see, the research question is just focused on the what – in other words, it just describes the situation.

Comparative research questions , on the other hand, look to understand the way in which two or more things differ , or how they’re similar. An example of a comparative research question might be something like “How do exercise preferences vary between middle-aged men across three American cities?”. As you can see, this question seeks to compare the differences (or similarities) in behaviour between different groups.

Next up, we’ve got explanatory research questions, which ask why or how something is happening. While the other types of questions we looked at focused on the what, explanatory research questions are interested in the why and how. As an example, an explanatory research question might ask something like “Why have bee populations declined in Germany over the last 5 years?”. As you can see, this question is aimed squarely at the why, rather than the what.

Last but not least, we have relational research questions . As the name suggests, these types of research questions seek to explore the relationships between variables . Here, an example could be something like “What is the relationship between X and Y” or “Does A have an impact on B”. As you can see, these types of research questions are interested in understanding how constructs or variables are connected , and perhaps, whether one thing causes another.

Of course, depending on how fine-grained you want to get, you can argue that there are many more types of research questions , but these four categories give you a broad idea of the different flavours that exist out there. It’s also worth pointing out that a research question doesn’t need to fit perfectly into one category – in many cases, a research question might overlap into more than just one category and that’s okay.

The key takeaway here is that research questions can take many different forms , and it’s useful to understand the nature of your research question so that you can align your research methodology accordingly.

Free Webinar: Research Methodology 101

How To Write A Research Question

As we alluded to earlier, a well-crafted research question needs to possess very specific attributes, including focus, clarity and feasibility. But that’s not all – a rock-solid research question also needs to be rooted and aligned. Let’s look at each of these.

Focused

A strong research question typically has a single focus. So, don’t try to cram multiple questions into one research question; rather split them up into separate questions (or even subquestions), each with their own specific focus. As a rule of thumb, narrow beats broad when it comes to research questions.

Clear and specific

A good research question is clear and specific, not vague and broad. State clearly exactly what you want to find out so that any reader can quickly understand what you’re looking to achieve with your study. Along the same vein, try to avoid using bulky language and jargon – aim for clarity.

Feasible

Unfortunately, even a super tantalising and thought-provoking research question has little value if you cannot feasibly answer it. So, think about the methodological implications of your research question while you’re crafting it. Most importantly, make sure that you know exactly what data you’ll need (primary or secondary) and how you’ll analyse that data.

Rooted in a research gap

A good research question (and a research topic, more broadly) should be rooted in a clear research gap and research problem. Without a well-defined research gap, you risk wasting your effort pursuing a question that’s already been adequately answered (and agreed upon) by the research community. A well-argued research gap lies at the heart of a valuable study, so make sure you have your gap clearly articulated and that your research question directly links to it.

Aligned with the research aim

As we mentioned earlier, your research aim and research question are (or at least, should be) tightly linked. So, make sure that your research question (or set of questions) aligns with your research aim. If not, you’ll need to revise one of the two to achieve this.

FAQ: Research Questions

How many research questions should I have?

What should I avoid when writing a research question?

Can a research question be a statement?

Typically, a research question is phrased as a question, not a statement. A question clearly indicates what you’re setting out to discover.

Can a research question be too broad or too narrow?

Yes. A question that’s too broad makes your research unfocused, while a question that’s too narrow limits the scope of your study.

Here’s an example of a research question that’s too broad:

“Why is mental health important?”

Conversely, here’s an example of a research question that’s likely too narrow:

“What is the impact of sleep deprivation on the exam scores of 19-year-old males in London studying maths at The Open University?”

Can I change my research question during the research process?

How do I know if my research question is good?

A good research question is focused, specific, practical, rooted in a research gap, and aligned with the research aim. If your question meets these criteria, it’s likely a strong question.

Is a research question similar to a hypothesis?

Not quite. A hypothesis is a testable statement that predicts an outcome, while a research question is a query that you’re trying to answer through your study. Naturally, there can be linkages between a study’s research questions and hypothesis, but they serve different functions.

How are research questions and research objectives related?

The research question is a focused and specific query that your study aims to answer. It’s the central issue you’re investigating. The research objective, on the other hand, outlines the steps you’ll take to answer your research question. Research objectives are often more action-oriented and can be broken down into smaller tasks that guide your research process. In a sense, they’re something of a roadmap that helps you answer your research question.

Need some inspiration?

If you’d like to see more examples of research questions, check out our research question mega list here .  Alternatively, if you’d like 1-on-1 help developing a high-quality research question, consider our private coaching service .



Types of quantitative research question

Dissertations that are based on a quantitative research design attempt to answer at least one quantitative research question . In some cases, these quantitative research questions will be followed by either research hypotheses or null hypotheses . However, this article focuses solely on quantitative research questions. Furthermore, since there is more than one type of quantitative research question that you can attempt to answer in a dissertation (i.e., descriptive research questions, comparative research questions and relationship-based research questions), we discuss each of these in this article. If you do not know much about quantitative research and quantitative research questions at this stage, we would recommend that you first read the article, Quantitative research questions: What do I have to think about , as well as an overview article on types of variables , which will help to familiarise you with terms such as dependent and independent variable , as well as categorical and continuous variables [see the article: Types of variables ]. The purpose of this article is to introduce you to the three different types of quantitative research question (i.e., descriptive, comparative and relationship-based research questions) so that you can understand what type(s) of quantitative research question you want to create in your dissertation. Each of these types of quantitative research question is discussed in turn:

  • Descriptive research questions
  • Comparative research questions
  • Relationship-based research questions

Descriptive research questions simply aim to describe the variables you are measuring. When we use the word describe, we mean that these research questions aim to quantify the variables you are interested in. Think of research questions that start with words such as "How much?", "How often?", "What percentage?", and "What proportion?", but also sometimes questions starting "What is?" and "What are?". Often, descriptive research questions focus on only one variable and one group, but they can include multiple variables and groups. We provide some examples below:

  • How many calories do American men and women consume per day?
  • How many photos do Facebook users upload, and how many comments do they post on other users' photos, each week?
  • What are the most important factors that influence the career choices of Australian university students?

In each of these example descriptive research questions, we are quantifying the variables we are interested in. However, the units that we used to quantify these variables will differ depending on what is being measured. For example, in the questions above, we are interested in frequencies (also known as counts), such as the number of calories, photos uploaded, or comments on other users' photos. In the case of the final question, What are the most important factors that influence the career choices of Australian university students?, we are interested in the number of times each factor (e.g., salary and benefits, career prospects, physical working conditions, etc.) was ranked on a scale of 1 to 10 (with 1 = least important and 10 = most important). We may then choose to examine this data by presenting the frequencies, as well as using a measure of central tendency and a measure of spread [see the section on Data Analysis to learn more about these and other statistical tests].

However, it is also common when using descriptive research questions to measure percentages and proportions, so we have included an example descriptive research question below that illustrates this:

  • What percentage of American men and women exceed their daily calorific allowance?

In terms of this descriptive research question about daily calorific intake, we are not necessarily interested in frequencies, or in using a measure of central tendency or measure of spread, but instead want to understand what percentage of American men and women exceed their daily calorific allowance. In this respect, this descriptive research question differs from the earlier question that asked: How many calories do American men and women consume per day? Whilst that question simply wants to measure the total number of calories (i.e., the How many calories part that starts the question), this question aims to measure excess; that is, what percentage of these two groups (i.e., American men and American women) exceed their daily calorific allowance, which is different for males (around 2500 calories per day) and females (around 2000 calories per day).
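To illustrate the kind of simple analysis such a descriptive question implies, the short Python sketch below uses invented calorie data to compute a measure of central tendency and the percentage of each group exceeding its daily calorific allowance; the numbers are purely hypothetical.

```python
from statistics import mean, median

# Hypothetical daily calorie intakes for two groups (invented data).
intake = {
    "American men":   [2300, 2650, 2800, 2450, 2900, 2400],
    "American women": [1900, 2100, 2250, 1850, 1950, 2300],
}
allowance = {"American men": 2500, "American women": 2000}

for group, values in intake.items():
    over = [v for v in values if v > allowance[group]]
    pct_over = 100 * len(over) / len(values)
    print(f"{group}: mean={mean(values):.0f} kcal, median={median(values):.0f} kcal, "
          f"{pct_over:.0f}% exceed the {allowance[group]} kcal allowance")
```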

If you are performing a piece of descriptive , quantitative research for your dissertation, you are likely to need to set quite a number of descriptive research questions . However, if you are using an experimental or quasi-experimental research design , or a more involved relationship-based research design , you are more likely to use just one or two descriptive research questions as a means to providing background to the topic you are studying, helping to give additional context for comparative research questions and/or relationship-based research questions that follow.

Comparative research questions aim to examine the differences between two or more groups on one or more dependent variables (although often just a single dependent variable). Such questions typically start by asking "What is the difference in?" a particular dependent variable (e.g., daily calorific intake) between two or more groups (e.g., American men and American women). Examples of comparative research questions include:

  • What is the difference in the daily calorific intake of American men and women?
  • What is the difference in the Facebook usage behaviour (e.g., logins, weekly photo uploads, status changes) of Australian undergraduate and graduate students?
  • What is the difference in attitudes towards music piracy between people who acquire pirated music that is freely distributed and those who purchase pirated music?

Groups reflect different categories of the independent variable you are measuring (e.g., American men and women = "gender"; Australian undergraduate and graduate students = "educational level"; pirated music that is freely distributed and pirated music that is purchased = "method of illegal music acquisition").

Comparative research questions also differ in terms of their relative complexity , by which we are referring to how many items/measures make up the dependent variable or how many dependent variables are investigated. Indeed, the examples highlight the difference between very simple comparative research questions where the dependent variable involves just a single measure/item (e.g., daily calorific intake) and potentially more complex questions where the dependent variable is made up of multiple items (e.g., Facebook usage behaviour including a wide range of items, such as logins, weekly photo uploads, status changes, etc.); or where each of these items should be written out as dependent variables.

Overall, whilst the dependent variable(s) highlight what you are interested in studying (e.g., attitudes towards music piracy, perceptions towards Internet banking security), comparative research questions are particularly appropriate if your dissertation aims to examine the differences between two or more groups (e.g., men and women, adolescents and pensioners, managers and non-managers, etc.).
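As a rough illustration of how a comparative research question translates into analysis, the Python sketch below compares two groups on a single dependent variable using invented data; the Welch t-statistic shown is just one way such a difference might be tested, and the group values are hypothetical.

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical data for: "What is the difference in weekly Facebook photo
# uploads between Australian undergraduate and graduate students?"
undergraduates = [4, 7, 2, 9, 5, 6, 3]
graduates      = [2, 3, 1, 4, 2, 5, 3]

m1, m2 = mean(undergraduates), mean(graduates)
diff = m1 - m2

# Welch's t-statistic for the difference in means; a statistics package
# would normally supply the degrees of freedom and p-value as well.
t = diff / sqrt(stdev(undergraduates) ** 2 / len(undergraduates)
                + stdev(graduates) ** 2 / len(graduates))

print(f"Undergraduates: {m1:.1f} uploads/week, graduates: {m2:.1f} uploads/week, "
      f"difference: {diff:.1f}, t = {t:.2f}")
```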

Relationship research questions

Whilst we refer to this type of quantitative research question as a relationship-based research question, the word relationship should be treated simply as a useful way of describing the fact that these types of quantitative research question are interested in the causal relationships, associations, trends and/or interactions amongst two or more variables on one or more groups. We have to be careful when using the word relationship because, strictly speaking, causal relationships can only be established with a particular type of research design, namely an experimental research design, in which it is possible to measure the cause and effect between two or more variables; that is, to say that variable A (e.g., study time) was responsible for an increase in variable B (e.g., exam scores). However, at the undergraduate and even master's level, dissertations rarely involve experimental research designs, but rather quasi-experimental and relationship-based research designs [see the section on Quantitative research designs]. This means that you cannot often find causal relationships between variables, but only associations or trends.

However, when we write a relationship-based research question, we do not have to make this distinction between causal relationships, associations, trends and interactions (i.e., it is just something that you should keep in the back of your mind). Instead, we typically start a relationship-based quantitative research question with "What is the relationship?", usually followed by the words "between or amongst", then list the independent variables (e.g., gender) and dependent variables (e.g., attitudes towards music piracy), "amongst or between" the group(s) you are focusing on. Examples of relationship-based research questions are:

  • What is the relationship between gender and attitudes towards music piracy amongst university students?
  • What is the relationship between study time and exam scores amongst university students?

As the examples above highlight, relationship-based research questions are appropriate to set when we are interested in the relationship, association, trend, or interaction between one or more dependent (e.g., exam scores) and independent (e.g., study time) variables, whether on one or more groups (e.g., university students).

The quantitative research design that we select subsequently determines whether we look for relationships , associations , trends or interactions . To learn how to structure (i.e., write out) each of these three types of quantitative research question (i.e., descriptive, comparative, relationship-based research questions), see the article: How to structure quantitative research questions .
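To illustrate what answering a relationship-based research question can involve, the sketch below computes Pearson's correlation between study time and exam scores using invented data; as noted above, a correlation of this kind indicates an association or trend, not a causal relationship.

```python
from math import sqrt

# Hypothetical data for: "What is the relationship between study time and
# exam scores amongst university students?" (invented values)
study_hours = [2, 4, 5, 7, 8, 10, 12]
exam_scores = [55, 60, 62, 70, 74, 80, 85]

n = len(study_hours)
mx = sum(study_hours) / n
my = sum(exam_scores) / n

# Pearson's r: covariance divided by the product of the standard deviations.
cov = sum((x - mx) * (y - my) for x, y in zip(study_hours, exam_scores))
sx = sqrt(sum((x - mx) ** 2 for x in study_hours))
sy = sqrt(sum((y - my) ** 2 for y in exam_scores))
r = cov / (sx * sy)

print(f"Pearson's r between study time and exam score: {r:.2f}")
```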


How to Write Quantitative Research Questions: Types With Examples

For research to be effective, it is crucial to formulate the quantitative research questions correctly. Otherwise, you will not get the answers you are looking for.

Has it ever happened that you conducted a quantitative research study and found that the actual results were quite different from the results you expected?

This can happen for many reasons, such as the unpredictable nature of respondents, errors in calculation, or research bias. Most often, though, quantitative research fails to provide reliable results because the questions are not written correctly.

We get it! Structuring the quantitative research questions can be a difficult task.

Hence, in this blog, we will share a few bits of advice on how to write good quantitative research questions. We will also look at different types of quantitative research questions along with their examples.

Let’s start:

How to Write Quantitative Research Questions?

When you want to obtain actionable insight into the trends and patterns of the research topic to make sense of it, quantitative research questions are your best bet.

Being objective in nature, these questions provide you with detailed information about the research topic and help in collecting quantifiable data that can be easily analyzed. This data can be generalized to the entire population and help make data-driven and sound decisions.

Respondents find it easier to answer quantitative survey questions than qualitative questions . At the same time, researchers can also analyze them quickly using various statistical models.

However, when it comes to writing the quantitative research questions, one can get a little overwhelmed as the entire study depends on the types of questions used.

There is no “one good way” to prepare these questions. However, to design well-structured quantitative research questions, you can follow the 4-steps approach given below:

1. Select the Type of Quantitative Question

The first step is to determine which type of quantitative question you want to add to your study. There are three types of quantitative questions:

  • Descriptive
  • Comparative 
  • Relationship-based

This will help you choose the correct words and phrases while constructing the question. At the same time, it will also assist readers in understanding the question correctly.

2. Identify the Type of Variable

The second step involves identifying the type of variable you are trying to measure, manipulate, or control. Basically, there are two types of variables:

  • Independent variable (a variable that is being manipulated)
  • Dependent variable (outcome variable)


If you plan to use descriptive research questions, you will generally deal only with dependent variables. However, if you plan to create comparative or relationship-based research questions, you will deal with both dependent and independent variables.

3. Select the Suitable Structure

The next step is determining the structure of the research question. It involves:

  • Identifying the components of the question: the type of dependent or independent variable and the group of interest (the group from which the researcher aims to draw conclusions about the population).
  • The number of different components used, for example, how many variables and groups are being examined.
  • Order in which these are presented. For example, the independent variable before the dependent variable or vice versa.

4. Draft the Complete Research Question

The last step involves stating the problem or issue you are trying to address in the form of a complete quantitative survey question. Also, build an exhaustive list of response options so that respondents can select the answer that fits them. If you omit important answer options, the ones respondents choose may not accurately reflect their views.
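To pull the four steps together, here is a small, hypothetical Python sketch that assembles a draft question from a chosen question type, variables, and group of interest, and pairs it with an exhaustive list of response options. The templates, placeholder names, and example values are illustrative assumptions, not a feature of any particular survey tool.

```python
# Illustrative sketch of the four-step approach: pick a question type, name the
# variables and group of interest, choose a structure, then draft the question
# with an exhaustive, mutually exclusive set of response options (all hypothetical).

STRUCTURES = {
    "descriptive":  "How often do {group} {dependent}?",
    "comparative":  "What is the difference in {dependent} between {groups}?",
    "relationship": "What is the relationship between {independent} and {dependent} amongst {group}?",
}

def draft_question(q_type, **components):
    """Fill the chosen structure with the named components."""
    return STRUCTURES[q_type].format(**components)

question = draft_question(
    "descriptive",
    group="customers",
    dependent="shop for groceries online",
)
response_options = ["Never", "Less than once a month", "1-3 times a month",
                    "Weekly", "More than once a week"]

print(question)
print("Response options:", ", ".join(response_options))
```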

Types of Quantitative Research Questions With Examples

Quantitative research questions are generally used to answer the “who” and “what” of the research topic. For quantitative research to be effective, it is crucial that the respondents are able to answer your questions concisely and precisely. With that in mind, let’s look in greater detail at the three types of formats you can use when preparing quantitative market research questions.

1. Descriptive

Descriptive research questions are used to collect participants’ opinions about the variable that you want to quantify. It is the most effortless way to measure the particular variable (single or multiple variables) you are interested in on a large scale. Usually, descriptive research questions begin with “ how much,” “how often,” “what percentage,” “what proportion,” etc.

Examples of descriptive research questions include:

2. Comparative

Comparative research questions help you identify the difference between two or more groups based on one or more variables. In general, a comparative research question is used to quantify one variable; however, you can use two or more variables depending on your market research objectives.

Comparative research questions examples include:

3. Relationship-based

Relationship research questions are used to identify trends, causal relationships, or associations between two or more variables. It is not vital to distinguish between causal relationships, trends, or associations while using these types of questions. These questions begin with “What is the relationship” between independent and dependent variables, amongst or between two or more groups.

Relationship-based quantitative questions examples include:

Ready to Write Your Quantitative Research Questions?

So, there you have it. It was all about quantitative research question types and their examples. By now, you must have figured out a way to write quantitative research questions for your survey to collect actionable customer feedback.

Now, the only thing you need is a good survey maker tool, like ProProfs Survey Maker, to streamline the process of designing and conducting your surveys. You also get access to various survey question types, both qualitative and quantitative, that you can add to any kind of survey, along with professionally designed survey templates.



  • Methodology
  • Open access
  • Published: 04 May 2016

Using qualitative comparative analysis in a systematic review of a complex intervention

  • Leila Kahwati 1 ,
  • Sara Jacobs 1 ,
  • Heather Kane 1 ,
  • Megan Lewis 1 ,
  • Meera Viswanathan 1 &
  • Carol E. Golin 2  

Systematic Reviews volume 5, Article number: 82 (2016)


Systematic reviews evaluating complex interventions often encounter substantial clinical heterogeneity in intervention components and implementation features making synthesis challenging. Qualitative comparative analysis (QCA) is a non-probabilistic method that uses mathematical set theory to study complex phenomena; it has been proposed as a potential method to complement traditional evidence synthesis in reviews of complex interventions to identify key intervention components or implementation features that might explain effectiveness or ineffectiveness. The objective of this study was to describe our approach in detail and examine the suitability of using QCA within the context of a systematic review.

We used data from a completed systematic review of behavioral interventions to improve medication adherence to conduct two substantive analyses using QCA. The first analysis sought to identify combinations of nine behavior change techniques/components (BCTs) found among effective interventions, and the second analysis sought to identify combinations of five implementation features (e.g., agent, target, mode, time span, exposure) found among effective interventions. For each substantive analysis, we reframed the review’s research questions to be designed for use with QCA, calibrated sets (i.e., transformed raw data into data used in analysis), and identified the necessary and/or sufficient combinations of BCTs and implementation features found in effective interventions.

Our application of QCA for each substantive analysis is described in detail. We extended the original review findings by identifying seven combinations of BCTs and four combinations of implementation features that were sufficient for improving adherence. We found reasonable alignment between several systematic review steps and processes used in QCA except that typical approaches to study abstraction for some intervention components and features did not support a robust calibration for QCA.

Conclusions

QCA was suitable for use within a systematic review of medication adherence interventions and offered insights beyond the single dimension stratifications used in the original completed review. Future prospective use of QCA during a review is needed to determine the optimal way to efficiently integrate QCA into existing approaches to evidence synthesis of complex interventions.

Background

Systematic reviews evaluating complex or multicomponent interventions often encounter substantial clinical heterogeneity in intervention components, settings, and populations studied, which often contribute to heterogeneity of effect size. Complex interventions are those that include multiple components that often but do not necessarily interact with each other [ 1 – 4 ]. The UK Medical Research Council suggests that characteristics such as the number and difficulty of behaviors required by those delivering or receiving the intervention, the number and variability of targeted outcomes, and the degree of flexibility of tailoring of the intervention all contribute to an intervention’s complexity [ 5 ]. In addition to the number of components an intervention has, complexity can also refer to properties of the system in which an intervention is implemented, such as setting, number of actors involved, and intervention target characteristics [ 6 , 7 ]. Further, an intervention may employ multiple and varied implementation strategies [ 7 ]. As a result of these myriad sources of potential variation, complex interventions with a common underlying purpose may differ quite substantially from each other in form or function when implemented.

Accordingly, systematic review investigators face substantial methodological challenges to synthesizing bodies of evidence comprised of complex interventions [ 7 ]. Estimating summary effects via quantitative synthesis is often not possible because of heterogeneity. Reviewers may ignore underlying variation by only addressing an overall question of effectiveness (e.g., do these types of interventions work?), or reviewers may stratify the synthesis based on one or more aspects of variation, such as a specific intervention component, outcome, population, or setting [ 7 ]. However, multicomponent interventions with interdependent components may not be suitable for separation into distinct components, and assumptions about linear and additive effects of multiple components may not be valid [ 8 ]. Methods that can systematically explore heterogeneity based on an assumption of causal complexity and that can provide an analytic link between heterogeneity and outcomes would offer an enhancement to current systematic review methods.

Qualitative comparative analysis (QCA) is a case-oriented method to study complex phenomena originating from the comparative social sciences [ 9 ]; it has been proposed as a potential method for synthesizing evidence within systematic reviews [ 7 , 10 ]. QCA uses mathematical set theory, which is the branch of mathematical logic that studies the properties of sets, to examine set relationships between combinations of condition sets (cf., explanatory variables) present among cases and an outcome set (cf., dependent variable). QCA can be useful for identifying complex (i.e., non-linear, non-additive) causal patterns that variable-oriented methods may miss [ 9 , 11 , 12 ]. Applying QCA within the context of a systematic review may enhance review findings for policy-makers and practitioners by systematically evaluating sources of heterogeneity that influence the success (or failure) of an intervention using an approach that preserves each study’s unique combination of intervention components or other features. How to apply QCA within the context of a systematic review and the suitability of the method for this context is not definitively known because few actual applications exist [ 13 , 14 ]. Based on our experience conducting systematic reviews and our experience using QCA in primary research applications, we postulated that using QCA could offer additional insights within a systematic review of a complex intervention beyond traditional synthesis.

In this paper, we describe using QCA within a systematic review and examine its suitability for use within this context. We used data from an Agency for Healthcare Quality and Research (AHRQ)-sponsored review of interventions to improve medication adherence that was recently completed by members of our study team (M.V., C.G.) [ 15 , 16 ]. Medication adherence is a complex behavior with multiple determinants that vary among individuals [ 17 ]. Interventions to improve adherence often involve combinations of behavior change techniques (BCTs), such as interventions to improve self-efficacy or change attitudes. They often use different delivery modes (e.g., telephone vs. in-person) and agents (e.g., physicians, nurses, non-licensed staff) over various intervals of time and at different intensities. Further, interventions may be designed to influence patient adherence through interventions targeted at the practitioner or healthcare system level in addition to patient-directed components. We chose this review to use with QCA because the heterogeneity among interventions and outcomes seemed amenable to exploration through a configural lens and because we had access to all of the raw data and institutional knowledge associated with the review.

We turned to QCA because too much clinical heterogeneity had precluded a meta-analysis and meta-regression. Further, the completed review did not attempt mixed-treatment comparisons because of heterogeneity in the usual-care comparators [ 18 ]. However, all of the aforementioned approaches are correlational in nature, based on the assumption that one true distribution of effect exists and that trial-level covariates independently and additively contribute to variation from the true effect. QCA is not a substitute for these quantitative approaches to synthesis when they are appropriate, but these methods may rarely be appropriate for complex interventions because of the underlying assumptions upon which they are based. Thus, QCA offers a systematic approach to potentially unpacking intervention variability and relationship to an outcome when the phenomena under investigation can be characterized as complex.

We conducted two substantive analyses using QCA using data that was collected as part of a completed review. The first analysis sought to identify which combinations of patient-directed BCTs used across the body of evidence were necessary and/or sufficient for improving medication adherence, and findings from this analysis are presented in detail in a companion paper in this issue [ 19 ]. The second analysis sought to identify which combinations of implementation features (e.g., agent, mode) used across the body of evidence were necessary and/or sufficient for improving medication adherence. In the present paper, we discuss the methodologic approach applied to both analyses and highlight the added value and challenges we identified through its application in a systematic review.

Overview of QCA

Consistent with a case-oriented approach, QCA was originally developed for use with a small to medium number of cases ( N  = 10 to 50), allowing researchers to preserve the iterative nature of the data collection, analysis, and interpretation that stems from familiarity with the cases, a hallmark of qualitative research. More recently, QCA has been used for applications involving larger sample sizes [ 12 ]. Used within a systematic review context, each individual study within the review represents a case.

QCA preserves the holistic nature of each case throughout the analysis by not deconstructing the case into its component variables for analysis. Unlike variable-oriented methods that are based on probabilistic assumptions, QCA uses data from empiric cases to identify set relationships, which can be interpreted as relationships of “necessity” or “sufficiency” that often characterize causally complex phenomena. These relationships are depicted as a solution that uses Boolean operators, such as “AND,” “OR,” and “NOT,” to formulate verbal statements of the relationship between explanatory variables (i.e., conditions in QCA terminology) and an outcome. The solution generated by QCA is analogous to the expression of a correlational relationship among variables using a regression equation; though unlike probabilistic methods, solutions do not offer an estimate of precision, likelihood of finding results due to chance, nor can they be used for statistical hypothesis testing. A truth table is the analytic device used in QCA, and software is used to conduct most analyses [ 12 , 20 ]. A detailed methodological description of QCA, a hypothetical example of an analysis, and a glossary of terms related to QCA is provided as supplementary online material (Additional file 1 ).
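To make the mechanics of a truth table concrete, the following minimal Python sketch (using invented crisp-set data, not data from the review) groups hypothetical studies by their combination of condition memberships and computes each row's consistency with the outcome; it deliberately omits the Boolean minimization step that dedicated QCA software would perform.

```python
from collections import defaultdict

# Minimal illustration of a crisp-set truth table (hypothetical studies).
# Each case records membership (1/0) in two condition sets and the outcome set.
cases = {
    "Study A": {"reminders": 1, "education": 1, "improved_adherence": 1},
    "Study B": {"reminders": 1, "education": 1, "improved_adherence": 1},
    "Study C": {"reminders": 1, "education": 0, "improved_adherence": 0},
    "Study D": {"reminders": 0, "education": 1, "improved_adherence": 1},
    "Study E": {"reminders": 0, "education": 0, "improved_adherence": 0},
}
conditions = ["reminders", "education"]

# Group cases into truth-table rows by their combination of condition memberships.
rows = defaultdict(list)
for name, data in cases.items():
    rows[tuple(data[c] for c in conditions)].append(name)

# A row's consistency is the share of its cases that show the outcome;
# rows with consistency 1.0 are candidates for sufficiency (before minimization).
for combo, members in sorted(rows.items(), reverse=True):
    outcomes = [cases[m]["improved_adherence"] for m in members]
    consistency = sum(outcomes) / len(outcomes)
    label = " AND ".join(c if v else f"NOT {c}" for c, v in zip(conditions, combo))
    print(f"{label:35s} cases={len(members)} consistency={consistency:.2f}")
```

In an actual QCA, the consistent rows would then be logically minimized into a parsimonious Boolean solution rather than reported row by row.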

Application of QCA to the completed review

Members of our study team (M.V., C.G.) conducted the completed review using methods associated with the AHRQ Effective Health Care Program (available at http://www.ncbi.nlm.nih.gov/books/NBK47095/ ). The completed review was limited to US studies in adults with chronic conditions, excluding patients with HIV/AIDS, severe mental illness, and substance abuse because these conditions often require specialized interventions not applicable to general medical populations [ 15 , 16 ]. Of 4124 citations identified in the completed review, 758 full-text articles were screened for eligibility. Of the 67 low- or medium-risk of bias studies included, 62 were randomized clinical trials and five were observational studies. Included studies were conducted among patient populations with ten different clinical conditions. Seven studies included populations with more than one clinical condition. Study authors did not use consistent language or a standard taxonomy to describe intervention type; thus, the review team developed categories of intervention types. Examples included “education with behavioral support,” “health coaching,” “medication monitoring and reminders,” “shared decision-making or decision aids,” “case management,” and “collaborative care.” Because of heterogeneity of populations and intervention types, a quantitative synthesis was not possible. The primary organizing framework for the qualitative synthesis was clinical conditions (e.g., hypertension, diabetes). Within each of the ten clinical conditions, adherence outcomes were synthesized by intervention type. For example, a low strength of evidence grade for benefit was assigned for the use of case management interventions among patients with diabetes based on evidence from three RCTs. Overall, this approach resulted in 40 strata, each of which was assigned a strength of evidence grade based on the one to five studies falling within the stratum. The completed review’s analytic framework, key questions, and a summary of the results are provided as supplementary online material (Additional file 2 ). In brief, this review found the most consistent evidence for effectiveness across clinical conditions for interventions that included case management and educational interventions.

We developed an approach to using QCA within the context of a systematic review based on existing standards of good practice for conducting QCA and our experience using the method in non-systematic review applications [ 21 – 23 ]. This approach is depicted in Fig.  1 , and although the figure depicts this approach as sequential, in practice, iterative specification and analysis is typical and consistent with qualitative research approaches.

QCA approach used in this analysis. Adapted from Kane et al. [ 22 ]

We will use the elements of Fig.  1 to summarize our process of using QCA with systematic review data.

Specify configural research questions

As indicated in Fig.  1 , we first specified a configural research question, which is a question designed to identify the combinations of conditions that produce an outcome. For each substantive analysis, we specified a single question that combined two of the completed review’s key questions. These were key question 1: “Among patients with chronic diseases with self-administered medication prescribed by a provider, what is the comparative effectiveness of interventions aimed at patients, providers, systems, and combinations of audiences in improving medication adherence?” and key question 3: “How do medication-adherence intervention characteristics vary?” Further, we specified both of the configural research questions to reflect causal asymmetry. The re-specified research question for the first QCA was “What combinations of behavioral change techniques are present in studies demonstrating improved medication adherence?” and for the second QCA was “What combinations of implementation features, such as agent, target, mode, span, and exposure are present in studies demonstrating improved medication adherence?”

Identify studies for use in analysis

We defined studies included in the systematic review as the cases for each analysis. Based on how we operationalized the research questions, we excluded seven of the 67 studies from the completed review from both analyses as they were focused on policy or system level interventions and not relevant to the conditions (BCTs and implementation features) that we were interested in exploring. We found that the process used for study selection in a typical systematic review of interventions, which defines inclusion and exclusion criteria using the PICOTS framework (patient, intervention, comparator, outcome, timing, and setting), ensured that the cases included in the QCA were similar enough to be comparable, yet still offered enough diversity in intervention design to enable understanding heterogeneity of effect. Further, this approach provides an explicit and detailed rationale for the selection (or non-selection) of cases, which is a standard of good practice for conducting QCA [ 21 ].

Specify and calibrate condition sets and outcome set

Because one of our study aims was to assess the suitability of using QCA in a systematic review context, we used a completed review to determine whether data typically abstracted during a review would be acceptable to use with QCA. Thus, our initial approach was to rely on the review’s completed data abstraction files and published evidence tables. However, we adjusted our approach during the course of the analyses to verify and supplement previously abstracted data as we needed additional information not collected during the original review process.

Set calibration refers to the process of assigning a numeric value between 0 and 1 based on data collected from or about the case for each condition set and outcome set included in an analysis. These values are referred to as set membership values and represent the degree to which the case belongs to each of the sets in the analysis. Researchers typically define the rubric that determines what set membership value to assign based on existing theory or information external to the cases at hand. Qualitative and/or quantitative data collected from a case is evaluated against the calibration rubric to determine the specific set membership value that should be assigned to the case. In a crisp-set (cf, binary) calibration scheme, cases are either assigned values of “1” (fully in the set) or “0” (fully out of the set). For example, when trying to establish whether an adherence intervention belongs to the set of studies that are “theory-based,” one could examine whether the intervention designers described and cited specific behavioral theories that were used to develop the intervention; if so, the study would be assigned a 1, and if not, the study would be assigned a 0. Non-binary calibration schemes are also possible and are described in more detail in the online supplementary material (Additional file 1 ).

Studies in the completed review used a variety of medication adherence outcomes measured at various time points based on self-report, prescription fills, or medication event monitoring systems (“smart” medication bottles). Some studies used more than one measure of adherence. We reviewed abstracted data and original studies and determined that we would consider studies to be fully in the set of studies with improved adherence if at least one measure of adherence demonstrated a statistically significant improvement as compared to a usual-care comparison group. We chose this calibration rubric because of the lack of a common adherence measure across studies. We considered using a fuzzy-set calibration rubric, which allows for set membership values between 0 and 1; but, the panoply of adherence measures used both within and across studies and the lack of external standards for defining differences in degree of adherence (e.g., “very much improved adherence” from “slightly improved adherence” from “slightly not improved adherence”) proved too challenging.
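As an illustration only, the calibration rubric described above can be expressed as a simple rule over abstracted study records; the study records and field names below are hypothetical and are not drawn from the review's data.

```python
# Hypothetical sketch of the crisp-set calibration rubric for the outcome set:
# a study is "fully in" (1) if at least one adherence measure showed a
# statistically significant improvement versus usual care, otherwise 0.
studies = {
    "Study A": {"measures": [{"name": "self-report", "significant": True},
                             {"name": "pharmacy fills", "significant": False}]},
    "Study B": {"measures": [{"name": "MEMS cap", "significant": False}]},
    "Study C": {"measures": [{"name": "pharmacy fills", "significant": True}]},
}

def calibrate_outcome(study):
    """Return the crisp-set membership value for 'improved adherence'."""
    return 1 if any(m["significant"] for m in study["measures"]) else 0

outcome_set = {name: calibrate_outcome(s) for name, s in studies.items()}
print(outcome_set)  # e.g., {'Study A': 1, 'Study B': 0, 'Study C': 1}
```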

Condition sets used in each analysis are summarized in Table  1 . The abstracted data and evidence tables that described the BCTs and implementation features used in studies generally provided inadequate information to enable us to calibrate condition sets; thus, we went back to original study publications to obtain more detail and to clarify ambiguous data abstraction entries for nearly all studies.

The BCTs abstracted during the completed review were determined and defined a priori by the review team and derived from a previous meta-analysis of medication adherence interventions and a published taxonomy of BCTs [ 24 , 25 ]. One study reviewer captured a study’s use of each BCT as “yes” or “no” or “unclear” based on information available in the published intervention description, and this was confirmed by a second reviewer. Thus, studies could be identified as using multiple BCTs. To studies that used a BCT, we assigned a set membership value of 1 for that BCT, and we assigned studies that did not use a BCT, or for which use of the BCT was unclear, a set membership value of 0. We also conducted sensitivity analyses with an alternate rubric that calibrated “unclear” as BCT use.

A challenge we encountered in the first analysis was the large number (12) of BCTs identified during abstraction in the completed review. With this many conditions, we were concerned about the limited diversity that would result from including too many condition sets for the fixed number of studies (60). We winnowed the number of included condition sets to nine by eliminating three BCTs that were used by fewer than three studies. We attempted to further reduce the number of BCTs included in the analysis by combining two BCTs into a macrocondition, a typical strategy in QCA for reducing the number of included condition sets. However, we found the BCTs too conceptually distinct to combine into a single macrocondition. Thus, we could not implement the QCA standard of good practice of keeping the ratio of condition sets to cases at a reasonable level [21].

For the second analysis, which evaluated implementation features, we specified condition sets based on implementation features that the completed review authors determined a priori and captured during study abstraction. These features, listed in Table 1, included intervention agent, target, span of the intervention over time, mode of delivery, and intervention exposure. Information about these characteristics was captured by the review team using unstructured abstraction fields. For three of the condition sets (target, agent, and mode), the review team collapsed abstracted data into multivalue, mutually exclusive categories for descriptive reporting of intervention characteristics.

We evaluated whether the multivalue categorical groupings for target, agent, and mode could be further collapsed into dichotomous categories for a crisp-set calibration rubric. For target, the review team used information from the published description to assign each study to one of three categories: patient only; patient and provider; or patient, provider, and system. For our analysis, we decided that the inclusion of a provider or system target, in addition to targeting the patient, was a key distinction because provider- and system-directed interventions require additional training, infrastructure, and expense. Thus, we considered a study “fully in” the target condition set if the intervention targeted a provider or system in addition to the patient. Studies targeting only patients were considered “fully out” of the set. Similarly, for mode, we first evaluated the completed review’s categorical groupings before deciding that a key design feature relevant to policy-makers and practitioners would be whether the intervention was delivered in person versus by some other mode (e.g., telephone, virtual, automated), because of secular trends in virtual care, convenience to patients, and perhaps lower costs. We developed two alternatives to accommodate interventions with mixed modes, in which some of the intervention was delivered in person and some by phone or virtually. For calibration of the agent condition set, we considered studies that used licensed health care professionals (e.g., nurse, physician, pharmacist) as fully in, and studies that used agents described as research assistants, health coaches, or other non-licensed staff as fully out.

The calibration of the final two condition sets in the second analysis, time span of intervention and intensity of exposure , exemplified the iterative back and forth between theory and empirical information from the cases at hand that is a QCA standard of good practice [ 21 ]. Study abstractors captured raw data about these two condition sets in an unstructured format during the review. We first transformed the raw data into standardized numeric values such that time span was represented in “weeks” from beginning to end of the intervention and the total time spent exposed to the intervention was represented in “minutes.” Because exposure information in some studies lacked detail, we made assumptions regarding average length of a clinic visit, telephone contact, or time spent exposed to an automated intervention when it was not specifically provided. For simplicity in interpretation, we chose to calibrate span and exposure with crisp sets. We contemplated various thresholds guided by the following considerations:

Select the calibration threshold with some knowledge of the range of values represented within our studies, to avoid setting it so high or so low that most studies would fall entirely in or out of the set.

Incorporate our substantive experience with behavioral interventions regarding what would be considered a threshold for a longer span or a higher exposure, but convey the condition sets using their numeric threshold value rather than terms such as low or high to mitigate concerns over the inherent arbitrariness of wherever we placed the threshold (e.g., span >12 weeks is “in,” rather than “long span” is “in”).

Test alternative thresholds in sensitivity analyses to assess the robustness of our findings with respect to the placement of the calibration threshold.

Ultimately, our main analysis used a calibration threshold of greater than or equal to 12 weeks as fully in the span condition set and a threshold of greater than or equal to 120 min as fully in the exposure condition set. In sensitivity analyses, we evaluated a span threshold of 6 weeks and two exposure thresholds, 60 and 240 min. We identified some differences in findings, and all supplemental analyses were made available as appendices to the main substantive analysis to support transparency and demonstrate the sensitivity of findings to changes in calibration thresholds.
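
The threshold-based calibration and its sensitivity variants could be expressed as in the following sketch; the study values are invented for illustration.

```python
# Hypothetical sketch of the threshold-based crisp-set calibration of the
# span (weeks) and exposure (minutes) condition sets, including the
# alternative thresholds used in sensitivity analyses.

def calibrate_threshold(value: float, threshold: float) -> int:
    """Fully in (1) if value >= threshold, else fully out (0)."""
    return int(value >= threshold)

# Made-up standardized values for three studies
studies = {"S01": {"span_weeks": 26, "exposure_min": 90},
           "S02": {"span_weeks": 8,  "exposure_min": 240},
           "S03": {"span_weeks": 12, "exposure_min": 120}}

# Main analysis: span >= 12 weeks; exposure >= 120 minutes
main = {sid: {"span": calibrate_threshold(v["span_weeks"], 12),
              "exposure": calibrate_threshold(v["exposure_min"], 120)}
        for sid, v in studies.items()}

# Sensitivity analyses: span >= 6 weeks; exposure >= 60 and >= 240 minutes
sensitivity = {sid: {"span_6wk": calibrate_threshold(v["span_weeks"], 6),
                     "exposure_60": calibrate_threshold(v["exposure_min"], 60),
                     "exposure_240": calibrate_threshold(v["exposure_min"], 240)}
               for sid, v in studies.items()}
print(main)
print(sensitivity)
```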

Construct and analyze the truth table

For each analysis, we transformed the raw data matrix of set membership values into a truth table, which places studies with the exact same configuration of set membership values for the condition sets into the same truth table row. The number of logically possible truth table rows in an analysis is equal to 2^k, where k is the number of included condition sets; thus, the truth table for the first analysis contained 512 (i.e., 2^9) rows and the table for the second analysis contained 32 (i.e., 2^5) rows. In both analyses, some of the truth table’s logically possible configurations were not present in any studies; these rows are “empty” of any empiric cases and are called logical remainders. The truth table is the analytic device in QCA for determining which configurations of condition sets consistently demonstrate the outcome. If all studies within a truth table row demonstrate improved adherence, then that row is coded as fully in (1) with a consistency of 100%. Rarely do real-world phenomena exhibit perfect consistency. In QCA, rows with a consistency of less than 100% (also referred to as contradictory rows) can still be coded as 1 and included in sufficiency analyses if row consistency is above a prespecified level. Different consistency thresholds can be used based on the nature of the research question, data quality, and number of cases, but typical thresholds are between 75% and 90% [21].
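
The following sketch illustrates, with invented data, how crisp-set cases can be grouped into truth table rows and how row consistency can be computed and compared against a threshold; it is a simplified illustration, not the fsQCA implementation.

```python
# Hypothetical sketch of constructing a truth table from crisp-set data and
# coding rows against a consistency threshold (e.g., 0.80). Condition names
# and data are illustrative only.

from collections import defaultdict

# Each record: (configuration of condition-set memberships, outcome membership)
cases = [
    ({"knowledge": 1, "self_efficacy": 1, "feedback": 0}, 1),
    ({"knowledge": 1, "self_efficacy": 1, "feedback": 0}, 1),
    ({"knowledge": 1, "self_efficacy": 0, "feedback": 1}, 0),
    ({"knowledge": 1, "self_efficacy": 0, "feedback": 1}, 1),
    ({"knowledge": 0, "self_efficacy": 0, "feedback": 0}, 0),
]

CONSISTENCY_THRESHOLD = 0.80

# Studies with identical configurations share a truth table row
rows = defaultdict(list)
for conditions, outcome in cases:
    rows[tuple(sorted(conditions.items()))].append(outcome)

truth_table = []
for config, outcomes in rows.items():
    consistency = sum(outcomes) / len(outcomes)  # share of row cases with the outcome
    truth_table.append({
        "configuration": dict(config),
        "n_cases": len(outcomes),
        "consistency": consistency,
        "coded_outcome": int(consistency >= CONSISTENCY_THRESHOLD),
    })

for row in truth_table:
    print(row)
# With k conditions, 2**k rows are logically possible; configurations with no
# empirical cases (logical remainders) simply never appear in `rows`.
```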

Using the truth table created for each analysis, we identified set relationships between the outcome set and both individual condition sets and configurations of condition sets. As described in the online supplementary material (Additional file 1), superset relationships between condition sets and an outcome set can be interpreted as indicating necessary conditions. Similarly, subset relationships between condition sets and an outcome set can be interpreted as indicating sufficient conditions. We used Stata Version 13 (StataCorp, College Station, TX) to create 2 × 2 contingency tables using set membership values for each condition set and the outcome set. Data from these tables are interpreted through a set-theoretic lens, meaning that the proportions produced by the table are interpreted as the consistency of each condition as a necessary condition for the outcome (the percentage of cases in the outcome set that are also in the condition set) or as a sufficient condition for the outcome (the percentage of cases in the condition set that are also in the outcome set). In the first analysis, we identified one BCT (techniques that increase knowledge) as individually necessary and one BCT (techniques that increase self-efficacy) as individually sufficient; in the second analysis, we did not identify any individually necessary or sufficient conditions.
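
The set-theoretic reading of these 2 × 2 tables can be summarized in a short sketch; the membership values below are illustrative only.

```python
# Hypothetical sketch of assessing individually necessary and sufficient
# conditions from crisp-set membership values, mirroring the 2x2 tables
# described above. Data are illustrative.

condition = {"S01": 1, "S02": 1, "S03": 0, "S04": 1}   # e.g., a single BCT
outcome   = {"S01": 1, "S02": 0, "S03": 0, "S04": 1}   # improved adherence

def necessity_consistency(condition, outcome):
    """Share of cases in the outcome set that are also in the condition set."""
    in_outcome = [s for s, y in outcome.items() if y == 1]
    return sum(condition[s] for s in in_outcome) / len(in_outcome)

def sufficiency_consistency(condition, outcome):
    """Share of cases in the condition set that are also in the outcome set."""
    in_condition = [s for s, x in condition.items() if x == 1]
    return sum(outcome[s] for s in in_condition) / len(in_condition)

print(necessity_consistency(condition, outcome))    # 1.0  -> candidate necessary condition
print(sufficiency_consistency(condition, outcome))  # ~0.67 -> below typical sufficiency thresholds
```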

Though an assessment of individually necessary or sufficient conditions is the initial analytic step, it is the evaluation of configurations of condition sets that allows QCA to offer powerful insights into complex causal patterns. For a configuration of condition sets to be necessary, it would need to be consistently present among all studies with the outcome of “improved medication adherence.” We did not identify two or more individually necessary condition sets in either analysis, and because formal logic prescribes that no configuration can be considered necessary unless each of its component condition sets is necessary, we quickly discerned that we did not need to assess necessary configurations.

We used fsQCA version 2.5 to conduct sufficiency analyses for configurations [26]. In crisp-set QCA, the configuration of set membership values in each row of the truth table where the outcome set is 1 represents an expression of sufficiency. In other words, if the outcome is consistently present among cases within the row, then that unique combination of condition sets (i.e., presence or absence of conditions in a crisp-set scheme) is a sufficient pathway to the outcome. If multiple truth table rows consistently demonstrate the outcome, then multiple sufficient pathways are present (i.e., an equifinal solution). The most complex expressions of sufficiency can be taken directly from truth table rows; however, these statements are often unwieldy in the number of conditions and operator terms (ANDs, ORs, NOTs), which makes them difficult to interpret. These expressions can be logically minimized to simpler expressions with fewer terms and operators that are still logically consistent with the more complex expression but easier to interpret.

The fsQCA software uses the Quine-McCluskey algorithm to perform this minimization. The basis of the procedure is that if two truth table rows with the same outcome differ in the set membership value of only one condition set, then that condition set is irrelevant for producing the outcome in those rows and can be eliminated; the two rows are merged into a simpler expression of sufficiency. This step is repeated, comparing and reducing truth table rows until no further simplification is possible. In practice, three variants of the minimization procedure are used to produce three variants of the solution: the conservative, the intermediate, and the parsimonious solution. These three solutions are logically consistent with one another but represent different degrees of parsimony and differ with respect to whether logical remainders are used as part of the minimization procedure.
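
The core reduction rule can be illustrated with a small sketch; this shows only the pairwise merging step, not the full Quine-McCluskey or fsQCA procedure, and the condition names are hypothetical.

```python
# Simplified illustration of the reduction rule behind Quine-McCluskey
# minimization: two sufficient rows that differ in exactly one condition can
# be merged, eliminating that condition ("-" marks the eliminated condition).
# This is only the core pairwise step, not the full fsQCA procedure.

def try_merge(row_a, row_b):
    """Merge two rows (tuples of 0/1/'-') that differ in exactly one position."""
    diffs = [i for i, (a, b) in enumerate(zip(row_a, row_b)) if a != b]
    if len(diffs) != 1:
        return None
    merged = list(row_a)
    merged[diffs[0]] = "-"
    return tuple(merged)

# Conditions: (knowledge, self_efficacy, feedback); both rows sufficient for the outcome
row1 = (1, 1, 0)
row2 = (1, 1, 1)
print(try_merge(row1, row2))             # (1, 1, '-'): feedback is irrelevant in this pair
print(try_merge((1, 0, 0), (0, 1, 1)))   # None: rows differ in more than one condition
```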

Ultimately, we identified seven sufficient configurations in the intermediate solution for the first analysis and four sufficient configurations for the second analysis. A summary of these results is provided in Tables 2 and 3. We computed parameters of fit to describe how much the set relationships we identified deviate from a perfect set relationship (i.e., consistency) and how well the identified solutions explain the outcome across all of the empiric cases included (i.e., coverage). See the online supplementary material (Additional file 1) for additional information regarding parameters of fit.
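
For crisp sets, these two parameters of fit reduce to simple proportions, as the following sketch with invented membership values illustrates.

```python
# Hypothetical sketch of the two crisp-set parameters of fit referred to above:
# consistency (how closely a configuration approximates a perfect subset of the
# outcome set) and coverage (how much of the outcome set the configuration explains).

def parameters_of_fit(configuration_membership, outcome_membership):
    """Both arguments: dict of case id -> 0/1 membership values."""
    overlap = sum(1 for s in configuration_membership
                  if configuration_membership[s] == 1 and outcome_membership[s] == 1)
    in_config = sum(configuration_membership.values())
    in_outcome = sum(outcome_membership.values())
    consistency = overlap / in_config if in_config else float("nan")
    coverage = overlap / in_outcome if in_outcome else float("nan")
    return consistency, coverage

# Illustrative data: membership in one sufficient configuration vs. the outcome set
config  = {"S01": 1, "S02": 1, "S03": 0, "S04": 0, "S05": 1}
outcome = {"S01": 1, "S02": 1, "S03": 1, "S04": 0, "S05": 0}
print(parameters_of_fit(config, outcome))  # approximately (0.67, 0.67)
```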

Make sense of the results

We examined the studies covered by configurations in the identified solutions to narratively describe how these solutions were represented within a study and across studies for each analysis. The process of relating solution findings back to the studies was instructive for identifying needed adjustments in condition set calibration. This process also helped us think beyond numeric coverage levels when considering the relevance of the various configurations to the outcome we identified. For example, in the first analysis, we found configurations that included the absence of various BCTs to be less interpretable than configurations mostly characterized by the presence of BCTs, since interventions are not typically designed to explicitly exclude a BCT. Similarly, the process of re-reviewing the studies in light of the solutions they exemplified allowed us to reconsider the relevance of the knowledge BCT condition set, which we had identified as individually necessary. This condition was present in 57 of the 60 studies we used for the QCA and was generally exhibited within studies as providing patients with information about their disease, the medication used to treat it, and the benefits and side effects of treatment. Thus, membership in the knowledge BCT set was heavily skewed, and knowledge would likely be a necessary condition of whatever outcome set we defined, a concept described by QCA experts as a “trivial” necessary condition [12]. Lastly, in keeping with standards of good QCA practice, we repeated all analyses for the set of studies (N = 26) not demonstrating improved adherence [19].

We used QCA within a systematic review to identify combinations of BCTs and combinations of implementation features found among effective medication adherence interventions. The 40 strength-of-evidence grades in the completed review provided readers with a synthesis of the magnitude and direction of effect for 40 small groups of studies, each group characterized by the same clinical condition and type of intervention [16]. The QCA results complement the completed review findings by synthesizing across the boundaries of clinical condition and intervention typology to identify combinations of BCTs and implementation features present among the entire set of effective interventions. The QCA findings are not a replacement for the findings in the completed review; rather, they provide additional insights based on configurational questions. Configurational questions are often not formulated as review key questions, or the evidence is deemed insufficient to answer them for a variety of reasons (for example, a lack of trials with direct comparisons of different intervention features). Yet “what is the recipe for effectiveness?” is often exactly what practitioners and policy-makers want to know when complex interventions and their outcomes are heterogeneous.

We judged QCA to be suitable for use within systematic reviews based on its similarity to processes that are already part of a typical evidence synthesis. In Table 4, we provide our assessment of the alignment between systematic review and QCA steps, specifically the identification of studies/cases to include, data collection, study/case assessment, analysis, and presentation of findings. Our retrospective application of the method was inefficient, requiring re-review of the original studies at various steps in the process. However, a retrospective approach was invaluable for identifying challenges and steps that might be required beyond a typical review process in order to apply QCA. Although we identified alignment at a number of steps, how best to present findings within the review deserves further prospective evaluation.

The alignment between systematic review processes and QCA at the study/case assessment step deserves highlighting because of the importance of this step for fidelity to standards of good QCA practice [21]. The distinction between the abstraction task of transcribing information from studies into evidence tables and the task of making judgments about the use of various BCTs or implementation features based on information in the studies was not well defined during the original review. Calibration of sets for QCA requires a clear rubric for making set membership value assignments and a mechanism for recording the rationale for each assignment, similar to the approach used for risk-of-bias assessments. Making set membership value assignments in tandem with data abstraction may be efficient; however, calibration rubrics cannot always be determined a priori, and the familiarity with studies gained through abstraction may be helpful for finalizing the rubric. Even the most robust calibration processes may not overcome the paucity of information about intervention components and implementation features available in published study reports. We believe this may be the biggest challenge to applying QCA; we encountered this issue in both of our substantive analyses. Ultimately, enough information about a study needs to be available to support the set membership value assignment, though sensitivity analyses could mitigate the impact of missing information.

We identified several other applications of QCA within systematic reviews. To date, these applications have been published and presented in separate manuscripts rather than as part of the main evidence report. Using data from a subset of studies in a review of community engagement interventions for public health and health promotion, Thomas, Brunton, and colleagues applied QCA to identify which combinations of community engagement methods directed toward expectant or new mothers were effective for promoting breastfeeding [13, 27]. Although this study had limited diversity and low solution coverage, the investigators were able to derive additional meaning from the analysis beyond the initial qualitative synthesis. We agree with these authors’ assertions about the challenge of finding the right balance between parsimony and complexity when defining condition sets. Candy et al. used QCA with a completed Cochrane systematic review to explore the relationship between the components that patients identify as important in interventions to improve medication adherence for chronic clinical conditions and the components actually represented within effective interventions [14]. The authors discuss the challenge of selecting and processing data that are far removed from their primary source by the time they appear in a systematic review, a challenge we also acknowledge and had not previously encountered in our use of QCA within primary research studies. We concur with both sets of authors that the lack of intervention detail reported in primary studies limits the robust application of QCA within a systematic review context.

Our experience is limited to conducting two analyses within the same completed systematic review. Whether QCA is feasible and adds value within reviews that include smaller or larger numbers of studies, reviews that include many different outcomes, or reviews of studies in which interventions are complex but do not have easily discernible components is uncertain. The extent to which this method could be applied to other systematic reviews of complex interventions is determined by a number of factors, some based on requirements of the method itself. For example, variability in the outcome is essential to this method; we selected the medication adherence review for applying QCA in part because studies in the review included interventions with demonstrated effectiveness and interventions for which effectiveness was not demonstrated. Lastly, our study did not evaluate how to present and integrate results from QCA within a traditional qualitative or quantitative review in a way that minimizes the need for an in-depth understanding of the method yet provides enough transparency for readers to judge the validity and reliability of the findings.

We offer several recommendations for the use of this method in systematic reviews. First, ensure that some of the review research questions are configurational and based on an a priori understanding of the phenomenon under evaluation. Second, note that reviews with fewer than ten studies may not be good candidates for QCA, because no more than two to three condition sets can be accommodated without creating substantial limited diversity, and patterns among so few condition sets may just as easily be identified by “eye-balling.” Finally, we recommend designing an initial calibration rubric prior to study abstraction for efficiency, but teams should plan to re-specify the rubric and re-review studies if needed before making final calibration decisions.

In conclusion, QCA offers systematic reviewers an additional tool for evidence synthesis in reviews of complex interventions. Further prospective use of the method during a review is needed to identify additional areas for process alignment and method refinement and to determine how best to integrate and present results from a QCA within a typical evidence synthesis report.

Abbreviations

AHRQ: Agency for Healthcare Research and Quality

BCT: behavioral change technique

HIV/AIDS: human immunodeficiency virus/acquired immunodeficiency syndrome

PICOTS: patient, intervention, comparator, outcome, timing, and setting

QCA: qualitative comparative analysis

RCT: randomized controlled trial

UK: United Kingdom

References

1. Burton C. Heavy tailed distributions of effect sizes in systematic reviews of complex interventions. PLoS ONE. 2012;7(3):e34222. doi: 10.1371/journal.pone.0034222.

2. Campbell M, Fitzpatrick R, Haines A, Kinmonth AL, Sandercock P, Spiegelhalter D, et al. Framework for design and evaluation of complex interventions to improve health. BMJ. 2000;321(7262):694–6.

3. Pawson R, Greenhalgh T, Harvey G, Walshe K. Realist review—a new method of systematic review designed for complex policy interventions. J Health Serv Res Policy. 2005;10 Suppl 1:21–34. doi: 10.1258/1355819054308530.

4. Rehfuess E, Akl E. Current experience with applying the GRADE approach to public health interventions: an empirical study. BMC Public Health. 2013;13(1):9.

5. Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008;337:a1655. doi: 10.1136/bmj.a1655.

6. Shiell A, Hawe P, Gold L. Complex interventions or complex systems? Implications for health economic evaluation. BMJ. 2008;336(7656):1281–3. doi: 10.1136/bmj.39569.510521.AD.

7. Guise JM, Chang C, Viswanathan M, Glick S, Treadwell J, Umscheid CA, et al. Agency for Healthcare Research and Quality Evidence-based Practice Center methods for systematically reviewing complex multicomponent health care interventions. J Clin Epidemiol. 2014;67(11):1181–91. doi: 10.1016/j.jclinepi.2014.06.010.

8. Alexander JA, Hearld LR. Methods and metrics challenges of delivery-system research. Implement Sci. 2012;7:15. doi: 10.1186/1748-5908-7-15.

9. Ragin CC. The comparative method: moving beyond qualitative and quantitative strategies. Berkeley: University of California Press; 1987.

10. Dixon-Woods M, Agarwal S, Jones D, Young B, Sutton A. Synthesising qualitative and quantitative evidence: a review of possible methods. J Health Serv Res Policy. 2005;10(1):45–53.

11. Ragin CC. Redesigning social inquiry: fuzzy sets and beyond. Chicago: University of Chicago Press; 2008.

12. Schneider CQ, Wagemann C. Set-theoretic methods for the social sciences: a guide to qualitative comparative analysis. Strategies for social inquiry. Cambridge: Cambridge University Press; 2012.

13. Brunton G, O’Mara-Eves A, Thomas J. The ‘active ingredients’ for successful community engagement with disadvantaged expectant and new mothers: a qualitative comparative analysis. J Adv Nurs. 2014;70(12):2847–60. doi: 10.1111/jan.12441.

14. Candy B, King M, Jones L, Oliver S. Using qualitative evidence on patients’ views to help understand variation in effectiveness of complex interventions: a qualitative comparative analysis. Trials. 2013;14:179. doi: 10.1186/1745-6215-14-179.

15. Viswanathan M, Golin CE, Jones CD, Ashok M, Blalock S, Wines RCM, Coker-Schwimmer EJL, Grodensky CA, Rosen DL, Yuen A, Sista P, Lohr KN. Medication adherence interventions: comparative effectiveness. Closing the quality gap: revisiting the state of the science. Evidence Report No. 208. (Prepared by RTI International–University of North Carolina Evidence-based Practice Center under Contract No. 290-2007-10056-I.) AHRQ Publication No. 12-E010-EF. Rockville, MD: Agency for Healthcare Research and Quality; 2012. www.effectivehealthcare.ahrq.gov/reports/final.cfm.

16. Viswanathan M, Golin CE, Jones CD, Ashok M, Blalock SJ, Wines RCM, et al. Interventions to improve adherence to self-administered medications for chronic diseases in the United States: a systematic review. Ann Intern Med. 2012;157(11):785–95. doi: 10.7326/0003-4819-157-11-201212040-00538.

17. Osterberg L, Blaschke T. Adherence to medication. N Engl J Med. 2005;353(5):487–97. doi: 10.1056/NEJMra050100.

18. Li T, Puhan MA, Vedula SS, Singh S, Dickersin K, Ad Hoc Network Meta-analysis Methods Meeting Working Group. Network meta-analysis-highly attractive but more methodological research is needed. BMC Med. 2011;9:79. doi: 10.1186/1741-7015-9-79.

19. Kahwati L, Viswanathan M, Golin CE, Kane H, Lewis M, Jacobs S. Identifying configurations of behavior change techniques in effective medication adherence interventions: a qualitative comparative analysis. Syst Rev. 2016. In press.

20. Thiem A, Duşa A. Boolean minimization in social science research: a review of current software for qualitative comparative analysis (QCA). Soc Sci Comput Rev. 2013;31(4):505–21. doi: 10.1177/0894439313478999.

21. Schneider CQ, Wagemann C. Standards of good practice in qualitative comparative analysis (QCA). Comp Sociol. 2010;9(3):397–418.

22. Kane H, Lewis MA, Williams PA, Kahwati LC. Using qualitative comparative analysis to understand and quantify translation and implementation. Transl Behav Med. 2014;4(2):201–8. doi: 10.1007/s13142-014-0251-6.

23. Kahwati LC, Lewis MA, Kane H, Williams PA, Nerz P, Jones KR, et al. Best practices in the Veterans Health Administration’s MOVE! Weight management program. Am J Prev Med. 2011;41(5):457–64. doi: 10.1016/j.amepre.2011.06.047.

24. de Bruin M, Viechtbauer W, Schaalma HP, Kok G, Abraham C, Hospers HJ. Standard care impact on effects of highly active antiretroviral therapy adherence interventions: a meta-analysis of randomized controlled trials. Arch Intern Med. 2010;170(3):240–50. doi: 10.1001/archinternmed.2009.536.

25. Abraham C, Michie S. A taxonomy of behavior change techniques used in interventions. Health Psychol. 2008;27(3):379–87. doi: 10.1037/0278-6133.27.3.379.

26. Ragin CC, Drass KA, Davey S. Fuzzy-set/qualitative comparative analysis, version 2.5. Tucson: Department of Sociology, University of Arizona; 2006.

27. Thomas J, O’Mara-Eves A, Brunton G. Using qualitative comparative analysis (QCA) in systematic reviews of complex interventions: a worked example. Syst Rev. 2014;3:67. doi: 10.1186/2046-4053-3-67.


Acknowledgements

This work was supported by grant 1R03HS022563-01 from the Agency for Healthcare Research and Quality (AHRQ) to Dr. Kahwati. AHRQ had no role in the study design, data collection, analysis, or interpretation of findings. Part of Dr. Golin’s time was supported by the University of North Carolina at Chapel Hill Center for AIDS Research (CFAR), an NIH funded program P30 AI50410.

Author information

Authors and Affiliations

RTI International, 3040 E. Cornwallis Rd., Research Triangle Park, NC, 27709, USA

Leila Kahwati, Sara Jacobs, Heather Kane, Megan Lewis & Meera Viswanathan

Departments of Medicine and Health Behavior, University of North Carolina, Chapel Hill, NC, USA

Carol E. Golin


Corresponding author

Correspondence to Leila Kahwati .

Additional information

Competing interests.

The authors declare that they have no competing interests.

Authors’ contributions

ML, MV, and LK conceived of the study, and LK secured the funding. LK, ML, MV, HK, and CG designed both analyses, and SJ had substantial intellectual contribution to the revisions for analysis 2. MV and CG contributed to the data collection in the original, completed review. LK and SJ performed the configural analyses. LK drafted the initial manuscript, and all authors critically reviewed the manuscript and approved the final version.

Additional files

Additional file 1:

Detailed description, example, and glossary. This file provides a detailed description of the historical context of QCA, methodologic foundations, and walks through a hypothetical example analysis to illustrate key features of the methods. A glossary of terms is also included. (PDF 521 kb)

Additional file 2:

Completed review analytic framework, key questions, and summary of findings. This file provides the analytic framework, key questions, and a summary of findings from the completed AHRQ systematic review of interventions to improve medication adherence that was used to examine the suitability of using QCA in a review of a complex intervention. (PDF 156 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article.

Kahwati, L., Jacobs, S., Kane, H. et al. Using qualitative comparative analysis in a systematic review of a complex intervention. Syst Rev 5 , 82 (2016). https://doi.org/10.1186/s13643-016-0256-y


Received : 06 August 2015

Accepted : 25 April 2016

Published : 04 May 2016

DOI : https://doi.org/10.1186/s13643-016-0256-y


  • Qualitative comparative analysis
  • Configurational analyses
  • Systematic review methods



  • Research Questions: Definitions, Types + [Examples]

busayo.longe

Research questions lie at the core of systematic investigation because recording accurate research outcomes depends on asking the right questions. Asking the right questions when conducting research helps you collect relevant and insightful information that positively influences your work.

The right research questions are typically easy to understand, straight to the point, and engaging. In this article, we will share tips on how to create the right research questions and also show you how to create and administer an online questionnaire with Formplus . 

What is a Research Question? 

A research question is a specific inquiry that the research seeks to answer. It resides at the core of systematic investigation and helps you clearly define a path for the research process.

A research question is usually the first step in any research project. Basically, it is the primary interrogation point of your research and it sets the pace for your work.  

Typically, a research question focuses the research, determines the methodology and hypothesis, and guides all stages of inquiry, analysis, and reporting. With the right research questions, you will be able to gather useful information for your investigation.

Types of Research Questions 

Research questions are broadly categorized into two types: qualitative research questions and quantitative research questions. Qualitative and quantitative research questions can be used independently or together, in line with the overall focus and objectives of your research.

If your research aims at collecting quantifiable data, you will need to make use of quantitative research questions. On the other hand, qualitative questions help you gather qualitative data concerning the perceptions and observations of your research subjects.

Qualitative Research Questions  

A qualitative research question is a type of systematic inquiry that aims at collecting qualitative data from research subjects. The aim of qualitative research questions is to gather non-statistical information pertaining to the experiences, observations, and perceptions of the research subjects in line with the objectives of the investigation. 

Types of Qualitative Research Questions  

  • Ethnographic Research Questions

As the name clearly suggests, ethnographic research questions are inquiries presented in ethnographic research. Ethnographic research is a qualitative research approach that involves observing variables in their natural environments or habitats in order to arrive at objective research outcomes. 

These research questions help the researcher to gather insights into the habits, dispositions, perceptions, and behaviors of research subjects as they interact in specific environments. 

Ethnographic research questions can be used in education, business, medicine, and other fields of study, and they are very useful in contexts aimed at collecting in-depth and specific information that is peculiar to the research variables. For instance, asking educational ethnographic research questions can help you understand how pedagogy affects classroom relations and behaviors.

This type of research question can be administered in person through one-on-one interviews, naturalistic observation (living and working alongside the research subjects), and participant observation methods. Alternatively, the researcher can ask ethnographic research questions via online surveys and questionnaires created with Formplus.

Examples of Ethnographic Research Questions

  • Why do you use this product?
  • Have you noticed any side effects since you started using this drug?
  • Does this product meet your needs?


  • Case Studies

A case study is a qualitative research approach that involves carrying out a detailed investigation into a research subject(s) or variable(s). In the course of a case study, the researcher gathers a range of data from multiple sources of information via different data collection methods, and over a period of time. 

The aim of a case study is to analyze specific issues within definite contexts and arrive at detailed research subject analyses by asking the right questions. This research method can be explanatory, descriptive, or exploratory depending on the focus of your systematic investigation or research.

An explanatory case study is one that seeks to gather information on the causes of real-life occurrences. This type of case study uses “how” and “why” questions in order to gather valid information about the causative factors of an event. 

Descriptive case studies are typically used in business research, and they aim at analyzing the impact of changing market dynamics on businesses. On the other hand, exploratory case studies aim at providing answers to “who” and “what” questions using data collection tools like interviews and questionnaires.

Some questions you can include in your case studies are: 

  • Why did you choose our services?
  • How has this policy affected your business output?
  • What benefits have you recorded since you started using our product?


  • Interviews

An interview is a qualitative research method that involves asking respondents a series of questions in order to gather information about a research subject. Interview questions can be close-ended or open-ended, and they prompt participants to provide valid information that is useful to the research.

An interview may also be structured, semi-structured, or unstructured, and this further influences the types of questions they include. Structured interviews are made up of more close-ended questions because they aim at gathering quantitative data, while unstructured interviews consist primarily of open-ended questions that allow the researcher to collect qualitative information from respondents.

You can conduct interview research by scheduling a physical meeting with respondents, through a telephone conversation, and via digital media and video conferencing platforms like Skype and Zoom. Alternatively, you can use Formplus surveys and questionnaires for your interview. 

Examples of interview questions include: 

  • What challenges did you face while using our product?
  • What specific needs did our product meet?
  • What would you like us to improve about our service delivery?


Quantitative Research Questions

Quantitative research questions are questions that are used to gather quantifiable data from research subjects. These types of research questions are usually more specific and direct because they aim at collecting information that can be measured; that is, statistical information. 

Types of Quantitative Research Questions

  • Descriptive Research Questions

Descriptive research questions are inquiries that researchers use to gather quantifiable data about the attributes and characteristics of research subjects. These types of questions primarily seek responses that reveal existing patterns in the nature of the research subjects. 

It is important to note that descriptive research questions are not concerned with the causative factors of the discovered attributes and characteristics. Rather, they focus on the “what”; that is, describing the subject of the research without paying attention to the reasons for its occurrence. 

Descriptive research questions are typically closed-ended because they aim at gathering definite and specific responses from research participants. Also, they can be used in customer experience surveys and market research to collect information about target markets and consumer behaviors. 

Descriptive Research Question Examples

  • How often do you make use of our fitness application?
  • How much would you be willing to pay for this product?


  • Comparative Research Questions

A comparative research question is a type of quantitative research question that is used to gather information about the differences between two or more research subjects across different variables. These types of questions help the researcher to identify distinct features that mark one research subject from the other while highlighting existing similarities. 

Asking comparative research questions in market research surveys can provide insights into how your product or service compares with its competitors. In addition, it can help you identify the strengths and weaknesses of your product for a better competitive advantage.

The 5 steps involved in the framing of comparative research questions are: 

  • Choose your starting phrase
  • Identify and name the dependent variable
  • Identify the groups you are interested in
  • Identify the appropriate adjoining text
  • Write out the comparative research question

Comparative Research Question Samples 

  • What are the differences between a landline telephone and a smartphone?
  • What are the differences between work-from-home and on-site operations?


  • Relationship-based Research Questions  

Just like the name suggests, a relationship-based research question is one that inquires into the nature of the association between two research subjects within the same demographic. These types of research questions help you to gather information pertaining to the nature of the association between two research variables. 

Relationship-based research questions are also known as correlational research questions because they seek to clearly identify the link between 2 variables. 


Examples of relationship-based research questions include: 

  • What is the relationship between purchasing power and the business site?
  • What is the relationship between the work environment and workforce turnover?


Examples of a Good Research Question

Since research questions lie at the core of any systematic investigations, it is important to know how to frame a good research question. The right research questions will help you to gather the most objective responses that are useful to your systematic investigation. 

A good research question is one that requires impartial responses and can be answered via existing sources of information. Also, a good research question seeks answers that actively contribute to a body of knowledge; hence, it is a question that is yet to be answered in your specific research context.

  • Open-Ended Questions

An open-ended question is a type of research question that does not restrict respondents to a set of predetermined answer options. In other words, it is a question that allows the respondent to freely express his or her perceptions and feelings towards the research subject.

Examples of Open-ended Questions

  • How do you deal with stress in the workplace?
  • What is a typical day at work like for you?

  • Close-ended Questions

A close-ended question is a type of survey question that restricts respondents to a set of predetermined answers such as multiple-choice questions . Close-ended questions typically require yes or no answers and are commonly used in quantitative research to gather numerical data from research participants. 

Examples of Close-ended Questions

  • Did you enjoy this event?
  • How likely are you to recommend our services?
  • Very Likely
  • Somewhat Likely

  • Likert Scale Questions

A Likert scale question is a type of close-ended question that is structured as a 3-point, 5-point, or 7-point psychometric scale . This type of question is used to measure the survey respondent’s disposition towards multiple variables and it can be unipolar or bipolar in nature. 

Example of Likert Scale Questions

  • How satisfied are you with our service delivery?
  • Very dissatisfied
  • Not satisfied
  • Very satisfied

  • Rating Scale Questions

A rating scale question is a type of close-ended question that seeks to associate a specific qualitative measure (rating) with the different variables in research. It is commonly used in customer experience surveys, market research surveys, employee reviews, and product evaluations. 

Example of Rating Questions

  • How would you rate our service delivery?

Examples of a Bad Research Question

Knowing what bad research questions are would help you avoid them in the course of your systematic investigation. These types of questions are usually unfocused and often result in research biases that can negatively impact the outcomes of your systematic investigation. 

  • Loaded Questions

A loaded question is a question that subtly presupposes one or more unverified assumptions about the research subject or participant. This type of question typically boxes the respondent into a corner because it carries implicit and explicit biases that prevent objective responses.

Example of Loaded Questions

  • Have you stopped smoking?
  • Where did you hide the money?

  • Negative Questions

A negative question is a type of question that is structured with an implicit or explicit negator. Negative questions can be misleading because they upturn the typical yes/no response order by requiring a negative answer for affirmation and an affirmative answer for negation. 

Examples of Negative Questions

  • Would you mind dropping by my office later today?
  • Didn’t you visit last week?

  • Leading Questions

A leading question is a type of survey question that nudges the respondent toward an already-determined answer. It is highly suggestive in nature and typically contains biases and unverified assumptions that point toward its premeditated responses.

Examples of Leading Questions

  • If you enjoyed this service, would you be willing to try out our other packages?
  • Our product met your needs, didn’t it?

How to Use Formplus as Online Research Questionnaire Tool  

With Formplus, you can create and administer your online research questionnaire easily. In the form builder, you can add different form fields to your questionnaire and edit these fields to reflect specific research questions for your systematic investigation. 

Here is a step-by-step guide on how to create an online research questionnaire with Formplus: 

  • Sign in to your Formplus account, then click on the “create new form” button in your dashboard to access the Form builder.


  • In the form builder, add preferred form fields to your online research questionnaire by dragging and dropping them into the form. Add a title to your form in the title block. You can edit form fields by clicking on the “pencil” icon on the right corner of each form field.


  • Save the form to access the customization section of the builder. Here, you can tweak the appearance of your online research questionnaire by adding background images, changing the form font, and adding your organization’s logo.


  • Finally, copy your form link and share it with respondents. You can also use any of the multiple sharing options available.


Conclusion  

The success of your research starts with framing the right questions to help you collect the most valid and objective responses. Be sure to avoid bad research questions like loaded and negative questions that can be misleading and adversely affect your research data and outcomes. 

Your research questions should clearly reflect the aims and objectives of your systematic investigation while laying emphasis on specific contexts. To help you seamlessly gather responses for your research questions, you can create an online research questionnaire on Formplus.  




Research Methods Knowledge Base


Types of Research Questions


There are three basic types of questions that research projects can address:

  • Descriptive. When a study is designed primarily to describe what is going on or what exists. Public opinion polls that seek only to describe the proportion of people who hold various opinions are primarily descriptive in nature. For instance, if we want to know what percent of the population would vote for a Democratic or a Republican in the next presidential election, we are simply interested in describing something.
  • Relational. When a study is designed to look at the relationships between two or more variables. A public opinion poll that compares what proportion of males and females say they would vote for a Democratic or a Republican candidate in the next presidential election is essentially studying the relationship between gender and voting preference.
  • Causal. When a study is designed to determine whether one or more variables (e.g., a program or treatment variable) causes or affects one or more outcome variables. If we did a public opinion poll to try to determine whether a recent political advertising campaign changed voter preferences, we would essentially be studying whether the campaign (cause) changed the proportion of voters who would vote Democratic or Republican (effect).

The three question types can be viewed as cumulative. That is, a relational study assumes that you can first describe (by measuring or observing) each of the variables you are trying to relate. And, a causal study assumes that you can describe both the cause and effect variables and that you can show that they are related to each other. Causal studies are probably the most demanding of the three.



Published on 14.2.2024 in Vol 26 (2024)

Analyzing Reddit Forums Specific to Abortion That Yield Diverse Dialogues Pertaining to Medical Information Seeking and Personal Worldviews: Data Mining and Natural Language Processing Comparative Study

Authors of this article:


Original Paper

  • Danny Valdez, PhD
  • Lucrecia Mena-Meléndez, PhD
  • Brandon L Crawford, PhD
  • Kristen N Jozkowski, PhD

Department of Applied Health Science, Indiana University School of Public Health, Bloomington, IN, United States

Corresponding Author:

Danny Valdez, PhD

Department of Applied Health Science

Indiana University School of Public Health

1025 E 7th Street

Bloomington, IN, 47403

United States

Phone: 1 8128038955

Email: [email protected]

Background: Attitudes toward abortion have historically been characterized via dichotomized labels, yet research suggests that these labels do not appropriately encapsulate beliefs on abortion. Rather, contexts, circumstances, and lived experiences often shape views on abortion into more nuanced and complex perspectives. Qualitative data have also been shown to underpin belief systems regarding abortion. Social media, as a form of qualitative data, could reveal how attitudes toward abortion are communicated publicly in web-based spaces. Furthermore, in some cases, social media can also be leveraged to seek health information.

Objective: This study applies natural language processing and social media mining to analyze Reddit (Reddit, Inc) forums specific to abortion, including r/Abortion (the largest subreddit about abortion) and r/AbortionDebate (a subreddit designed to discuss and debate worldviews on abortion). Our analytical pipeline is designed to identify potential themes within the data and the affect expressed in each post.

Methods: We applied a neural network–based topic modeling pipeline (BERTopic) to uncover themes in the r/Abortion (n=2151) and r/AbortionDebate (n=2815) subreddits. After deriving the optimal number of topics per subreddit using an iterative coherence score calculation, we performed a sentiment analysis using the Valence Aware Dictionary and Sentiment Reasoner to assess positive, neutral, and negative affect and an emotion analysis using the Text2Emotion lexicon to identify potential emotionality per post. Differences in affect and emotion by subreddit were compared.
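
As an illustration of the kind of pipeline the Methods describe, the following condensed Python sketch combines BERTopic, VADER, and Text2Emotion; the package choices, parameters, and example posts are assumptions for illustration and do not reproduce the study’s exact configuration (for example, the iterative coherence score tuning is omitted).

```python
# Condensed sketch of a BERTopic + VADER + Text2Emotion pipeline of the kind
# described above. Package choices (bertopic, vaderSentiment, text2emotion)
# and parameters are illustrative, not taken from the study.

from bertopic import BERTopic
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
import text2emotion as te

posts = [
    "Looking for information about what to expect after a medication abortion.",
    "My views on this issue have changed a lot over the years.",
    # ... in practice, a much larger corpus of subreddit posts is required
    #     for the embedding and clustering steps to run meaningfully.
]

# 1) Topic modeling: fit BERTopic and inspect the derived topics.
#    The number of topics would be tuned separately, e.g., via coherence scores.
topic_model = BERTopic(nr_topics=10)
topics, probs = topic_model.fit_transform(posts)
print(topic_model.get_topic_info())

# 2) Sentiment analysis: compound VADER score per post (-1 to +1)
analyzer = SentimentIntensityAnalyzer()
compound_scores = [analyzer.polarity_scores(p)["compound"] for p in posts]

# 3) Emotion analysis: Text2Emotion scores (Happy, Angry, Surprise, Sad, Fear)
emotion_scores = [te.get_emotion(p) for p in posts]

print(compound_scores)
print(emotion_scores)
```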

Results: The iterative coherence score calculation revealed 10 topics for both r/Abortion (coherence=0.42) and r/AbortionDebate (coherence=0.35). Topics in the r/Abortion subreddit primarily centered on information sharing or offering a source of social support; in contrast, topics in the r/AbortionDebate subreddit centered on contextualizing shifting or evolving views on abortion across various ethical, moral, and legal domains. The average compound Valence Aware Dictionary and Sentiment Reasoner scores for the r/Abortion and r/AbortionDebate subreddits were 0.01 (SD 0.44) and −0.06 (SD 0.41), respectively. Emotionality scores were consistent across the r/Abortion and r/AbortionDebate subreddits; however, r/Abortion had a marginally higher average fear score of 0.36 (SD 0.39).

Conclusions: Our findings suggest that people posting on abortion forums on Reddit are willing to share their beliefs, which manifested in diverse ways, such as sharing abortion stories, describing how their worldviews changed (which calls into question the value of dichotomized abortion identity labels), and seeking information. Notably, the style of discourse varied significantly by subreddit. r/Abortion was principally leveraged as an information and outreach source; r/AbortionDebate largely centered on debate across various legal, ethical, and moral abortion domains. Collectively, our findings suggest that abortion remains an opaque yet politically charged issue for people and that social media can be leveraged to understand views and circumstances surrounding abortion.

Introduction

Although the abortion debate is often framed along strict proabortion or antiabortion stances (eg, prochoice versus prolife—terms common in the United States, Ireland, and other English-speaking countries; pro-elección versus provida and pro-aborto versus anti-aborto —terms used in Mexico, Argentina, and other Spanish-speaking countries), actual abortion beliefs are complex, contextual, and at times contradictory [ 1 - 4 ]. Notably, despite media characterizations of these 2 oppositional perspectives—for people ascribing to either proabortion or prochoice labels (ie, broad abortion support) or antiabortion or prolife labels (ie, broad abortion opposition)—there exist circumstances in which people’s views diverge from the dichotomy [ 5 ]. These circumstances include, for example, the gestation period of pregnancy [ 6 ], the context for seeking abortion [ 7 ], and whether people consider abortion as a legal versus moral issue [ 8 ]. In addition, attitudes toward abortion also vary across some demographic characteristics such as age, educational attainment, political affiliation, and race or ethnicity of a person or groups of people participating in a survey [ 1 , 9 , 10 ].

Beyond context-specific or cultural considerations that may predict complex abortion views, personal accounts, narratives, and discussions about abortion may similarly reveal the extent to which abortion views depart from a support-or-opposition dichotomy, including in extreme abortion circumstances or with personal experience of abortion. Evidence suggests that these considerations are not culture-specific but shared globally. Research comparing abortion beliefs between English-speaking US residents and Spanish-speaking US residents of diverse nations of origin demonstrates clear general differences in beliefs between the two groups. Following investigations of the abovementioned considerations, we suggest that further research may yield more precise insights into evolving views on abortion [ 11 ].

Contextual, contradictory, and, in some cases, changing beliefs on abortion make it difficult to accurately assess global and US abortion climates beyond rote and dichotomized categories [ 12 ]. However, evidence strongly suggests that the US and global populations hold views that depart from these 2 categories, reflecting abortion attitude complexity [ 1 , 10 ]. Although survey data have quantitatively supported the idea of abortion attitude complexity, qualitative data, broadly defined as any type of open-ended text, audio, visual, or language data, may add nuance to suggest where and how complexity emerges. For example, interviews about abortion reveal specific circumstances that contribute to variability in people’s views on abortion [ 4 ] or reveal how current events and news cycles, in turn, shape social beliefs and attitudes [ 13 ]. Qualitative data can also inform how people contextualize assistance-related resources such as those found on social media.

Social media posts, as a novel form of qualitative data, may similarly reveal how people view abortion and the associated complexity of belief systems at a population-level scale. Notably, social media’s inescapable role in the public lexicon has evolved over time, turning platforms into outlets for community building and information dissemination that can connect users over shared interests regardless of location [ 14 ]. For example, the Pew Research Center reports that more than three-quarters of the US adult population regularly use at least 1 social media platform [ 15 ] and that half of all users have actively maintained at least 1 account for more than a decade. Because social media data are part of the public domain, longitudinal tracking of such data can represent an open-access running diary of thoughts, perspectives, and affective indicators—particularly for issues deemed controversial or contentious, including COVID-19 vaccination status, marriage equality, transgender sports bans, and abortion [ 16 , 17 ]. Furthermore, social media data are global, meaning that shared languages, regardless of geographic constraint, can contribute to discourses about abortion and the associated beliefs therein.

Research has documented that people use social media to share their opinions and views and engage in debates on various topics, as well as to seek help and information and solicit personal advice that pertains to their situation or to something they are going through in life [ 18 - 20 ]. These web-based interactions vary widely across social media platforms and topics but may include discussions about substance use disorders [ 21 ], mental health [ 22 ], sexual assault [ 23 ], and managing HIV treatment [ 24 ], among a wide range of other topics. Furthermore, some more limited research has explored social media users’ engagement and interactions as part of sharing personal experiences, soliciting help, and requesting information pertaining to abortion. This research has focused particularly on assessing how social media users rely on each other to discuss cost-related barriers to abortion care [ 25 ], to discuss decision-making processes regarding abortion methods [ 26 ], and to seek support to make abortion decisions when they may lack familial and medical support otherwise [ 27 ].

Reddit (Reddit, Inc) is a social networking website defined by a structure that allows users to subscribe to forums on diverse topics, both controversial and noncontroversial. Its approach to topic discussion is distinct from that of other social media platforms in that users can opt into conversations with different foci depending on their needs and interests. For example, previous research has demonstrated that Reddit can serve as a social connection metric, information-sharing tool, and outreach resource [ 28 ] for controversial or contentious social topics, including sexual assault [ 29 ], abortion [ 30 ], and addiction and recovery [ 21 ]. For most users, Reddit forums are a source of information on these topics. However, many of these same topics, particularly those with political contexts, can also be discussed on different Reddit forums from more social commentary or debate-style perspectives. Abortion is one example of a contentious social topic with a range of subreddits pertaining to different aspects of abortion, including subreddits serving as social connection and information-sharing tools and others serving as debate platforms.

Analyzing different facets of the same topic through various subreddits could yield nuanced insights into crucial health topics that are distinct from those of other quantitative and qualitative abortion research. Notably, as of December 2022, Reddit was the 20th most accessed website globally (sixth in the United States); 50% of all Reddit users reside in the United States, with Canada, Australia, and the United Kingdom comprising approximately 20% of the total. Reddit data can therefore principally serve as a window into views on abortion in the United States; however, because not all English-language data originate in the United States, it is also possible to observe abortion attitude complexity in a more Westernized but global context, or global reactions to news related to abortion in the United States.

Advances in computational data mining have made it feasible to extract, analyze, and interpret these data en masse. This study used natural language processing (NLP) and data mining methods to identify and visualize latent themes across 2 distinct subreddits specific to abortion: r/Abortion and r/AbortionDebate. As a comparative study, we aimed to compare the semantic and content differences across these subreddits to gain a comprehensive social media portrait of abortion dialogue on Reddit. This study was guided by three research questions:

  • What themes emerge in a corpus of Reddit posts in r/Abortion, the largest subreddit dedicated to abortion social support and outreach?
  • What themes emerge in a corpus of Reddit posts in r/AbortionDebate?
  • What do similarities and differences by subreddits implicate regarding social media–derived beliefs and ideologies on abortion?

Data for this study were collected over 5 months (ie, from January to May 2022) from the social networking website Reddit. Reddit represents an open network of communities where users can engage and connect with others over shared interests, hobbies, or personal experiences. Unlike other popular social media websites used for computational analyses, including X (X Corp, formerly known as Twitter), Reddit allows users to create specific channels to form communities with other interested parties on diverse issues or topics. These channels, otherwise known as subreddits, comprise people with shared identities who find, subscribe to, and post within these channels. For instance, people interested in gaming can join the r/Gaming subreddit and people with depression can join the r/Depression subreddit.

We leveraged the PRAW (Python Reddit Application Programming Interface Wrapper) [ 31 ], a third-party application programming interface (API), to collect data for this study, specifically to isolate and download English-language content posted to subreddits germane to abortion; we also queried the API to locate similar subreddits spanning abortion-related topics. This query returned 1 additional subreddit: r/AbortionDebate. Given observable differences in framing (ie, people’s abortion experiences vs debates about abortion), we included this subreddit in our study as an additional, distinct unit of analysis; that is, we collected and stored data for r/Abortion and r/AbortionDebate as separate corpora intended for separate analyses. All data collected for this study were in English, which we selected for 2 reasons: first, >70% of all Reddit users originate from English-speaking countries, and second, at the time of data collection, Reddit posts in languages other than English were insufficient for analysis. In Spanish, for example, r/Aborto contained only 5 members, with no activity since 2019; similarly, we observed <50 Spanish-language posts in either r/Abortion or r/AbortionDebate.

Once we identified our subreddits of interest, we queried the API to collect new posts and top posts from the r/Abortion and r/AbortionDebate subreddits. After filtering our data for duplicates and accounting for API data scraping limits, our final sample comprised 4966 posts divided into 2 corpora: 56.68% (2815/4966) r/AbortionDebate posts and 43.31% (2151/4966) r/Abortion posts.
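
As a rough illustration of this collection step, the sketch below uses PRAW to search for abortion-related subreddits and to pull new and top posts before deduplicating them; the credentials, listing limits, and stored fields are placeholders rather than values from the study.

```python
import praw

# Hedged sketch of the PRAW collection step; credentials and limits are placeholders.
reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="abortion-discourse-study (research script)",
)

# One way to locate related subreddits (the exact query used is not reported).
related = [s.display_name for s in reddit.subreddits.search("abortion", limit=10)]

posts = []
for name in ("Abortion", "AbortionDebate"):
    subreddit = reddit.subreddit(name)
    for submission in list(subreddit.new(limit=1000)) + list(subreddit.top(limit=1000)):
        posts.append({
            "subreddit": name,
            "id": submission.id,
            "title": submission.title,
            "text": submission.selftext,
        })

# Drop posts that appear in both the "new" and "top" listings.
unique_posts = list({p["id"]: p for p in posts}.values())
print(len(related), len(unique_posts))
```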

We aimed to use NLP to identify salient categories in the r/Abortion and r/AbortionDebate subreddits. In numerous studies, latent Dirichlet allocation (LDA) topic models have been predominantly used for this purpose. LDA is a well-regarded unsupervised probabilistic model that evaluates word co-occurrence patterns using an iterative Gibbs sampling method [ 32 ]. Although LDA is often considered the gold standard within many academic and professional communities, advancements in NLP, artificial intelligence, and neural networks have introduced innovative topic modeling methods that can more closely approximate the potential meaning in these categories [ 33 ].
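
For contrast with the transformer-based approach adopted here, a minimal LDA baseline in gensim might look like the sketch below; the toy token lists are invented for illustration, and note that gensim fits LDA with online variational Bayes rather than Gibbs sampling.

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel

# Toy token lists standing in for preprocessed Reddit sentences.
tokenized_docs = [
    ["abortion", "pill", "experience", "clinic"],
    ["law", "texas", "restrictive", "abortion"],
    ["support", "friend", "clinic", "waiting", "room"],
    ["morality", "debate", "personhood", "rights"],
]

dictionary = Dictionary(tokenized_docs)
corpus = [dictionary.doc2bow(doc) for doc in tokenized_docs]

# Fit a small LDA model and print its top words per topic.
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=4,
               passes=10, random_state=42)
for topic_id, words in lda.print_topics(num_words=5):
    print(topic_id, words)
```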

For this study, we applied one such advancement, the Bidirectional Encoder Representations from Transformers (BERT) topic modeling tool, BERTopic. BERTopic is an NLP topic modeling approach used to identify latent themes or topics within a collection of interrelated documents [ 34 ]. Unlike LDA, which uses probabilistic modeling to identify latent topics, BERTopic leverages pretrained embeddings from one of many transformer models, a type of neural network architecture that maps an input sequence to numerical embeddings using a large-scale pretrained language model [ 35 ]. Embeddings convert unstructured data, including words and sentences, into fixed-length continuous vectors. These vectors enable mathematical operations that capture semantic meanings, relationships, and other properties of natural human language.
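
As a small illustration of this embedding step, the sketch below encodes a few invented sentences with the sentence-transformers library; the model name matches the one described later in the workflow.

```python
from sentence_transformers import SentenceTransformer

# Toy sentences standing in for preprocessed Reddit sentences.
sentences = [
    "I chose the pill because of travel restrictions where I live.",
    "Where do people draw the line between acceptable and not acceptable?",
    "It took me a few days to feel like myself again.",
]

# all-MiniLM-L6-v2 maps each sentence to a 384-dimensional vector.
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(sentences)
print(embeddings.shape)  # (3, 384)
```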

The vectors calculated using this approach tend to be highly dimensional and difficult to interpret. To reduce dimensionality while maintaining the integrity of our data, we applied a principal component analysis, which is commonly used in NLP for general dimensionality reduction purposes [ 36 ]. This analysis allowed us to extract and more easily interpret a range of possible clusters or topics in both the r/Abortion and r/AbortionDebate subreddit data. Once we reduced the dimensionality of our vectors, we applied Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN) to identify latent clusters or topics [ 37 ], CountVectorizer to tokenize each topic, and class-based term frequency–inverse document frequency (c-TF-IDF) to extract topic words for each cluster [ 38 ].
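
A minimal sketch of this pipeline with the BERTopic library is shown below, with PCA swapped in for BERTopic's default dimensionality reduction; the helper name fit_topic_model, the cluster-size value, and the stop-word choice are illustrative assumptions rather than the study's settings.

```python
from bertopic import BERTopic
from bertopic.vectorizers import ClassTfidfTransformer
from hdbscan import HDBSCAN
from sklearn.decomposition import PCA
from sklearn.feature_extraction.text import CountVectorizer


def fit_topic_model(sentences, nr_topics=None):
    """Fit the PCA + HDBSCAN + c-TF-IDF pipeline on preprocessed sentences."""
    topic_model = BERTopic(
        embedding_model="all-MiniLM-L6-v2",      # sentence embedding model named later in the paper
        umap_model=PCA(n_components=5),          # BERTopic accepts any reducer with fit/transform
        hdbscan_model=HDBSCAN(min_cluster_size=15, prediction_data=True),
        vectorizer_model=CountVectorizer(stop_words="english"),
        ctfidf_model=ClassTfidfTransformer(),
        nr_topics=nr_topics,                     # optionally reduce to a fixed topic count
    )
    topics, probabilities = topic_model.fit_transform(sentences)
    return topic_model, topics, probabilities
```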

Furthermore, to gauge the emotional tone or mood represented in each post from the studied corpora, we applied the Valence Aware Dictionary and Sentiment Reasoner (VADER), a rule-based sentiment analysis tool [ 39 ], and Text2Emotion, a rule-based emotion analysis tool [ 40 ]. VADER is a lexicon- and rule-based algorithm that examines the polarity of words within each social media post. Posts are compared against a lexicon, or web-based dictionary, precoded with valence values for positive and negative English words. When posts are run through the VADER lexicon, they receive a compound score. Negative VADER values denote lower affect (ie, −0.99 to −0.01), and positive values denote higher affect (ie, 0.01 to 0.99). Although an older tool, VADER is commonly used to assess the affective and emotional tone of content. In contrast, the Text2Emotion tool scans each entry for key phrases and terms denoting one of five base emotions: (1) happy, (2) angry, (3) surprise, (4) sad, and (5) fear. Collectively, these 2 tools can identify potential tonal differences in each post, again implicating the different uses of each subreddit included in the analysis. Both tools have been applied extensively in computational public health studies owing to their ease of access, replicability, and numerous validation studies [ 16 , 21 , 41 , 42 ].
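
The sketch below shows how a single post might be scored with both tools, assuming the vaderSentiment and text2emotion Python packages; the example post is adapted from an excerpt quoted later in the paper.

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
import text2emotion as te

post = "It took me a few days to finally feel like myself again post-abortion."

# VADER returns a compound polarity score in [-1, 1].
analyzer = SentimentIntensityAnalyzer()
compound = analyzer.polarity_scores(post)["compound"]

# Text2Emotion returns scores for five emotions: Happy, Angry, Surprise, Sad, Fear.
emotions = te.get_emotion(post)

print(compound, emotions)
```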

Our workflow is depicted in Figure 1. First, we queried the Reddit API to archive top and new posts from the r/Abortion and r/AbortionDebate subreddits. Data collected from each subreddit were saved as separate data files. After removing duplicate and non-English posts from either file, we applied standard preprocessing steps to remove parts of speech that would detract from the clarity of our models, including articles, prepositions, punctuation, abbreviations, and numbers [ 43 ]. Once the data were cleaned, we tokenized them at the sentence level and proceeded with our BERTopic pipeline. First, to calculate embeddings we applied all-MiniLM-L6-v2 [ 44 ], a compact transformer-based model built on Microsoft’s MiniLM. This model is designed to be smaller and more efficient than larger models, including generative pretrained transformers and T5, which may make it more appropriate for smaller data sets; however, more research is needed to confirm this notion. Once we calculated embeddings for all sentences in each corpus, we applied a principal component analysis to reduce dimensionality in our data, retaining 5 components. We then ran iterative topic models ranging from 10 to 80 topics and calculated coherence scores [ 45 ] to identify an optimal number of topics, retaining the solution with the highest coherence score. For both r/Abortion and r/AbortionDebate, the optimal solution was 10 topics, yielding coherence scores of 0.42 and 0.35, respectively, which indicates a marginal fit. After we extracted key terms per topic, we applied a sorting function to examine key terms in each entry; each entry was then classified into one of 10 possible topics in either corpus. Lastly, we performed a VADER sentiment analysis and Text2Emotion emotion analysis for each entry in both corpora.
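
One way to implement the coherence-based selection of a topic count, assuming gensim's CoherenceModel and the fit_topic_model helper sketched earlier (with tokenized_docs holding the token lists produced during preprocessing), is shown below; this is a reconstruction of the described procedure rather than the authors' code.

```python
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel


def best_topic_count(sentences, tokenized_docs, candidates=range(10, 81, 10)):
    """Fit models over a range of topic counts and keep the most coherent solution."""
    dictionary = Dictionary(tokenized_docs)
    scores = {}
    for n in candidates:
        topic_model, topics, _ = fit_topic_model(sentences, nr_topics=n)
        topic_words = [
            [word for word, _ in topic_model.get_topic(topic_id)]
            for topic_id in sorted(set(topics))
            if topic_id != -1  # skip HDBSCAN's outlier cluster
        ]
        scores[n] = CoherenceModel(
            topics=topic_words,
            texts=tokenized_docs,
            dictionary=dictionary,
            coherence="c_v",
        ).get_coherence()
    best = max(scores, key=scores.get)
    return best, scores
```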


Ethical Considerations

This study involved a secondary analysis of deidentified and anonymized Reddit posts collected between January and May 2022. As this was an observational study with no contact with human subjects and no possible way to trace posts to any individual author, it was exempt from Institutional Review Board review.

Our study applied computational tools to collect and analyze subreddits specific to abortion. We aimed to examine how abortion was discussed on the social media platform Reddit, both as an information-sharing tool and as a platform for debating worldviews.

What Themes Emerged in a Corpus of Reddit Posts in r/Abortion, the Largest Subreddit Dedicated to Abortion Social Support and Outreach?

Our coherence score analysis indicated a 10-topic solution for the r/Abortion subreddit. Table 1 outlines each topic by keywords, the number of sentences belonging to each topic, and the percentage of each topic relative to the larger corpus. Names for each topic were derived by reviewing small excerpts of Reddit data that were sorted into one of the 10 topics by a keyword-based sorting function.
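
A hypothetical version of such a keyword-based sorting step is sketched below; the function name and keyword list are invented for illustration and are not taken from the study.

```python
def excerpts_for_topic(sentences, keywords, k=5):
    """Return up to k sentences containing the most of a topic's keywords."""
    scored = [
        (sum(keyword in sentence.lower() for keyword in keywords), sentence)
        for sentence in sentences
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [sentence for score, sentence in scored[:k] if score > 0]


# Example: pull candidate excerpts for a "sharing support" style topic.
examples = excerpts_for_topic(
    ["My friend supported me through the whole experience.",
     "Texas passed a very restrictive law."],
    keywords=["support", "friend", "experience"],
)
print(examples)
```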

The r/Abortion subreddit analysis revealed numerous ways in which abortion was discussed in a social support context. The most prominent topic, topic 1: sharing support, comprised the bulk of the conversation with >18% total representation. Social support commonly manifested through people sharing their own experiences with abortion or through friends and family members of someone who may have experienced an abortion. This was further evidenced by multiple topics containing information-sharing content: abortion experience (topics 3, 7, and 10). Beyond social support, several topics also appeared to discuss abortion in a neutral and educational information-sharing context: general abortion (topic 5) and general pregnancy (topic 8).

Further review of the topics added context to our findings. Table 2 provides a summary of each topic and the key excerpts that denote additional meaning. As shown in Table 2 , there was little content indicative of debate or questioning one’s position on abortion. Instead, we observed personal experience sharing, including narrative accounts of one’s experience with abortion generally, miscarriage, and medication abortion specifically.

One of the most recurrent patterns in our data was frank discussion of postabortion feelings in a clinical setting (“I felt so nauseous in that waiting room, I was not sure I could go through with it”), a postabortion setting (“It took me a few days to finally feel like myself again post-abortion”), or a medication abortion context (“The mifepristone caused some pretty intense clotting after I took the pill”). The medication abortion narratives were sometimes framed as someone explaining their decision (“I chose the pill because where I live you cannot have someone with you when getting an abortion due to COVID-19”).

What Themes Emerged in a Corpus of Reddit Posts in the r/AbortionDebate Subreddit?

Our coherence score analysis indicated a 10-topic solution for the r/AbortionDebate subreddit. Table 3 outlines each topic by keywords, the number of sentences belonging to each topic, and the percentage of each topic relative to the larger corpus. Names for each topic were derived by reviewing small excerpts of Reddit data that were sorted into one of the 10 topics by a keyword-based sorting function.

Unlike the r/Abortion subreddit, which appeared to be used in a social support and information-sharing context, the r/AbortionDebate subreddit comprised conversations dedicated to critically assessing abortion from legal, moral, and ethical perspectives. The topic with the greatest representation was topic 1: Reddit forum rules and regulations. In topic 1, we observed several posts directly from moderators explicitly warning against outright attacks, misinformation, and vitriol targeted at people with opposing views on abortion; this topic was completely absent from the r/Abortion topic model. The second most prominent topic, topic 2: abortion morality, centered on debating abortion from a moral perspective. The topic with the smallest representation was topic 5, pertaining to general pregnancy. At face value, we did not observe much overlap in topic content between the r/AbortionDebate and r/Abortion subreddits. However, we reviewed additional excerpts to ascribe deeper meaning to these topics and to examine precisely how abortion debates manifested on these forums.

Table 4 outlines additional information about each topic, including a summary and key excerpts that implicate deeper meaning. This additional analysis allowed us to examine more precise moral, legal, and ethical arguments pertaining to people’s expressed views on abortion.

For example, we observed that the abortion morality topic typically contained content related to drawing lines about abortion permissibility (“Where do [people] draw the line between acceptable and not acceptable”). This style of discussion was mirrored in conversations about fetal personhood (“Who here honestly believes a zygote is a person with rights?”) and the role spirituality plays in moral arguments about abortion (“But what do Catholics really think on this issue?”). Discussions and arguments about abortion morality were notably similar to the content in topic 3: abortion legality . Content on this topic typically discussed new abortion-related laws, the merits of those laws, and opinions about their relative effectiveness (“Texas passed a very restrictive law and it will serve as a benchmark for other states, watch”). Importantly, and across topics, we observed that people declared their abortion views (“I am pro-choice and I will always be”) and, in some cases, discussed how their abortion views evolved over time (“I am pro-life, but we should be discussing the merits of abortion as a life-saving tool here”). Here, we observed more opinions than the outright support articulated in the r/Abortion subreddit.

What Do Similarities and Differences by Subreddit Implicate About Social Media–Derived Abortion Beliefs and Ideologies?

Figure 2 visually represents data from each subreddit, where dense, overlapping clusters signify similar topics (or higher collinearity) and nonoverlapping circles indicate dissimilar topics (or lower collinearity).
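
Intertopic distance maps of this kind can be generated directly from a fitted BERTopic model; whether the authors used this built-in visualization or another plotting route is not stated, so the call below is an assumption.

```python
# topic_model is a fitted BERTopic model (see the pipeline sketch above).
fig = topic_model.visualize_topics()      # interactive intertopic distance map
fig.write_html("intertopic_distance_map.html")
```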

In both the r/Abortion and r/AbortionDebate subreddits, the intertopic distance maps depict mutual exclusivity in the general abortion and pregnancy topics, distinguished by basic sharing of language and specific information related to pregnancy and abortion (“Sometimes a pregnancy can end without warning or reason”; “abortion is a women’s health issue”). Beyond these statements, however, other conversations exhibit richer and more nuanced discourse about abortion, overlapping between topics, offering deeper insights into individuals’ worldviews on abortion, and portraying how various co-occurring factors influence those beliefs (“Laws are one thing but have you considered the humanistic side of it all?”).

We used the VADER and Text2Emotion tools to discern affective differences between the r/Abortion and r/AbortionDebate subreddits. The r/Abortion subreddit displayed a compound VADER score of 0.10, reflecting overall neutral content, whereas the r/AbortionDebate subreddit displayed a score of −0.06, denoting neutral to slightly negative content. The emotion analysis findings for the r/Abortion subreddit were as follows: happy (mean 0.06, SD 0.19), angry (mean 0.20, SD 0.31), surprise (mean 0.12, SD 0.26), sad (mean 0.20, SD 0.31), and fear (mean 0.36, SD 0.39). The emotion analysis findings for the r/AbortionDebate subreddit were as follows: happy (mean 0.12, SD 0.27), angry (mean 0.05, SD 0.18), surprise (mean 0.11, SD 0.25), sad (mean 0.22, SD 0.35), and fear (mean 0.28, SD 0.35).
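
Per-subreddit means and standard deviations like these can be computed with a simple aggregation once every post has been scored; the data frame layout, column names, and values below are assumptions for illustration.

```python
import pandas as pd

# One row per scored post; values are invented placeholders.
scored_posts = pd.DataFrame([
    {"subreddit": "Abortion", "compound": 0.3, "happy": 0.0, "angry": 0.2,
     "surprise": 0.1, "sad": 0.2, "fear": 0.5},
    {"subreddit": "Abortion", "compound": -0.1, "happy": 0.1, "angry": 0.3,
     "surprise": 0.1, "sad": 0.3, "fear": 0.4},
    {"subreddit": "AbortionDebate", "compound": -0.4, "happy": 0.1, "angry": 0.0,
     "surprise": 0.2, "sad": 0.3, "fear": 0.3},
    {"subreddit": "AbortionDebate", "compound": 0.2, "happy": 0.2, "angry": 0.1,
     "surprise": 0.1, "sad": 0.2, "fear": 0.2},
])

summary = (
    scored_posts
    .groupby("subreddit")[["compound", "happy", "angry", "surprise", "sad", "fear"]]
    .agg(["mean", "std"])
    .round(2)
)
print(summary)
```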


Furthermore, the Text2Emotion variable fear was prominent in r/Abortion, whereas happy was slightly more elevated in r/AbortionDebate. These observed differences are likely attributed to the differing nature and scope of the subreddits. For instance, the manifestation of fear may be more related to personal abortion narratives in r/Abortion, whereas happiness may arise from occasional friendly exchanges of views in r/AbortionDebate.

Despite their different foci, both subreddits contain myriad conversation topics, allowing for civil and enlightening discussions on evolving abortion views and ideologies. The discourse in these forums sometimes hints at the evolution of individual ideologies over time, reflecting the dynamic nature of personal beliefs and the influences shaping them (adapted excerpt: “I guess I just don’t know my views”; excerpt: “My opinion changed over time, growing up in a Christian household I was always against abortion...until I needed one myself”). This phenomenon underscores the essential role of such platforms in fostering understanding and dialogue on the multifaceted issue of abortion.

Our study leveraged Reddit data as a novel, big data form of qualitative data to examine abortion discourse on r/Abortion and r/AbortionDebate subreddits. We observed several important themes, including evidence of complexity in abortion-related social media posts, which warrant further discussion.

The r/Abortion Subreddit as an Information-Seeking or Information-Sharing Platform for People With Questions About Their Abortion Experiences

Within the r/Abortion subreddit, we noticed posters using this platform to discuss abortion in diverse, sometimes overlapping contexts. However, each topic emerging from r/Abortion typically involved a degree of information sharing, whether through the provision of available resources or sharing personal narratives and experiences with abortion. We primarily observed these types of posts in topic 1: sharing support ; topic 2: postabortion emotions ; topics 3, 7, and 10: abortion experience ; and topic 5: clinical experience . The content within these topics typically involved direct sharing of one’s experiences related to abortion or posing highly specific questions about access (eg, excerpt: “Is abortion legal past 6 weeks gestation in Oklahoma?”) and medication abortion (eg, excerpt: “Abortion is legal here; can I get abortion pills by mail?”). Within the medication abortion topic, the content was both informative and supportive, with some posters sharing their experience in solidarity with others facing a similar choice. Notably, we did not observe any critiques against anyone’s abortion narratives; rather, the tone and structure, as also evident in this study’s VADER and emotion analysis, are largely informative and overall supportive of abortion. Given that the rules and guidelines established this subreddit as a place of nonconfrontational discussion, perhaps people advocating for other reproductive choices may have shared their perspectives in other subreddits, such as those related to adoption.

We acknowledge the possible connection between personal tendencies to share intimate information and the continually evolving role of the internet as a medium for social connection and information acquisition [ 46 ]. Notably, over the past 3 decades, the internet has become the most influential medium for information seeking globally. The Pew Research Center indicates that approximately 80% of the adult population in the United States regularly use the internet to acquire general information or understand unfamiliar topics [ 47 ]. For example, an individual contemplating an abortion might opt to seek guidance in web-based forums to avoid potential ostracism from friends and family. Similarly, a friend or family member of someone considering an abortion might turn to web-based forums to secure advice or perspectives on assisting their loved one. Discourse on such platforms is crucial, especially when addressing sensitive topics that many may feel uneasy discussing openly. This emphasizes the significance of the internet as a confidential and reliable resource for information and advice. Importantly, this also supports Reddit as a source of information for people seeking abortion-related counseling.

These excerpts, and others in our composite sample, illustrate that social networking websites serve as a potentially crucial source of information for some [ 48 ], offering insights and details that may be otherwise unavailable, including local and state resources for abortion. This finding becomes particularly salient in light of the overturning of Roe v. Wade , which marked the end of federal protections for abortion until viability [ 49 ]. In the wake of this decision, 24 states enacted bans with limited exceptions or additional restrictions on abortion—generally earlier in terms of weeks of gestation than previously occurring under Roe v. Wade [ 50 ]. For those residing in states where abortion transitioned from being broadly legal to almost entirely illegal, web-based resources may have played a pivotal role during instances of unplanned pregnancy, as observed previously [ 51 ]. Further research is imperative to assess the efficacy of Reddit and other social networking sites in offering support and resources on this and other health-related topics. Notably, this subreddit contained little to no expression about personal abortion beliefs and ideologies.

The r/AbortionDebate Subreddit and Discussions of Abortion Identity and Changing Views Over Time

We did not observe much information or support sharing in the r/AbortionDebate subreddit. Rather, content in this subreddit discussed values and beliefs about abortion across many domains, including ethical, moral, legal, and humanistic. In several circumstances, we observed complex and nuanced abortion perspectives that do not correspond neatly to prochoice or prolife frameworks—2 commonly used but contested abortion identity labels that outline personal abortion beliefs. For example, as many as half of the topics uncovered in r/AbortionDebate contained contradictory expressions regarding abortion and how the abortion debate was framed. These posts were broadly delineated as those deconstructing or debating prochoice and prolife movements and those explaining how circumstances contributed to moral and ethical shifts in abortion views, for example, in the following excerpts: “I was and will always be pro-choice, but my reaction was absolutely not [to abort a fetus with serious birth defects] even though I knew it was the right answer” and “I was pro-life and never thought I’d need Planned Parenthood until I did. My experience changed my opinion of them, but [I still wish] they didn’t primarily exist to perform abortions.” Here, the emphasis is far less on information or support sharing; rather, the purpose is to articulate personal views about abortion and defend them accordingly. These findings align with ongoing abortion attitude research citing complex or nuanced abortion views that do not neatly fit a singular label [ 52 - 54 ].

In addition to discussing and debating abortion values, we observed more combative content in the r/AbortionDebate subreddit. This is likely by design, namely to separate people seeking information about abortion from people looking to debate it [ 55 ]. Such differences between the r/Abortion and r/AbortionDebate subreddits were particularly evident in our sentiment and emotion analyses. For example, r/AbortionDebate yielded slightly more negative VADER affect scores and lower emotion analysis scores for fear. We attributed the more negative VADER scores to the often contentious exchanges among users (excerpt: “All these pro-choicers in here trying to lump us all as anti-women bigots”). We attributed the lower fear scores to the apparent use of r/AbortionDebate as a forum to discuss abortion views rather than to share information or narrative accounts about abortion. In other words, negative language was reflected via discourse in the r/AbortionDebate subreddit, as opposed to expressions of personal fears or concerns about abortion, which may have surfaced more in the r/Abortion subreddit. In this context, the r/AbortionDebate subreddit may be more useful for mining insights into abortion ideologies, particularly when examining precise factors about abortion, including moral and legal arguments, gestational limits, and others. However, to gain insights into how abortion, as a medical procedure, is communicated from a decision-making perspective, r/Abortion may be more informative.

We identified 2 main implications from the content differences observed in r/Abortion and r/AbortionDebate. First, opting for the right Reddit forum is critically important. Reddit’s structure—where users select forums based on interests or needs—is different from other social networking sites. For people looking for ideally accurate, impartial information about abortion, r/Abortion or similar subreddits are suitable. Meanwhile, r/AbortionDebate is better for those wanting to discuss and ponder the ethical aspects of abortion. However, this choice is dependent on knowing how Reddit works. We project that a significant proportion of people may join the wrong forum and get exposed to unintended outcomes and viewpoints owing to a lack of preexisting knowledge about Reddit and its operations. Second, our observations support the idea that Reddit’s higher moderation levels make it a valuable tool for social science research. Historically, Reddit has carried the reputation of fostering trolls and hate speech. However, for health content, subreddits tend to be more effectively moderated by content experts. As evidenced in our data, both subreddits seemed relatively free from hate speech and trolling because of this moderation, which is unique to Reddit compared with other social media platforms. Therefore, Reddit remains a fairly reliable platform for both users and researchers, especially in the wake of recent changes in APIs and data access on other platforms, including X (X Corp, formerly known as Twitter).

Social Media as a Resource and Triangulation Tool to Support Ongoing Quantitative and Qualitative Research on Abortion

Our findings, particularly those critiquing abortion identity labels or people explaining their contextual abortion beliefs, support extant research demonstrating that people’s attitudes toward abortion are complex. Notably, this larger body of research argues that abortion attitudes are not unidimensional or polar but rather vary along legal, moral, social, and other similar domains [ 2 , 3 , 56 , 57 ]. This work is composed of both quantitative (surveys) and qualitative (interviews) data collections, which collectively yield deep insights into social attitude formation in the United States and how beliefs vary based on context and other dimensions. Consistent with these studies, our results support the notion that abortion attitudes and abortion decision-making are not unidimensional but involve multiple co-occurring considerations.

The novel nature of social media as data adds additional validity to previous abortion attitude research. This is particularly salient regarding how our findings triangulate or corroborate previous research on abortion attitude complexity. Notably, by mining Reddit abortion forums, we observed at least two principal uses of these forums: (1) as a space to share narratives and resources about abortion and (2) as a dedicated channel to debate abortion views. For many, Reddit forums could be a place where people feel comfortable sharing or debating abortion views, although we acknowledge that more research in this area is needed. Furthermore, Reddit offers a somewhat anonymous space where people can gather the information they need about abortion or inform their perspectives on abortion. These shared Reddit perspectives, which are generally top of mind, spontaneous, and unprompted [ 58 ], may provide a window into collective abortion beliefs that support or refute previous findings from other conventional forms of data collection. Social media data have been similarly leveraged to corroborate findings on other social issues, including gun control [ 59 ], marriage equality [ 60 ], and vaccination mandates [ 61 ]. Therefore, we argue that social media can be a valuable source of data to help elucidate people’s opinions on relevant social issues.

Furthermore, we argue that national surveys, strategic qualitative interviews, and mass social media scrapes as data sources yield specific outcomes that, when combined, provide a robust and comprehensive portrait of social issues. Survey data, which are strengthened when participants are identified via probability-based sampling protocols [ 62 ], reveal nationwide associations between demographic variables and other variables of interest. Qualitative data can reveal insights into highly specific research questions, for example, whether changing auxiliary verbs leads to diverging responses about abortion beliefs [ 63 ]. Social media scrapes can offer insights at population-level scope and scale that support or contradict findings from previous studies [ 41 ]. Our Reddit data support previous findings from surveys and qualitative research, demonstrating how social media data can serve as a triangulation tool. We contend that further strategic combinations of social media mining with traditional quantitative and qualitative research can provide highly accurate portrayals of social views in the United States.

Limitations and Future Research

This study has several limitations that we hope to address in future research. First, although Reddit posts can be construed as qualitative data, we did not perform a formal qualitative analysis using these data. Owing to the scope of this study, we instead leveraged NLP algorithms to categorize and visualize all data simultaneously. In the future, researchers could perform detailed qualitative inquiries with these data, either with the entire data set or within one or several clusters, depending on the scope and research questions. Second, our study was limited to exploratory analyses. Although more refined algorithms could more effectively annotate and classify our data, we believed that these approaches would better serve as a follow-up to our exploratory approach to mining Reddit data. Future studies should consider using our data for more refined machine learning–driven or artificial intelligence–driven tasks. Finally, our study was limited by its relatively short timeframe (5 months); collecting data over a longer period may have yielded more nuanced findings.

Conclusions

With the decision in Dobbs v. Jackson Women’s Health Organization overturning Roe v. Wade , there is renewed attention to abortion as a contentious political and social issue. Despite abortion being an exceedingly complex topic, political debate and discussions about abortion are generally framed dichotomously as a support or opposition, or prolife or prochoice issue. However, extensive research indicates that public opinion about abortion does not ascribe neatly to that dichotomy and that circumstances beyond a person’s control may lead to shifts in views of abortion over time. Our research corroborates such findings that detail the myriad ways in which abortion attitudes are complex and contextual, beyond simple information-seeking. Furthermore, our findings provide evidence that social media data can be a helpful triangulation tool for public opinion survey research.

Data Availability

The data are currently stored in a secure GitHub repository and are available for further analysis upon request.

Conflicts of Interest

None declared.

  • Jozkowski KN, Crawford BL, Hunt ME. Complexity in attitudes toward abortion access: results from two studies. Sex Res Soc Policy. Mar 10, 2018;15(4):464-482. [ CrossRef ]
  • Jozkowski KN, Crawford BL, Turner RC, Lo WJ. Knowledge and sentiments of Roe v. Wade in the wake of justice Kavanaugh’s nomination to the U.S. Supreme Court. Sex Res Soc Policy. May 31, 2019;17(2):285-300. [ CrossRef ]
  • Jozkowski KN, Crawford BL, Willis M. Abortion complexity scores from 1972 to 2018: a cross-sectional time-series analysis using data from the general social survey. Sex Res Soc Policy. Mar 09, 2020;18(1):13-26. [ CrossRef ]
  • Maier JM, Jozkowski KN, Valdez D, Crawford BL, Turner RC, Lo WJ. Applicability of a salient belief elicitation to measure abortion beliefs. Am J Health Behav. Jan 01, 2021;45(1):81-94. [ CrossRef ] [ Medline ]
  • Hans JD, Kimberly C. Abortion attitudes in context: a multidimensional vignette approach. Soc Sci Res. Nov 2014;48:145-156. [ CrossRef ] [ Medline ]
  • Crawford BL, LaRoche KJ, Jozkowski KN. Examining abortion attitudes in the context of gestational age. Soc Sci Q. May 16, 2022;103(4):855-867. [ CrossRef ]
  • Smith TW. An evaluation of Spanish questions on the 2006 general social survey. NORC/University of Chicago. Mar 2007. URL: https:/​/gss.​norc.org/​Documents/​reports/​methodological-reports/​MR109%20An%20Evaluation%20of%20Spanish%​20Questions%20on%20the%202006%20General%20Social%20Survey.​pdf [accessed 2024-01-29]
  • Bowman K, Goldstein S. Attitudes about abortion: a comprehensive review of polls from the 1970s to today. American Enterprise Institute. Nov 2, 2021. URL: https:/​/www.​aei.org/​research-products/​report/​attitudes-about-abortion-a-​comprehensive-review-of-polls-from-the-1970s-to-today/​ [accessed 2022-07-21]
  • Doherty D. What can conjoint experiments tell us about Americans’ abortion attitudes? Am Politics Res. Jan 21, 2022;50(2):147-156. [ CrossRef ]
  • Jelen TG, Wilcox C. Causes and consequences of public attitudes toward abortion: a review and research agenda. Polit Res Q. Jul 02, 2016;56(4):489-500. [ CrossRef ]
  • Buyuker BE, LaRoche KJ, Bueno X, Jozkowski KN, Crawford BL, Turner RC, et al. A mixed-methods approach to understanding the disconnection between perceptions of abortion acceptability and support for Roe v. Wade among US adults. J Health Polit Policy Law. Aug 01, 2023;48(4):649-678. [ CrossRef ] [ Medline ]
  • Friedersdorf C. There are more than two sides to the abortion debate. The Atlantic. Dec 10, 2021. URL: https:/​/www.​theatlantic.com/​ideas/​archive/​2021/​12/​there-are-more-than-two-sides-to-the-abortion-debate/​620978/​ [accessed 2022-05-27]
  • Adamo C, Carpenter J. Sentiment and the belief in fake news during the 2020 presidential primaries. Oxf Open Econ. 2023;2:odad051. [ CrossRef ]
  • Milakovich ME, Wise JM. Internet technology as a global connector. In: Digital Learning. Cheltenham, UK. Edward Elgar Publishing; 2019. [ CrossRef ]
  • Perrin A. Social media usage: 2005-2015. Pew Research Center. Oct 8, 2015. URL: https://www.pewresearch.org/internet/2015/10/08/social-networking-usage-2005-2015/ [accessed 2024-01-29]
  • Bathina KC, Ten Thij M, Valdez D, Rutter LA, Bollen J. Declining well-being during the COVID-19 pandemic reveals US social inequities. PLoS One. Jul 8, 2021;16(7):e0254114. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Zafarani R, Abbasi MA, Liu H. Social Media Mining: An Introduction. Cambridge, MA. Cambridge University Press; 2014. URL: http://www.socialmediamining.info/SMM.pdf
  • Jacques L, Valley T, Zhao S, Lands M, Rivera N, Higgins JA. "I'm going to be forced to have a baby": a study of COVID-19 abortion experiences on Reddit. Perspect Sex Reprod Health. Jun 11, 2023;55(2):86-93. [ CrossRef ] [ Medline ]
  • Priya S, Sequeira R, Chandra J, Dandapat SK. Where should one get news updates: Twitter or Reddit. Online Soc Netw Media. Jan 2019;9:17-29. [ CrossRef ]
  • Ong E, Davis L, Sanchez A, Stohl HE, Nelson AL, Robinson N. A review of women’s unanswered questions following miscarriage on different social media platforms [A207]. Obstet Gynecol. May 2022;139:60S. [ CrossRef ]
  • Valdez D, Patterson MS. Computational analyses identify addiction help-seeking behaviors on the social networking website Reddit: insights into online social interactions and addiction support communities. PLOS Digit Health. Nov 2022;1(11):e0000143. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Sit M, Elliott SA, Wright KS, Scott SD, Hartling L. Youth mental health help-seeking information needs and experiences: a thematic analysis of Reddit posts. Youth Soc. Oct 29, 2022;56(1):24-41. [ CrossRef ]
  • Abavi R, Branston A, Mason R, Du Mont J. An exploration of sexual assault survivors' discourse online on help-seeking. Violence Vict. Feb 03, 2020;35(1):126-140. [ CrossRef ]
  • Ayers JW, Zhu Z, Harrigian K, Wightman GP, Dredze M, Strathdee SA, et al. Managing HIV during the COVID-19 pandemic: a study of help-seeking behaviors on a social media forum. AIDS Behav. Jul 21, 2023 (forthcoming). [ CrossRef ] [ Medline ]
  • Higgins J, Lands M, Valley T, Carpenter E, Jacques L. Real-time effects of payer restrictions on reproductive healthcare: a qualitative analysis of cost-related barriers and their consequences among U.S. abortion seekers on Reddit. Int J Environ Res Public Health. Aug 26, 2021;18(17):9013. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Jacques L, Carpenter E, Valley T, Alvarez B, Higgins J. Medication or surgical abortion? An exploratory study of patient decision making on a popular social media platform. Am J Obstet Gynecol. Sep 2021;225(3):344-347. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Richards NK, Masud A, Arocha J. P28 Breaking down abortion barriers: Reddit users’ empowerment in absence of parental and medical support. Contraception. Oct 2020;102(4):286. [ CrossRef ]
  • Sawicki J, Ganzha M, Paprzycki M, Watanobe Y. Reddit CrosspostNet—studying Reddit communities with large-scale Crosspost graph networks. Algorithms. Sep 04, 2023;16(9):424. [ CrossRef ]
  • Lanthier S, Mason R, Logie CH, Myers T, Du Mont J. "Coming out of the closet about sexual assault": intersectional sexual assault stigma and (non) disclosure to formal support providers among survivors using Reddit. Soc Sci Med. Jul 2023;328:115978. [ CrossRef ] [ Medline ]
  • Richards NK, Masud A, Arocha JF. Online abortion empowerment in absence of parental and medical support: a thematic analysis of a reddit community’s contributions to decision-making and access. Research Square. Preprint posted online May 24, 2021. [ FREE Full text ]
  • Madan P. Web scraping Reddit with python: a complete guide with code. GoLogin. Mar 23, 2023. URL: https://gologin.com/blog/web-scraping-reddit [accessed 2023-09-26]
  • Blei DM, Ng AY, Jordan MI. Latent dirichlet allocation. J Mach Learn Res. 2003;3:993-1022. [ CrossRef ]
  • Resnik P, Armstrong W, Claudino L, Nguyen T, Nguyen VA, Boyd-Graber J. Beyond LDA: exploring supervised topic modeling for depression-related language in Twitter. In: Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality. Presented at: 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality; June 5, 2015; Denver, CO. [ CrossRef ]
  • Egger R, Yu J. A topic modeling comparison between LDA, NMF, Top2Vec, and BERTopic to demystify Twitter posts. Front Sociol. May 6, 2022;7:886498. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Wolf T, Debut L, Sanh V, Chaumond J, Delangue C, Moi A, et al. Transformers: state-of-the-art natural language processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. Presented at: 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations; November 16-20, 2020; Online. [ CrossRef ]
  • Drikvandi R, Lawal O. Sparse principal component analysis for natural language processing. Ann Data Sci. May 18, 2020;10(1):25-41. [ CrossRef ]
  • Stewart G, Al-Khassaweneh M. An implementation of the HDBSCAN* clustering algorithm. Appl Sci. Feb 25, 2022;12(5):2405. [ CrossRef ]
  • Kim SW, Gil JM. Research paper classification systems based on TF-IDF and LDA schemes. Hum Cent Comput Inf Sci. Aug 26, 2019;9:30. [ CrossRef ]
  • Hutto C, Gilbert E. VADER: a parsimonious rule-based model for sentiment analysis of social media text. Proc Int AAAI Conf Web Soc Media. May 16, 2014;8(1):216-225. [ FREE Full text ] [ CrossRef ]
  • Aslam N, Rustam F, Lee E, Washington PB, Ashraf I. Sentiment analysis and emotion detection on cryptocurrency related tweets using ensemble LSTM-GRU model. IEEE Access. 2022;10:39313-39324. [ FREE Full text ] [ CrossRef ]
  • Valdez D, Ten Thij M, Bathina K, Rutter LA, Bollen J. Social media insights into US mental health during the COVID-19 pandemic: longitudinal analysis of Twitter data. J Med Internet Res. Dec 14, 2020;22(12):e21418. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Adarsh R, Patil A, Rayar S, Veena KM. Comparison of VADER and LSTM for sentiment analysis. Int J Recent Technol Eng. Mar 2019;7(6):543. [ FREE Full text ]
  • Nesca M, Katz A, Leung C, Lix L. A scoping review of preprocessing methods for unstructured text data to assess data quality. Int J Popul Data Sci. 2022;7(1) [ CrossRef ]
  • Hertling S, Portisch J, Paulheim H. KERMIT -- a transformer-based approach for knowledge graph matching. arXiv. Preprint posted online April 29, 2022. [ FREE Full text ] [ CrossRef ]
  • O’Callaghan D, Greene D, Carthy J, Cunningham P. An analysis of the coherence of descriptors in topic modeling. Expert Syst Appl. Aug 2015;42(13):5645-5657. [ CrossRef ]
  • Szymkowiak A, Melović B, Dabić M, Jeganathan K, Kundi GS. Information technology and Gen Z: the role of teachers, the internet, and technology in the education of young people. Technol Soc. May 2021;65:101565. [ CrossRef ]
  • Auxier B, Anderson M. Social media use in 2021. Pew Research Center. Apr 7, 2021. URL: https://www.pewresearch.org/internet/2021/04/07/social-media-use-in-2021/ [accessed 2023-03-20]
  • Frey E, Bonfiglioli C, Brunner M, Frawley J. Parents' use of social media as a health information source for their children: a scoping review. Acad Pediatr. May 2022;22(4):526-539. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Crawford BL, Simmons MK, Turner RC, Lo WJ, Jozkowski KN. Perceptions of abortion access across the United States prior to the Dobbs v. Jackson Women's Health Organization decision: results from a national survey. Perspect Sex Reprod Health. Sep 20, 2023;55(3):153-164. [ CrossRef ] [ Medline ]
  • Tracking abortion bans across the country. The New York Times. URL: https://www.nytimes.com/interactive/2022/us/abortion-laws-roe-v-wade.html [accessed 2023-09-26]
  • Reis BY, Brownstein JS. Measuring the impact of health policies using internet search patterns: the case of abortion. BMC Public Health. Aug 25, 2010;10:514. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Kim T, Steinberg JR. Individual changes in abortion knowledge and attitudes. Soc Sci Med. Mar 2023;320:115722. [ CrossRef ] [ Medline ]
  • Bueno X, Asamoah NA, LaRoche KJ, Dennis B, Crawford BL, Turner RC, et al. People's perception of changes in their abortion attitudes over the life course: a mixed methods approach. Adv Life Course Res. Sep 2023;57:100558. [ CrossRef ] [ Medline ]
  • Jozkowski KN, Mena-Meléndez L, Crawford BL, Turner RC. Abortion stigma: attitudes toward abortion responsibility, illegal abortion, and perceived punishments of “illegal abortion”. Psychol Women Q. Jul 04, 2023;47(4):443-461. [ CrossRef ]
  • Shen Q, Rosé CP. A tale of two subreddits: measuring the impacts of quarantines on political engagement on Reddit. Proc Int AAAI Conf Web Soc Media. May 31, 2022;16(1):932-943. [ CrossRef ]
  • Crawford BL, Jozkowski KN, Turner RC, Lo WJ. Examining the relationship between Roe v. Wade knowledge and sentiment across political party and abortion identity. Sex Res Soc Policy. May 28, 2021;19(3):837-848. [ CrossRef ]
  • LaRoche KJ, Jozkowski KN, Crawford BL, Haus KR. Attitudes of US adults toward using telemedicine to prescribe medication abortion during COVID-19: a mixed methods study. Contraception. Jul 2021;104(1):104-110. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Kulkarni V, Kern ML, Stillwell D, Kosinski M, Matz S, Ungar L, et al. Latent human traits in the language of social media: an open-vocabulary approach. PLoS One. Nov 28, 2018;13(11):e0201703. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Dowler K. Media influence on attitudes toward guns and gun control. Am J Crim Just. Mar 2002;26(2):235-247. [ CrossRef ]
  • O'Connor C. 'Appeals to nature' in marriage equality debates: a content analysis of newspaper and social media discourse. Br J Soc Psychol. Sep 27, 2017;56(3):493-514. [ CrossRef ] [ Medline ]
  • Chen L, Ling Q, Cao T, Han K. Mislabeled, fragmented, and conspiracy-driven: a content analysis of the social media discourse about the HPV vaccine in China. Asian J Commun. Sep 08, 2020;30(6):450-469. [ CrossRef ]
  • Catania JA, Dolcini MM, Orellana R, Narayanan V. Nonprobability and probability-based sampling strategies in sexual science. J Sex Res. 2015;52(4):396-411. [ CrossRef ] [ Medline ]
  • Maier JM, Jozkowski KN, Montenegro MS, Willis M, Turner RC, Crawford BL, et al. Examining auxiliary verbs in a salient belief elicitation. Health Behav Policy Rev. Jul 2021;8(4):374-393. [ CrossRef ]

Abbreviations

API: application programming interface
BERT: Bidirectional Encoder Representations from Transformers
LDA: latent Dirichlet allocation
NLP: natural language processing
PRAW: Python Reddit Application Programming Interface Wrapper
VADER: Valence Aware Dictionary and Sentiment Reasoner

Edited by A Mavragani; submitted 18.03.23; peer-reviewed by L Jacques, T Zhang; comments to author 27.07.23; revised version received 27.09.23; accepted 20.12.23; published 14.02.24.

©Danny Valdez, Lucrecia Mena-Meléndez, Brandon L Crawford, Kristen N Jozkowski. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 14.02.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.



  21. Examples of good research questions

    Research questions are typically divided into three broad categories: qualitative, quantitative, and mixed-method. ... Comparative research questions compare two or more groups according to specific criteria and analyze their similarities and differences. ... A qualitative research study based on this question could extrapolate what visitors on ...

  22. Types of Research Questions

    There are three basic types of questions that research projects can address: Descriptive. When a study is designed primarily to describe what is going on or what exists. Public opinion polls that seek only to describe the proportion of people who hold various opinions are primarily descriptive in nature. For instance, if we want to know what ...

  23. Journal of Medical Internet Research

    Background: Attitudes toward abortion have historically been characterized via dichotomized labels, yet research suggests that these labels do not appropriately encapsulate beliefs on abortion. Rather, contexts, circumstances, and lived experiences often shape views on abortion into more nuanced and complex perspectives. Qualitative data have also been shown to underpin belief systems ...