An Effective Guide to Comparative Research Questions

Comparative research questions are a type of quantitative research question. They aim to gather information on the differences between two or more research objects based on different variables.

These kinds of questions assist the researcher in identifying the characteristics that distinguish one research subject from another.

A systematic investigation is built around research questions. Therefore, asking the right quantitative questions is key to gathering relevant and valuable information that will positively impact your work.

This article discusses the types of quantitative research questions with a particular focus on comparative questions.

What Are Quantitative Research Questions?

Quantitative research questions are unbiased queries that offer detailed information about a study topic. You can statistically analyze the numerical data yielded from quantitative research questions.

This type of research question aids in understanding the research issue by examining trends and patterns. The data collected can be generalized to the overall population and help make informed decisions. 

Types of Quantitative Research Questions

Quantitative research questions can be divided into three types, which are explained below:

Descriptive Research Questions

Researchers use descriptive research questions to collect numerical data about the traits and characteristics of study subjects. These questions mainly look for responses that bring to light the characteristic patterns of the existing research subjects.

However, note that descriptive questions are not concerned with the causes of the observed traits and features. Instead, they focus on the “what,” i.e., describing the topic of the research without taking its causes into account.

Examples of descriptive research questions:

  • How often do you use our keto diet app?
  • What price range are you ready to accept for this product?

Comparative Research Questions

Comparative research questions seek to identify differences between two or more distinct groups based on one or more dependent variables. These research questions aim to identify the features that differentiate one research subject from another while emphasizing their apparent similarities.

In market research surveys, asking comparative questions can reveal how your product or service compares to its competitors. It can also help you determine your product’s benefits and drawbacks to gain a competitive edge.

The steps in formulating comparative questions are as follows:

  • Choose the right starting phrase
  • Specify the dependent variable
  • Choose the groups that interest you
  • Identify the relevant adjoining text
  • Compose the comparative research question

Relationship-Based Research Questions

A relationship-based research question examines the nature of the association between research subjects of the same category. These kinds of research questions assist you in learning more about the type of relationship between two study variables.

Because they aim to distinctly define the connection between two variables, relationship-based research questions are also known as correlational research questions.

Examples of Comparative Research Questions

  • What is the difference between men’s and women’s daily caloric intake in London?
  • What is the difference in shopping attitudes between millennial adults and those born in 1980?
  • What is the difference in time spent on video games between people aged 15–17 and 18–21?
  • What is the difference in political views between Mexicans and Americans in the US?
  • What are the differences in Snapchat usage between American male and female university students?
  • What is the difference in views towards the security of online banking between young people and seniors?
  • What is the difference in attitudes toward rock music between Gen-Z and Millennials?
  • What are the differences between online and offline classes?
  • What are the differences between on-site and remote work?
  • What is the difference in weekly Facebook photo uploads between American male and female college students?
  • What are the differences between an Android and an Apple phone?

Comparative research questions are a great way to identify the difference between two study subjects of the same group.

Asking the right questions will help you gather useful and insightful data to conduct your research better. This article discusses the various aspects of quantitative research questions and their types to help you make data-driven and informed decisions when needed.

10 Research Question Examples to Guide your Research Project

Published on October 30, 2022 by Shona McCombes. Revised on October 19, 2023.

The research question is one of the most important parts of your research paper, thesis or dissertation. It’s important to spend some time assessing and refining your question before you get started.

The exact form of your question will depend on a few things, such as the length of your project, the type of research you’re conducting, the topic, and the research problem. However, all research questions should be focused, specific, and relevant to a timely social or scholarly issue.

Once you’ve read our guide on how to write a research question, you can use these examples to craft your own.

Note that the design of your research question can depend on what method you are pursuing; qualitative, quantitative, and statistical research questions each take a somewhat different form.

Research Question Examples 🧑🏻‍🏫

25+ Practical Examples & Ideas To Help You Get Started 

By: Derek Jansen (MBA) | October 2023

A well-crafted research question (or set of questions) sets the stage for a robust study and meaningful insights.  But, if you’re new to research, it’s not always clear what exactly constitutes a good research question. In this post, we’ll provide you with clear examples of quality research questions across various disciplines, so that you can approach your research project with confidence!

Research Question Examples

  • Psychology research questions
  • Business research questions
  • Education research questions
  • Healthcare research questions
  • Computer science research questions

Examples: Psychology

Let’s start by looking at some examples of research questions that you might encounter within the discipline of psychology.

How does sleep quality affect academic performance in university students?

This question is specific to a population (university students) and looks at a direct relationship between sleep and academic performance, both of which are quantifiable and measurable variables.

What factors contribute to the onset of anxiety disorders in adolescents?

The question narrows down the age group and focuses on identifying multiple contributing factors. There are various ways in which it could be approached from a methodological standpoint, including both qualitatively and quantitatively.

Do mindfulness techniques improve emotional well-being?

This is a focused research question aiming to evaluate the effectiveness of a specific intervention.

How does early childhood trauma impact adult relationships?

This research question targets a clear cause-and-effect relationship over a long timescale, making it focused but comprehensive.

Is there a correlation between screen time and depression in teenagers?

This research question focuses on a current, widely discussed issue and a specific demographic, allowing for a focused investigation. The key variables are clearly stated within the question and can be measured and analysed (i.e., high feasibility).

Examples: Business/Management

Next, let’s look at some examples of well-articulated research questions within the business and management realm.

How do leadership styles impact employee retention?

This is an example of a strong research question because it directly looks at the effect of one variable (leadership styles) on another (employee retention), allowing for a strongly aligned methodological approach.

What role does corporate social responsibility play in consumer choice?

Current and precise, this research question can reveal how social concerns are influencing buying behaviour by way of a qualitative exploration.

Does remote work increase or decrease productivity in tech companies?

Focused on a particular industry and a hot topic, this research question could yield timely, actionable insights that would have high practical value in the real world.

How do economic downturns affect small businesses in the homebuilding industry?

Vital for policy-making, this highly specific research question aims to uncover the challenges faced by small businesses within a certain industry.

Which employee benefits have the greatest impact on job satisfaction?

By being straightforward and specific, answering this research question could provide tangible insights to employers.

Examples: Education

Next, let’s look at some potential research questions within the education, training and development domain.

How does class size affect students’ academic performance in primary schools?

This example research question targets two clearly defined variables, which can be measured and analysed relatively easily.

Do online courses result in better retention of material than traditional courses?

Timely, specific and focused, answering this research question can help inform educational policy and personal choices about learning formats.

What impact do US public school lunches have on student health?

Targeting a specific, well-defined context, the research could lead to direct changes in public health policies.

To what degree does parental involvement improve academic outcomes in secondary education in the Midwest?

This research question focuses on a specific context (secondary education in the Midwest) and has clearly defined constructs.

What are the negative effects of standardised tests on student learning within Oklahoma primary schools?

This research question has a clear focus (negative outcomes) and is narrowed into a very specific context.

Examples: Healthcare

Shifting to a different field, let’s look at some examples of research questions within the healthcare space.

What are the most effective treatments for chronic back pain amongst UK senior males?

Specific and solution-oriented, this research question focuses on clear variables and a well-defined context (senior males within the UK).

How do different healthcare policies affect patient satisfaction in public hospitals in South Africa?

This question has clearly defined variables and is narrowly focused in terms of context.

Which factors contribute to obesity rates in urban areas within California?

This question is focused yet broad, aiming to reveal several contributing factors for targeted interventions.

Does telemedicine provide the same perceived quality of care as in-person visits for diabetes patients?

Ideal for a qualitative study, this research question explores a single construct (perceived quality of care) within a well-defined sample (diabetes patients).

Which lifestyle factors have the greatest effect on the risk of heart disease?

This research question aims to uncover modifiable factors, offering preventive health recommendations.

Examples: Computer Science

Last but certainly not least, let’s look at a few examples of research questions within the computer science world.

What are the perceived risks of cloud-based storage systems?

Highly relevant in our digital age, this research question would align well with a qualitative interview approach to better understand what users feel the key risks of cloud storage are.

Which factors affect the energy efficiency of data centres in Ohio?

With a clear focus, this research question lays a firm foundation for a quantitative study.

How do TikTok algorithms impact user behaviour amongst new graduates?

While this research question is more open-ended, it could form the basis for a qualitative investigation.

What are the perceived risks and benefits of open-source software within the web design industry?

Practical and straightforward, the results could guide both developers and end-users in their choices.

Remember, these are just examples…

In this post, we’ve tried to provide a wide range of research question examples to help you get a feel for what research questions look like in practice. That said, it’s important to remember that these are just examples and don’t necessarily equate to good research topics. If you’re still trying to find a topic, check out our topic megalist for inspiration.

Types of quantitative research question

Dissertations that are based on a quantitative research design attempt to answer at least one quantitative research question. In some cases, these quantitative research questions will be followed by either research hypotheses or null hypotheses. However, this article focuses solely on quantitative research questions. Furthermore, since there is more than one type of quantitative research question that you can attempt to answer in a dissertation (i.e., descriptive research questions, comparative research questions and relationship-based research questions), we discuss each of these in this article.

If you do not know much about quantitative research and quantitative research questions at this stage, we would recommend that you first read the article, Quantitative research questions: What do I have to think about, as well as an overview article on types of variables, which will help to familiarise you with terms such as dependent and independent variable, as well as categorical and continuous variables [see the article: Types of variables].

The purpose of this article is to introduce you to the three different types of quantitative research question (i.e., descriptive, comparative and relationship-based research questions) so that you can understand what type(s) of quantitative research question you want to create in your dissertation. Each of these types of quantitative research question is discussed in turn:

  • Descriptive research questions
  • Comparative research questions
  • Relationship-based research questions

Descriptive research questions

Descriptive research questions simply aim to describe the variables you are measuring. When we use the word describe, we mean that these research questions aim to quantify the variables you are interested in. Think of research questions that start with words such as "How much?", "How often?", "What percentage?", and "What proportion?", but also sometimes questions starting "What is?" and "What are?". Often, descriptive research questions focus on only one variable and one group, but they can include multiple variables and groups. For example:

  • How many calories do American men and women consume per day?
  • What are the most important factors that influence the career choices of Australian university students?

In each of these example descriptive research questions, we are quantifying the variables we are interested in. However, the units that we use to quantify these variables will differ depending on what is being measured. For example, descriptive questions are often interested in frequencies (also known as counts), such as the number of calories consumed, photos uploaded, or comments left on other users' photos. In the case of the second question, What are the most important factors that influence the career choices of Australian university students?, we are interested in the number of times each factor (e.g., salary and benefits, career prospects, physical working conditions, etc.) was ranked on a scale of 1 to 10 (with 1 = least important and 10 = most important). We may then choose to examine this data by presenting the frequencies, as well as using a measure of central tendency and a measure of spread [see the section on Data Analysis to learn more about these and other statistical tests].
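As a concrete illustration, here is a minimal Python sketch (the response data are entirely hypothetical) of how answers to a descriptive question could be summarised using frequencies, a measure of central tendency, and a measure of spread:

```python
from collections import Counter
from statistics import mean, stdev

# Hypothetical responses to a descriptive question such as
# "How many photos do you upload per week?"
weekly_uploads = [3, 5, 2, 7, 5, 4, 5, 6, 2, 5]

frequencies = Counter(weekly_uploads)  # count of each distinct response value
print("Frequencies:", dict(frequencies))
print("Mean (central tendency):", mean(weekly_uploads))
print("Standard deviation (spread):", round(stdev(weekly_uploads), 2))
```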

However, it is also common when using descriptive research questions to measure percentages and proportions. For example:

  • What percentage of American men and women exceed their daily calorific allowance?

In terms of this descriptive research question about daily calorific intake, we are not necessarily interested in frequencies, or in using a measure of central tendency or measure of spread, but instead want to understand what percentage of American men and women exceed their daily calorific allowance. In this respect, this descriptive research question differs from the earlier question that asked: How many calories do American men and women consume per day? Whilst that question simply aims to measure the total number of calories (i.e., the How many calories part that starts the question), this question aims to measure excess; that is, what percentage of these two groups (i.e., American men and American women) exceed their daily calorific allowance, which is different for males (around 2500 calories per day) and females (around 2000 calories per day).

If you are performing a piece of descriptive, quantitative research for your dissertation, you are likely to need to set quite a number of descriptive research questions. However, if you are using an experimental or quasi-experimental research design, or a more involved relationship-based research design, you are more likely to use just one or two descriptive research questions as a means of providing background to the topic you are studying, helping to give additional context for the comparative research questions and/or relationship-based research questions that follow.

Comparative research questions

Comparative research questions aim to examine the differences between two or more groups on one or more dependent variables (although often just a single dependent variable). Such questions typically start by asking "What is the difference in?" a particular dependent variable (e.g., daily calorific intake) between two or more groups (e.g., American men and American women). For example: What is the difference in daily calorific intake between American men and women?

Groups reflect different categories of the independent variable you are measuring (e.g., American men and women = "gender"; Australian undergraduate and graduate students = "educational level"; pirated music that is freely distributed and pirated music that is purchased = "method of illegal music acquisition").

Comparative research questions also differ in terms of their relative complexity, by which we are referring to how many items/measures make up the dependent variable, or how many dependent variables are investigated. Indeed, the examples highlight the difference between very simple comparative research questions, where the dependent variable involves just a single measure/item (e.g., daily calorific intake), and potentially more complex questions, where the dependent variable is made up of multiple items (e.g., Facebook usage behaviour, including a wide range of items such as logins, weekly photo uploads, status changes, etc.), in which case each of these items may need to be written out as a separate dependent variable.

Overall, whilst the dependent variable(s) highlight what you are interested in studying (e.g., attitudes towards music piracy, perceptions towards Internet banking security), comparative research questions are particularly appropriate if your dissertation aims to examine the differences between two or more groups (e.g., men and women, adolescents and pensioners, managers and non-managers, etc.).
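To illustrate where a comparative question leads analytically, here is a minimal Python sketch (hypothetical data; it assumes the scipy library is available) comparing a single dependent variable, daily calorific intake, between two groups using an independent-samples t-test:

```python
from scipy.stats import ttest_ind

# Hypothetical daily calorific intakes for two groups
men = [2600, 2450, 2700, 2550, 2500, 2650]
women = [2000, 2100, 1950, 2050, 2150, 2000]

# Independent-samples t-test: is the difference between group means significant?
t_stat, p_value = ttest_ind(men, women)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A small p-value would suggest that the difference between the two group means is unlikely to be due to chance alone.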

Relationship research questions

Whilst we refer to this type of quantitative research question as a relationship-based research question, the word relationship should be treated simply as a useful way of describing the fact that these types of quantitative research question are interested in the causal relationships , associations , trends and/or interactions amongst two or more variables on one or more groups. We have to be careful when using the word relationship because in statistics, it refers to a particular type of research design, namely experimental research designs where it is possible to measure the cause and effect between two or more variables; that is, it is possible to say that variable A (e.g., study time) was responsible for an increase in variable B (e.g., exam scores). However, at the undergraduate and even master's level, dissertations rarely involve experimental research designs , but rather quasi-experimental and relationship-based research designs [see the section on Quantitative research designs ]. This means that you cannot often find causal relationships between variables, but only associations or trends .

However, when we write a relationship-based research question, we do not have to make this distinction between causal relationships, associations, trends and interactions (i.e., it is just something that you should keep in the back of your mind). Instead, we typically start a relationship-based quantitative research question with "What is the relationship?", usually followed by the word "between" or "amongst", then list the independent variables (e.g., gender) and dependent variables (e.g., attitudes towards music piracy), "amongst" or "between" the group(s) you are focusing on. For example:

  • What is the relationship between study time and exam scores amongst university students?
  • What is the relationship between gender and attitudes towards music piracy?

As the examples above highlight, relationship-based research questions are appropriate to set when we are interested in the relationship, association, trend, or interaction between one or more dependent (e.g., exam scores) and independent (e.g., study time) variables, whether on one or more groups (e.g., university students).
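As a sketch of what answering such a question can look like in practice (hypothetical data; scipy assumed available), the example below estimates the association between study time and exam scores, the two variables used above. Note that Pearson's r captures an association, not a causal relationship:

```python
from scipy.stats import pearsonr

# Hypothetical weekly study hours and corresponding exam scores
study_hours = [2, 4, 6, 8, 10, 12]
exam_scores = [55, 60, 68, 72, 80, 85]

# Pearson correlation: strength and direction of the linear association
r, p_value = pearsonr(study_hours, exam_scores)
print(f"r = {r:.2f}, p = {p_value:.4f}")
```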

The quantitative research design that we select subsequently determines whether we look for relationships , associations , trends or interactions . To learn how to structure (i.e., write out) each of these three types of quantitative research question (i.e., descriptive, comparative, relationship-based research questions), see the article: How to structure quantitative research questions .

Types of Research Questions

Research questions can be categorized into different types, depending on the type of research to be undertaken.

Qualitative questions concern broad areas or more specific areas of research and focus on discovering, explaining and exploring.  Types of qualitative questions include:

  • Exploratory Questions, which seek to understand without influencing the results. The objective is to learn more about a topic without bias or preconceived notions.
  • Predictive Questions, which seek to understand the intent or future outcome around a topic.
  • Interpretive Questions, which try to understand people’s behavior in a natural setting. The objective is to understand how a group makes sense of shared experiences with regard to various phenomena.

Quantitative questions prove or disprove a  researcher’s hypothesis and are constructed to express the relationship between variables  and whether this relationship is significant.  Types of quantitative questions include:

  • Descriptive questions, which are the most basic type of quantitative research question and seek to explain the when, where, why, or how something occurred.
  • Comparative questions, which are helpful when studying groups with dependent variables, where one variable is compared with another.
  • Relationship-based questions, which try to answer whether or not one variable has an influence on another. These types of questions are generally used in experimental research.

Research Question Frameworks

Research question frameworks have been designed to help structure research questions and clarify the main concepts. Not every question can fit perfectly into a framework, but using even just parts of a framework can help develop a well-defined research question. The framework to use depends on the type of question to be researched. There are over 25 research question frameworks available. The University of Maryland has a helpful table listing several of these frameworks, along with what the acronyms mean and the types of questions and disciplines each may be used for.

The process of developing a good research question involves taking your topic and breaking each aspect of it down into its component parts.

Booth, A., Noyes, J., Flemming, K., Moore, G., Tunçalp, Ö., & Shakibazadeh, E. (2019). Formulating questions to explore complex interventions within qualitative evidence synthesis. BMJ Global Health, 4(Suppl 1), e001107. (See supplementary data #1)

The "Well-Built Clinical Question": PICO(T)

One well-established framework that can be used both for refining questions and developing strategies is known as PICO(T). The PICO framework was designed primarily for questions that include interventions and comparisons; however, other types of questions may also be able to follow its principles. If the PICO(T) framework does not precisely fit your question, using its principles (see the alternative component suggestions) can help you to think about what you want to explore, even if you do not end up with a true PICO question.

A PICO(T) question has the following components:

  • P: The patient’s disorder, disease, or problem of interest / research object
  • I: The intervention, exposure, or finding under review / application of a theory or method
  • C: A comparison intervention or control (if applicable; not always present) / alternative theories or methods (or, in their absence, the null hypothesis)
  • O: The outcome(s) (desired or of interest) / knowledge generation
  • T: The time factor or period (if applicable)
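As a small illustration of how the framework breaks a question into components, here is a minimal Python sketch (the class and the example question are hypothetical, not part of any standard tool) that stores a PICO(T) question as a structured record:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PICOQuestion:
    population: str                  # P: patient, problem, or research object
    intervention: str                # I: intervention, exposure, or finding under review
    comparison: Optional[str]        # C: comparator (not always present)
    outcome: str                     # O: outcome(s) of interest
    timeframe: Optional[str] = None  # T: time factor or period, if any

# A hypothetical example question, broken into its PICO(T) components
question = PICOQuestion(
    population="adults with chronic back pain",
    intervention="physiotherapy",
    comparison="standard analgesic care",
    outcome="self-reported pain reduction",
    timeframe="12 weeks",
)
print(question)
```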

Keep in mind that solely using a tool will not enable you to design a good question. What is required is for you to think carefully about exactly what you want to study and precisely what you mean by each of the things that you think you want to study.

Rzany, & Bigby, M. (n.d.). Formulating well-built clinical questions. In Evidence-based Dermatology (pp. 27–30). Blackwell Pub/BMJ Books.

Nishikawa-Pacher, A. (2022). Research questions with PICO: A universal mnemonic. Publications, 10(3), 21.

How to Write Quantitative Research Questions: Types With Examples

For research to be effective, it is crucial to formulate the quantitative research questions correctly. Otherwise, you will not get the answers you are looking for.

Has it ever happened that you conducted a quantitative research study, only to find that the actual results were quite different from the ones you were expecting?

This could happen due to many factors like the unpredictable nature of respondents, errors in calculation, research bias, etc. However, your quantitative research usually does not provide reliable results when questions are not written correctly.

We get it! Structuring the quantitative research questions can be a difficult task.

Hence, in this blog, we will share a few bits of advice on how to write good quantitative research questions. We will also look at different types of quantitative research questions along with their examples.

Let’s start:

How to Write Quantitative Research Questions?

When you want to obtain actionable insight into the trends and patterns of the research topic to make sense of it, quantitative research questions are your best bet.

Being objective in nature, these questions provide you with detailed information about the research topic and help in collecting quantifiable data that can be easily analyzed. This data can be generalized to the entire population and help make data-driven and sound decisions.

Respondents find it easier to answer quantitative survey questions than qualitative questions. At the same time, researchers can also analyze them quickly using various statistical models.

However, when it comes to writing the quantitative research questions, one can get a little overwhelmed as the entire study depends on the types of questions used.

There is no “one good way” to prepare these questions. However, to design well-structured quantitative research questions, you can follow the four-step approach given below:

1. Select the Type of Quantitative Question

The first step is to determine which type of quantitative question you want to add to your study. There are three types of quantitative questions:

  • Descriptive
  • Comparative 
  • Relationship-based

This will help you choose the correct words and phrases while constructing the question. At the same time, it will also assist readers in understanding the question correctly.

2. Identify the Type of Variable

The second step involves identifying the type of variable you are trying to measure, manipulate, or control. Basically, there are two types of variables:

  • Independent variable (a variable that is being manipulated)
  • Dependent variable (outcome variable)

If you plan to use descriptive research questions, you have to deal with a number of dependent variables. However, if you plan to create comparative or relationship-based research questions, you will deal with both dependent and independent variables.

3. Select the Suitable Structure

The next step is determining the structure of the research question. It involves:

  • Identifying the components of the question: the type of dependent or independent variable and the group of interest (the group from which the researcher tries to draw conclusions about the population).
  • The number of different components used, i.e., how many variables and groups are being examined.
  • The order in which these are presented, for example, the independent variable before the dependent variable or vice versa.

4. Draft the Complete Research Question

The last step involves identifying the problem or issue that you are trying to address in the form of complete quantitative survey questions. Also, make sure to build an exhaustive list of response options so that your respondents can select the correct response. If you miss important answer options, the ones chosen by respondents may not accurately reflect their views.

Types of Quantitative Research Questions With Examples

Quantitative research questions are generally used to answer the “who” and “what” of the research topic. For quantitative research to be effective, it is crucial that the respondents are able to answer your questions concisely and precisely. With that in mind, let’s look in greater detail at the three types of formats you can use when preparing quantitative market research questions.

1. Descriptive

Descriptive research questions are used to collect participants’ opinions about the variable that you want to quantify. It is the most effortless way to measure the particular variable (single or multiple variables) you are interested in on a large scale. Usually, descriptive research questions begin with “how much,” “how often,” “what percentage,” “what proportion,” etc.

Examples of descriptive research questions include:

  • How often do you use our keto diet app?
  • What price range are you ready to accept for this product?

2. Comparative

Comparative research questions help you identify the difference between two or more groups based on one or more variables. In general, a comparative research question is used to quantify one variable; however, you can use two or more variables depending on your market research objectives.

Examples of comparative research questions include:

  • What is the difference between men’s and women’s daily caloric intake in London?
  • What are the differences between on-site and remote work?

3. Relationship-based

Relationship research questions are used to identify trends, causal relationships, or associations between two or more variables. It is not vital to distinguish between causal relationships, trends, or associations while using these types of questions. These questions begin with “What is the relationship” between independent and dependent variables, amongst or between two or more groups.

Examples of relationship-based quantitative questions include:

  • What is the relationship between study time and exam scores amongst university students?
  • What is the relationship between social media use and mental health?

Ready to Write Your Quantitative Research Questions?

So, there you have it. It was all about quantitative research question types and their examples. By now, you must have figured out a way to write quantitative research questions for your survey to collect actionable customer feedback.

Now, the only thing you need is a good survey maker tool, like ProProfs Survey Maker, that will streamline the process of designing and conducting your surveys. You also get access to various survey question types, both qualitative and quantitative, that you can add to any kind of survey, along with professionally designed survey templates.

Research Questions – Types, Examples and Writing Guide

Definition:

Research questions are the specific questions that guide a research study or inquiry. These questions help to define the scope of the research and provide a clear focus for the study. Research questions are usually developed at the beginning of a research project and are designed to address a particular research problem or objective.

Types of Research Questions

Types of Research Questions are as follows:

Descriptive Research Questions

These aim to describe a particular phenomenon, group, or situation. For example:

  • What are the characteristics of the target population?
  • What is the prevalence of a particular disease in a specific region?

Exploratory Research Questions

These aim to explore a new area of research or generate new ideas or hypotheses. For example:

  • What are the potential causes of a particular phenomenon?
  • What are the possible outcomes of a specific intervention?

Explanatory Research Questions

These aim to understand the relationship between two or more variables or to explain why a particular phenomenon occurs. For example:

  • What is the effect of a specific drug on the symptoms of a particular disease?
  • What are the factors that contribute to employee turnover in a particular industry?

Predictive Research Questions

These aim to predict a future outcome or trend based on existing data or trends. For example:

  • What will be the future demand for a particular product or service?
  • What will be the future prevalence of a particular disease?

Evaluative Research Questions

These aim to evaluate the effectiveness of a particular intervention or program. For example:

  • What is the impact of a specific educational program on student learning outcomes?
  • What is the effectiveness of a particular policy or program in achieving its intended goals?

How to Choose Research Questions

Choosing research questions is an essential part of the research process and involves careful consideration of the research problem, objectives, and design. Here are some steps to consider when choosing research questions:

  • Identify the research problem: Start by identifying the problem or issue that you want to study. This could be a gap in the literature, a social or economic issue, or a practical problem that needs to be addressed.
  • Conduct a literature review: Conducting a literature review can help you identify existing research in your area of interest and can help you formulate research questions that address gaps or limitations in the existing literature.
  • Define the research objectives: Clearly define the objectives of your research. What do you want to achieve with your study? What specific questions do you want to answer?
  • Consider the research design: Consider the research design that you plan to use. This will help you determine the appropriate types of research questions to ask. For example, if you plan to use a qualitative approach, you may want to focus on exploratory or descriptive research questions.
  • Ensure that the research questions are clear and answerable: Your research questions should be clear and specific, and should be answerable with the data that you plan to collect. Avoid asking questions that are too broad or vague.
  • Get feedback: Get feedback from your supervisor, colleagues, or peers to ensure that your research questions are relevant, feasible, and meaningful.

How to Write Research Questions

Guide for Writing Research Questions:

  • Start with a clear statement of the research problem: Begin by stating the problem or issue that your research aims to address. This will help you to formulate focused research questions.
  • Use clear language: Write your research questions in clear and concise language that is easy to understand. Avoid using jargon or technical terms that may be unfamiliar to your readers.
  • Be specific: Your research questions should be specific and focused. Avoid broad questions that are difficult to answer. For example, instead of asking “What is the impact of climate change on the environment?” ask “What are the effects of rising sea levels on coastal ecosystems?”
  • Use appropriate question types: Choose the appropriate question types based on the research design and objectives. For example, if you are conducting a qualitative study, you may want to use open-ended questions that allow participants to provide detailed responses.
  • Consider the feasibility of your questions: Ensure that your research questions are feasible and can be answered with the resources available. Consider the data sources and methods of data collection when writing your questions.
  • Seek feedback: Get feedback from your supervisor, colleagues, or peers to ensure that your research questions are relevant, appropriate, and meaningful.

Examples of Research Questions

Some Examples of Research Questions with Research Titles:

Research Title: The Impact of Social Media on Mental Health

  • Research Question: What is the relationship between social media use and mental health, and how does this impact individuals’ well-being?

Research Title: Factors Influencing Academic Success in High School

  • Research Question: What are the primary factors that influence academic success in high school, and how do they contribute to student achievement?

Research Title: The Effects of Exercise on Physical and Mental Health

  • Research Question: What is the relationship between exercise and physical and mental health, and how can exercise be used as a tool to improve overall well-being?

Research Title: Understanding the Factors that Influence Consumer Purchasing Decisions

  • Research Question: What are the key factors that influence consumer purchasing decisions, and how do these factors vary across different demographics and products?

Research Title: The Impact of Technology on Communication

  • Research Question: How has technology impacted communication patterns, and what are the effects of these changes on interpersonal relationships and society as a whole?

Research Title: Investigating the Relationship between Parenting Styles and Child Development

  • Research Question: What is the relationship between different parenting styles and child development outcomes, and how do these outcomes vary across different ages and developmental stages?

Research Title: The Effectiveness of Cognitive-Behavioral Therapy in Treating Anxiety Disorders

  • Research Question: How effective is cognitive-behavioral therapy in treating anxiety disorders, and what factors contribute to its success or failure in different patients?

Research Title: The Impact of Climate Change on Biodiversity

  • Research Question: How is climate change affecting global biodiversity, and what can be done to mitigate the negative effects on natural ecosystems?

Research Title: Exploring the Relationship between Cultural Diversity and Workplace Productivity

  • Research Question: How does cultural diversity impact workplace productivity, and what strategies can be employed to maximize the benefits of a diverse workforce?

Research Title: The Role of Artificial Intelligence in Healthcare

  • Research Question: How can artificial intelligence be leveraged to improve healthcare outcomes, and what are the potential risks and ethical concerns associated with its use?

Applications of Research Questions

Here are some of the key applications of research questions:

  • Defining the scope of the study: Research questions help researchers to narrow down the scope of their study and identify the specific issues they want to investigate.
  • Developing hypotheses: Research questions often lead to the development of hypotheses, which are testable predictions about the relationship between variables. Hypotheses provide a clear and focused direction for the study.
  • Designing the study: Research questions guide the design of the study, including the selection of participants, the collection of data, and the analysis of results.
  • Collecting data: Research questions inform the selection of appropriate methods for collecting data, such as surveys, interviews, or experiments.
  • Analyzing data: Research questions guide the analysis of data, including the selection of appropriate statistical tests and the interpretation of results.
  • Communicating results: Research questions help researchers to communicate the results of their study in a clear and concise manner. The research questions provide a framework for discussing the findings and drawing conclusions.

Characteristics of Research Questions

Characteristics of Research Questions are as follows:

  • Clear and Specific: A good research question should be clear and specific. It should clearly state what the research is trying to investigate and what kind of data is required.
  • Relevant: The research question should be relevant to the study and should address a current issue or problem in the field of research.
  • Testable: The research question should be testable through empirical evidence. It should be possible to collect data to answer the research question.
  • Concise: The research question should be concise and focused. It should not be too broad or too narrow.
  • Feasible: The research question should be feasible to answer within the constraints of the research design, time frame, and available resources.
  • Original: The research question should be original and should contribute to the existing knowledge in the field of research.
  • Significant: The research question should have significance and importance to the field of research. It should have the potential to provide new insights and knowledge to the field.
  • Ethical: The research question should be ethical and should not cause harm to any individuals or groups involved in the study.

Purpose of Research Questions

Research questions are the foundation of any research study as they guide the research process and provide a clear direction to the researcher. The purpose of research questions is to identify the scope and boundaries of the study, and to establish the goals and objectives of the research.

The main purpose of research questions is to help the researcher to focus on the specific area or problem that needs to be investigated. They enable the researcher to develop a research design, select the appropriate methods and tools for data collection and analysis, and to organize the results in a meaningful way.

Research questions also help to establish the relevance and significance of the study. They define the research problem, and determine the research methodology that will be used to address the problem. Research questions also help to determine the type of data that will be collected, and how it will be analyzed and interpreted.

Finally, research questions provide a framework for evaluating the results of the research. They help to establish the validity and reliability of the data, and provide a basis for drawing conclusions and making recommendations based on the findings of the study.

Advantages of Research Questions

There are several advantages of research questions in the research process, including:

  • Focus: Research questions help to focus the research by providing a clear direction for the study. They define the specific area of investigation and provide a framework for the research design.
  • Clarity: Research questions help to clarify the purpose and objectives of the study, which can make it easier for the researcher to communicate the research aims to others.
  • Relevance: Research questions help to ensure that the study is relevant and meaningful. By asking relevant and important questions, the researcher can ensure that the study will contribute to the existing body of knowledge and address important issues.
  • Consistency: Research questions help to ensure consistency in the research process by providing a framework for the development of the research design, data collection, and analysis.
  • Measurability: Research questions help to ensure that the study is measurable by defining the specific variables and outcomes that will be measured.
  • Replication: Research questions help to ensure that the study can be replicated by providing a clear and detailed description of the research aims, methods, and outcomes. This makes it easier for other researchers to replicate the study and verify the results.

Limitations of Research Questions

Limitations of Research Questions are as follows:

  • Subjectivity: Research questions are often subjective and can be influenced by personal biases and perspectives of the researcher. This can lead to a limited understanding of the research problem and may affect the validity and reliability of the study.
  • Inadequate scope: Research questions that are too narrow in scope may limit the breadth of the study, while questions that are too broad may make it difficult to focus on specific research objectives.
  • Unanswerable questions: Some research questions may not be answerable due to the lack of available data or limitations in research methods. In such cases, the research question may need to be rephrased or modified to make it more answerable.
  • Lack of clarity: Research questions that are poorly worded or ambiguous can lead to confusion and misinterpretation. This can result in incomplete or inaccurate data, which may compromise the validity of the study.
  • Difficulty in measuring variables: Some research questions may involve variables that are difficult to measure or quantify, making it challenging to draw meaningful conclusions from the data.
  • Lack of generalizability: Research questions that are too specific or limited in scope may not be generalizable to other contexts or populations. This can limit the applicability of the study’s findings and restrict its broader implications.

Research Questions: Definitions, Types + [Examples]

Research questions lie at the core of systematic investigation because recording accurate research outcomes depends on asking the right questions. Asking the right questions when conducting research can help you collect relevant and insightful information that positively influences your work.

The right research questions are typically easy to understand, straight to the point, and engaging. In this article, we will share tips on how to create the right research questions and also show you how to create and administer an online questionnaire with Formplus.

What is a Research Question? 

A research question is a specific inquiry that the research seeks to answer. It resides at the core of systematic investigation and it helps you to clearly define a path for the research process.

A research question is usually the first step in any research project. Basically, it is the primary interrogation point of your research and it sets the pace for your work.  

Typically, a research question focuses on the research, determines the methodology and hypothesis, and guides all stages of inquiry, analysis, and reporting. With the right research questions, you will be able to gather useful information for your investigation. 

Types of Research Questions 

Research questions are broadly categorized into two: qualitative research questions and quantitative research questions. Qualitative and quantitative research questions can be used independently and co-dependently, in line with the overall focus and objectives of your research.

If your research aims at collecting quantifiable data, you will need to make use of quantitative research questions. On the other hand, qualitative questions help you to gather qualitative data concerning the perceptions and observations of your research subjects.

Qualitative Research Questions  

A qualitative research question is a type of systematic inquiry that aims at collecting qualitative data from research subjects. The aim of qualitative research questions is to gather non-statistical information pertaining to the experiences, observations, and perceptions of the research subjects in line with the objectives of the investigation. 

Types of Qualitative Research Questions  

  • Ethnographic Research Questions

As the name clearly suggests, ethnographic research questions are inquiries presented in ethnographic research. Ethnographic research is a qualitative research approach that involves observing variables in their natural environments or habitats in order to arrive at objective research outcomes. 

These research questions help the researcher to gather insights into the habits, dispositions, perceptions, and behaviors of research subjects as they interact in specific environments. 

Ethnographic research questions can be used in education, business, medicine, and other fields of study, and they are very useful in contexts aimed at collecting in-depth, specific information that is peculiar to the research variables. For instance, asking educational ethnographic research questions can help you understand how pedagogy affects classroom relations and behaviors.

This type of research question can be administered physically through one-on-one interviews, naturalism (live and work), and participant observation methods. Alternatively, the researcher can ask ethnographic research questions via online surveys and questionnaires created with Formplus.  

Examples of Ethnographic Research Questions

  • Why do you use this product?
  • Have you noticed any side effects since you started using this drug?
  • Does this product meet your needs?


  • Case Studies

A case study is a qualitative research approach that involves carrying out a detailed investigation into a research subject(s) or variable(s). In the course of a case study, the researcher gathers a range of data from multiple sources of information via different data collection methods, and over a period of time. 

The aim of a case study is to analyze specific issues within definite contexts and arrive at detailed research subject analyses by asking the right questions. This research method can be explanatory, descriptive, or exploratory depending on the focus of your systematic investigation or research.

An explanatory case study is one that seeks to gather information on the causes of real-life occurrences. This type of case study uses “how” and “why” questions in order to gather valid information about the causative factors of an event. 

Descriptive case studies are typically used in business research, and they aim at analyzing the impact of changing market dynamics on businesses. On the other hand, exploratory case studies aim at providing answers to “who” and “what” questions using data collection tools like interviews and questionnaires.

Some questions you can include in your case studies are: 

  • Why did you choose our services?
  • How has this policy affected your business output?
  • What benefits have you recorded since you started using our product?

  • Interviews

An interview is a qualitative research method that involves asking respondents a series of questions in order to gather information about a research subject. Interview questions can be close-ended or open-ended, and they prompt participants to provide valid information that is useful to the research.

An interview may also be structured, semi-structured, or unstructured, and this further influences the types of questions asked. Structured interviews are made up of more close-ended questions because they aim at gathering quantitative data, while unstructured interviews consist primarily of open-ended questions that allow the researcher to collect qualitative information from respondents.

You can conduct interview research by scheduling a physical meeting with respondents, through a telephone conversation, and via digital media and video conferencing platforms like Skype and Zoom. Alternatively, you can use Formplus surveys and questionnaires for your interview. 

Examples of interview questions include: 

  • What challenges did you face while using our product?
  • What specific needs did our product meet?
  • What would you like us to improve in our service delivery?


Quantitative Research Questions

Quantitative research questions are questions that are used to gather quantifiable data from research subjects. These types of research questions are usually more specific and direct because they aim at collecting information that can be measured; that is, statistical information. 

Types of Quantitative Research Questions

  • Descriptive Research Questions

Descriptive research questions are inquiries that researchers use to gather quantifiable data about the attributes and characteristics of research subjects. These types of questions primarily seek responses that reveal existing patterns in the nature of the research subjects. 

It is important to note that descriptive research questions are not concerned with the causative factors of the discovered attributes and characteristics. Rather, they focus on the “what”; that is, describing the subject of the research without paying attention to the reasons for its occurrence. 

Descriptive research questions are typically closed-ended because they aim at gathering definite and specific responses from research participants. Also, they can be used in customer experience surveys and market research to collect information about target markets and consumer behaviors. 

Descriptive Research Question Examples

  • How often do you make use of our fitness application?
  • How much would you be willing to pay for this product?


  • Comparative Research Questions

A comparative research question is a type of quantitative research question that is used to gather information about the differences between two or more research subjects across different variables. These types of questions help the researcher to identify distinct features that mark one research subject from the other while highlighting existing similarities. 

Asking comparative research questions in market research surveys can provide insights on how your product or service matches its competitors. In addition, it can help you to identify the strengths and weaknesses of your product for a better competitive advantage.  

The five steps involved in framing comparative research questions, illustrated in the sketch after this list, are:

  • Choose your starting phrase
  • Identify and name the dependent variable
  • Identify the groups you are interested in
  • Identify the appropriate adjoining text
  • Write out the comparative research question
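
To make these steps concrete, here is a minimal sketch (a hypothetical helper, not a Formplus feature) that assembles a comparative question from the components named above; the example values are illustrative.

```python
# Illustrative only: the five framing steps expressed as slots in a template.
def frame_comparative_question(starting_phrase: str,
                               dependent_variable: str,
                               groups: list[str],
                               adjoining_text: str) -> str:
    """Compose a comparative research question from its named components."""
    group_text = " and ".join(groups)
    return f"{starting_phrase} {dependent_variable} {adjoining_text} {group_text}?"

question = frame_comparative_question(
    starting_phrase="What is the difference in",
    dependent_variable="daily caloric intake",
    groups=["men", "women"],
    adjoining_text="between",
)
print(question)
# -> What is the difference in daily caloric intake between men and women?
```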

Comparative Research Question Samples 

  • What are the differences between a landline telephone and a smartphone?
  • What are the differences between work-from-home and on-site operations?


  • Relationship-based Research Questions  

Just like the name suggests, a relationship-based research question is one that inquires into the nature of the association between two research subjects within the same demographic. These types of research questions help you to gather information pertaining to the nature of the association between two research variables. 

Relationship-based research questions are also known as correlational research questions because they seek to clearly identify the link between two variables.


Examples of relationship-based research questions include: 

  • What is the relationship between purchasing power and the business site?
  • What is the relationship between the work environment and workforce turnover?
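
As a rough sketch of what answering such a question involves, the snippet below computes Pearson's r for two invented variables echoing the second sample question; the data are made up for illustration, and statistics.correlation requires Python 3.10+.

```python
# Quantifying the association a relationship-based question asks about.
from statistics import correlation  # available in Python 3.10+

work_environment_score = [3.1, 4.2, 2.5, 4.8, 3.9, 2.2]      # hypothetical survey ratings
workforce_turnover_pct = [18.0, 9.5, 22.1, 7.2, 11.0, 25.4]  # hypothetical turnover rates

r = correlation(work_environment_score, workforce_turnover_pct)
print(f"Pearson's r = {r:.2f}")  # a strongly negative r suggests an inverse link
```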


Examples of a Good Research Question

Since research questions lie at the core of any systematic investigations, it is important to know how to frame a good research question. The right research questions will help you to gather the most objective responses that are useful to your systematic investigation. 

A good research question is one that requires impartial responses and can be answered via existing sources of information. Also, a good research question seeks answers that actively contribute to a body of knowledge; hence, it is a question that is yet to be answered in your specific research context.

  • Open-Ended Questions

An open-ended question is a type of research question that does not restrict respondents to a set of premeditated answer options. In other words, it is a question that allows the respondent to freely express his or her perceptions and feelings towards the research subject.

Examples of Open-ended Questions

  • How do you deal with stress in the workplace?
  • What is a typical day at work like for you?

  • Close-ended Questions

A close-ended question is a type of survey question that restricts respondents to a set of predetermined answers, such as multiple-choice questions. Close-ended questions typically require yes-or-no or fixed-option answers and are commonly used in quantitative research to gather numerical data from research participants.

Examples of Close-ended Questions

  • Did you enjoy this event?
  • How likely are you to recommend our services?
  • Very Likely
  • Somewhat Likely

  • Likert Scale Questions

A Likert scale question is a type of close-ended question that is structured as a 3-point, 5-point, or 7-point psychometric scale. This type of question is used to measure the survey respondent’s disposition towards multiple variables and it can be unipolar or bipolar in nature.

Example of Likert Scale Questions

  • How satisfied are you with our service delivery?
  • Very dissatisfied
  • Not satisfied
  • Very satisfied
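
For analysis, Likert responses are typically coded numerically. Below is a minimal sketch of one common coding of a 5-point satisfaction scale; the mapping and responses are illustrative rather than a fixed standard.

```python
# One possible numeric coding of a 5-point Likert item (illustrative).
LIKERT_5 = {
    "Very dissatisfied": 1,
    "Dissatisfied": 2,
    "Neutral": 3,
    "Satisfied": 4,
    "Very satisfied": 5,
}

responses = ["Satisfied", "Very satisfied", "Neutral", "Satisfied", "Dissatisfied"]
scores = [LIKERT_5[r] for r in responses]
print(f"Mean satisfaction: {sum(scores) / len(scores):.1f} on a 1-5 scale")  # 3.6
```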

  • Rating Scale Questions

A rating scale question is a type of close-ended question that seeks to associate a specific qualitative measure (rating) with the different variables in research. It is commonly used in customer experience surveys, market research surveys, employee reviews, and product evaluations. 

Example of Rating Questions

  • How would you rate our service delivery?

Examples of a Bad Research Question

Knowing what bad research questions look like will help you avoid them in the course of your systematic investigation. These types of questions are usually unfocused and often introduce research biases that can negatively impact the outcomes of your systematic investigation.

  • Loaded Questions

A loaded question is a question that subtly presupposes one or more unverified assumptions about the research subject or participant. This type of question typically boxes the respondent into a corner because it carries implicit and explicit biases that prevent objective responses.

Example of Loaded Questions

  • Have you stopped smoking?
  • Where did you hide the money?

  • Negative Questions

A negative question is a type of question that is structured with an implicit or explicit negator. Negative questions can be misleading because they upturn the typical yes/no response order by requiring a negative answer for affirmation and an affirmative answer for negation. 

Examples of Negative Questions

  • Would you mind dropping by my office later today?
  • Didn’t you visit last week?

  • Leading Questions

A leading question is a type of survey question that nudges the respondent towards an already-determined answer. It is highly suggestive in nature and typically consists of biases and unverified assumptions that point toward its premeditated responses.

Examples of Leading Questions

  • If you enjoyed this service, would you be willing to try out our other packages?
  • Our product met your needs, didn’t it?

How to Use Formplus as Online Research Questionnaire Tool  

With Formplus, you can create and administer your online research questionnaire easily. In the form builder, you can add different form fields to your questionnaire and edit these fields to reflect specific research questions for your systematic investigation. 

Here is a step-by-step guide on how to create an online research questionnaire with Formplus: 

  • Sign in to your Formplus account, then click on the “create new form” button in your dashboard to access the Form builder.


  • In the form builder, add preferred form fields to your online research questionnaire by dragging and dropping them into the form. Add a title to your form in the title block. You can edit form fields by clicking on the “pencil” icon on the right corner of each form field.


  • Save the form to access the customization section of the builder. Here, you can tweak the appearance of your online research questionnaire by adding background images, changing the form font, and adding your organization’s logo.


  • Finally, copy your form link and share it with respondents. You can also use any of the multiple sharing options available.


Conclusion  

The success of your research starts with framing the right questions to help you collect the most valid and objective responses. Be sure to avoid bad research questions like loaded and negative questions that can be misleading and adversely affect your research data and outcomes. 

Your research questions should clearly reflect the aims and objectives of your systematic investigation while laying emphasis on specific contexts. To help you seamlessly gather responses for your research questions, you can create an online research questionnaire on Formplus.  


Comparative Case Studies: Methodological Discussion


Marcelo Parreira do Amaral

Part of the book series: Palgrave Studies in Adult Education and Lifelong Learning (PSAELL)


Case Study Research has a long tradition and has been used across the social sciences to approach research questions that command context sensitivity and attention to complexity while tapping multiple sources. Comparative Case Studies have been suggested as providing effective tools for understanding policy and practice along three different axes of social scientific research, namely horizontal (spaces), vertical (scales), and transversal (time). The chapter, first, sketches the methodological basis of case-based research in comparative studies as a point of departure, also highlighting the requirements for comparative research. Second, the chapter focuses on presenting and discussing recent developments in scholarship to provide insights on how comparative researchers, especially those investigating educational policy and practice in the context of globalization and internationalization, have suggested some critical rethinking of case study research to account more effectively for recent conceptual shifts in the social sciences related to culture, context, space and comparison. In a third section, it presents the approach to comparative case studies adopted in the European research project YOUNG_ADULLLT, which set out to research lifelong learning policies in their embeddedness in regional economies, labour markets and individual life projects of young adults. The chapter is rounded out with some summarizing and concluding remarks.

Keywords: case-based research; comparative case studies



1 Introduction

Exploring landscapes of lifelong learning in Europe is a daunting task, as it involves a great deal of difference across places and spaces; it entails attending to different levels and dimensions of the phenomena at hand, and not least it commands substantial sensibility to cultural and contextual idiosyncrasies. As such, case-based methodologies come to mind as tested methodological approaches to capturing and examining singular configurations such as the local settings in focus in this volume, in which lifelong learning policies for young people are explored in their multidimensional reality. The ensuing question, then, is how to ensure comparability across cases when departing from the assumption that cases are unique. The chapter draws on recent debates in Comparative and International Education (CIE) research that offer important insights into the issues involved and provide a heuristic approach to comparative case studies. Since the cases focused on in the chapters of this book all stem from a common European research project, the comparative case study methodology allows us to at once dive into the specifics and uniqueness of each case while paying attention to common threads at the national and international (European) levels.

The chapter, first, sketches the methodological basis of case-based research in comparative studies as a point of departure, also highlighting the requirements for comparative research. Second, the chapter focuses on presenting and discussing recent developments in scholarship to provide insights on how comparative researchers, especially those investigating educational policy and practice in the context of globalization and internationalization, have suggested some critical rethinking of case study research to account more effectively for recent conceptual shifts in the social sciences related to culture, context, space and comparison. In a third section, it presents the approach to comparative case studies adopted in the European research project YOUNG_ADULLLT, which set out to research lifelong learning policies in their embeddedness in regional economies, labour markets and individual life projects of young adults. The chapter is rounded out with some summarizing and concluding remarks.

2 Case-Based Research in Comparative Studies

In the past, comparativists often regarded case study research as an alternative to comparative studies proper. At the risk of oversimplification: methodological choices in comparative and international education (CIE) research, from the 1960s onwards, have fallen primarily on either single-country (small n) contextualized comparison, or on cross-national (usually large n, variable-based) decontextualized comparison (see Steiner-Khamsi, 2006a, 2006b, 2009). These two strands of research—notably characterized by Development and Area Studies on the one side and large-scale performance surveys of the International Association for the Evaluation of Educational Achievement (IEA) type on the other—demarcated their fields by how context and culture were accounted for and dealt with in the studies they produced. Since the turn of the century, though, comparativists have become more comfortable with case study methodology (see Little, 2000; Vavrus & Bartlett, 2006, 2009; Bartlett & Vavrus, 2017), and diagnoses of an “identity crisis” of the field due to a mass of single-country studies lacking comparison proper (see Schriewer, 1990; Wiseman & Anderson, 2013) have started to die away. Greater acceptance of and reliance on case-based methodology has been related to research on policy and practice in the context of globalization and coupled with the intention to better account for culture and context, generating scholarship that is critical of power structures, sensitive to alterity, and open to other ways of knowing.

The phenomena that have been coined “globalization” and “internationalization” have played, as mentioned, a central role in the critical rethinking of case study research. In researching education under conditions of globalization, scholars placed increasing attention on case-based approaches as opportunities for investigating the contemporary complexity of policy and practice. Further, scholarly debates in the social sciences and the humanities surrounding key concepts such as culture, context, space, and place, but also comparison, have contributed to a reconceptualization of case study methodology in CIE. In terms of the requirements for such an investigation, scholarship commands an adequate conceptualization that problematizes the objects of study and does not take them as “unproblematic”, “assum[ing] a constant shared meaning”; in short, as objects of study that are “fixed, abstract and absolute” (Fine, quoted in Dale & Robertson, 2009, p. 1114). Case study research is thus required to overcome methodological “isms” in its research conceptualization (see Dale & Robertson, 2009; Robertson & Dale, 2017; see also Lange & Parreira do Amaral, 2018).

In response to these requirements, the approaches to case study discussed in CIE depart from a conceptualization of the social world as always dynamic, emergent, somewhat in motion, and always contested. This view considers the fact that the social world is culturally produced and is never complete or at a standstill, which goes against an understanding of the case as something fixed or natural. Indeed, in the past cases have often been understood in almost naturalistic ways, as if they existed out there, waiting for researchers to “discover” them. Usually, definitions of case study also referred to inquiry that aims at elucidating features of a phenomenon to yield an understanding of why, how and with what consequences something happens. One can easily find examples of cases understood simply as sites to observe or measure variables—in a nomothetic cast—or examples where cases are viewed as specific and unique instances to be examined in the idiographic paradigm.

In contrast, rather than taking cases as pre-existing entities that are defined and selected as cases, recent case-oriented research has argued for a more emergent approach which recognizes that boundaries between phenomenon and context are often difficult to establish or overlap. For this reason, researchers are incited to see this as an exercise of “casing”, that is, of case construction. In this sense, cases are seen as complex systems (Ragin & Becker, 1992) and attention is devoted to the relationships between the parts and the whole, pointing to the relevance of configurations and constellations within as well as across cases in the explanation of complex and contingent phenomena. This is particularly relevant for multi-case, comparative research, since the constitution of the phenomena that will be defined as cases will differ. Setting boundaries will thus also require researchers to account for spatial, scalar (i.e., the level or levels with which a case is related) and temporal aspects.

Further, case-based research is also required to account for multiple contexts while not taking them for granted. One of the key theoretical and methodological consequences of globalization for CIE is that it requires us to recognize that it alters the nature and significance of what counts as context (see Parreira do Amaral, 2014). According to Dale (2015), designating a process, a type of event, or a particular organization as a context entails bestowing a particular significance on it, as a process or event capable of affecting other processes and events. The key point is that rather than being so intrinsically, or naturally, contexts are constructed as “contexts”. In comparative research, contexts have typically been seen as the place (or the variables) that enables us to explain why what happens in one case is different from what happens in another case; what counts as context is then seen as having the same effect everywhere, although the forms it takes vary substantially (see Dale, 2015). In more general terms, recent case study approaches aim at accounting for the increasing complexity of the contexts in which cases are embedded, which, in turn, is related to the increasing impact of globalization as the “context of contexts” (Dale, 2015, p. 181f; see also Carter & Sealey, 2013; Mjoset, 2013). They also aim at accounting for overlapping contexts. Here it is important to note that contexts are not only to be seen in spatio-geographical terms (i.e., local, regional, national, international); contexts may also be provided by different institutional and/or discursive settings that create varying opportunity structures (Dale & Parreira do Amaral, 2015; see also Chap. 2 in this volume). What one can call temporal contexts also play an important role, for what happens in the case unfolds as embedded not only in historical time but may be related to different temporalities (see the concept of “timespace” as discussed by Lingard & Thompson, 2016) and may thus be influenced by path dependence or by specific moments of crisis (Rhinard, 2019; see also McLeod, 2016). Moreover, in CIE research, the social-cultural production of the world is influenced by developments throughout the globe that take place in various places and on several scales, which in turn influence each other but, in the end, become locally relevant in different facets. As Bartlett and Vavrus write, “context is not a primordial or autonomous place; it is constituted by social interactions, political processes, and economic developments across scales and times” (Bartlett & Vavrus, 2017, p. 14). Indeed, in this sense, “context is not a container for activity, it is the activity” (Bartlett & Vavrus, 2017, p. 12, emphasis in orig.).

Also, dealing with the complexity of education policy and practice requires us to transcend the dichotomy of idiographic versus nomothetic approaches to causation. Here, it can be argued that case studies allow us to grasp and research the complexity of the world, offering conceptual and methodological tools to explore how phenomena viewed as cases “depend on all of the whole, the parts, the interactions among parts and whole, and the interactions of any system with other complex systems among which it is nested and with which it intersects” (Byrne, 2013, p. 2). The understanding of causation that undergirds recent developments in case-based research aims at generalization, yet it resists ambitions to establish universal laws in social scientific research. Focus is placed on processes, tracking the relevant factors, actors and features that help answer the “how” and “why” questions (Bartlett & Vavrus, 2017, p. 38ff), and on “causal mechanisms” as varying explanations of outcomes within and across cases, always contingent on interaction with other variables and dependent on contexts (see Byrne, 2013; Ragin, 2000). In short, the nature of causation underlying the recent case study approaches in CIE is configurational and not foundational.
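
As a loose illustration of configurational (rather than variable-by-variable) reasoning, the sketch below groups invented cases by their full configuration of conditions, in the spirit of the truth tables used in Ragin's qualitative comparative analysis; it is an illustration added here, not a procedure prescribed by the chapter.

```python
# Grouping cases by configurations of conditions: outcomes attach to whole
# configurations, not to single variables. All names and values are invented.
from collections import defaultdict

cases = {
    # case: (binary conditions, observed outcome)
    "Case A": ({"strong_network": 1, "regional_funding": 1, "crisis_context": 0}, 1),
    "Case B": ({"strong_network": 1, "regional_funding": 0, "crisis_context": 0}, 0),
    "Case C": ({"strong_network": 0, "regional_funding": 1, "crisis_context": 1}, 1),
    "Case D": ({"strong_network": 1, "regional_funding": 1, "crisis_context": 1}, 1),
}

truth_table = defaultdict(list)  # configuration -> cases sharing it
for name, (conditions, outcome) in cases.items():
    truth_table[tuple(sorted(conditions.items()))].append((name, outcome))

for config, members in truth_table.items():
    print(", ".join(f"{k}={v}" for k, v in config), "->", members)
```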

This is also in line with how CIE research regards education practice, research, and policy as socio-cultural practice. It refers to the production of social and cultural worlds through “social actors, with diverse motives, intentions, and levels of influence, [who] work in tandem with and/or in response to social forces” (Bartlett & Vavrus, 2017, p. 1). From this perspective, educational phenomena, such as policymaking, are seen as a “deeply political process of cultural production engaged in and shaped by social actors in disparate locations who exert incongruent amounts of influence over the design, implementation, and evaluation of policy” (Bartlett & Vavrus, 2017, p. 1f). Culture here is understood in non-static and complex ways that reinforce the “importance of examining processes of sense-making as they develop over time, in distinct settings, in relation to systems of power and inequality, and in increasingly interconnected conversation with actors who do not sit physically within the circle drawn around the traditional case” (Bartlett & Vavrus, 2017, p. 11, emphasis in orig.).

In sum, the approaches to case study put forward in CIE provide conceptual and methodological tools that allow for an analysis of education in the global context throughout scale, space, and time, which is always regarded as complexly integrated and never as isolated or independent. The following subsection discusses Comparative Case Studies (CCS) as suggested in recent comparative scholarship, which aims at attending to the methodological requirements discussed above by integrating horizontal, vertical, and transversal dimensions of comparison.

2.1 Comparative Case Studies: Horizontal, Vertical and Transversal Dimensions

Building on their previous work on vertical case studies (Bartlett & Vavrus, 2017; Vavrus & Bartlett, 2006, 2009), Frances Vavrus and Lesley Bartlett have proposed a comparative approach to case study research that aims at meeting the requirements of culture- and context-sensitive research discussed above.

As a research approach, CCS offers two theoretical-methodological lenses for researching education as a socio-cultural practice. These lenses represent different views on the research object and account for the complexity of education practice, policy, and research in globalized contexts. The first lens is “context-sensitive”: it focuses on how social practices and interactions constitute and produce social contexts. As quoted above, from the perspective of socio-cultural practice, “context is not a container for activity, it is the activity” (Bartlett & Vavrus, 2017, p. 12, emphasis in orig.). The settings that influence and condition educational phenomena are culturally produced in different and sometimes overlapping (spatial, institutional, discursive, temporal) contexts, as just mentioned. The second CCS lens is “culture-sensitive” and focuses on how socio-cultural practices produce social structures. As such, culture is a process that is emergent, dynamic, and constitutive of meaning-making as well as social structuration.

The CCS approach aims at studying educational phenomena throughout scale, time, and space by providing three axes for a “studying through” of the phenomena in question. As stated by Lesley Bartlett and Frances Vavrus with reference to comparative analyses of global education policy:

the horizontal axis compares how similar policies unfold in distinct locations that are socially produced […] and ‘complexly connected’ […]. The vertical axis insists on simultaneous attention to and across scales […]. The transversal comparison historically situates the processes or relations under consideration (Bartlett & Vavrus, 2017, p. 3, emphasis in orig.).

These three axes allow for a methodological conceptualization of “policy formation and appropriation across micro-, meso-, and macro levels” without theorizing these levels as distinct or unrelated (Bartlett & Vavrus, 2017, p. 4). Following Latour, they state:

the macro is neither “above” nor “below” the intersections but added to them as another of their connections’ […]. In CCS research, one would pay close attention to how actions at different scales mutually influence one another (Bartlett & Vavrus, 2017, p. 13f, emphasis in orig.)

Thus, these three axes contain

processes across space and time; and [the CCS as a research design] constantly compares what is happening in one locale with what has happened in other places and historical moments. These forms of comparison are what we call horizontal, vertical, and transversal comparisons (Bartlett & Vavrus, 2017, p. 11, emphasis in orig.)

In terms of the three axes along which comparison is organized, the authors state that horizontal comparison commands attention to how historical and contemporary processes have variously influenced the “cases”, which might be constructed by focusing on “people, groups of people, sites, institutions, social movements, partnerships, etc.” (Bartlett & Vavrus, 2017, p. 53). Horizontal comparisons eschew pressing categories resulting from one case onto others; this implies including multiple cases at the same scale in a comparative case study, while at the same time attending to “valuable contextual information” about each of them. Horizontal comparisons use units of analysis that are homologous, that is, equivalent in terms of shape, function, or institutional/organizational nature (for instance, schools, ministries, countries, etc.) (Bartlett & Vavrus, 2017, p. 53f). Similarly, comparative case studies may also entail tracing a phenomenon across sites, as in multi-sited ethnography (see Coleman & von Hellermann, 2012; Marcus, 1995).

Vertical comparison, in turn, does not simply imply the comparison of levels; rather, it involves analysing networks and their interrelationships at different scales. For instance, in the study of policymaking in a specific case, vertical comparison would consider how actors at different scales variably respond to a policy issued at another level—be it inter-/supranational or subnational. CCS assumes that their different appropriation of policy as discourse and as practice is often due to different histories of racial, ethnic, or gender politics in their communities, which appropriately complicates the notion of a single cultural group (Bartlett & Vavrus, 2017, p. 73f). Establishing what counts as context in such a study would be done “by tracing the formation and appropriation of a policy” at different scales, and “by tracing the processes by which actors and actants come into relationship with one another and form non-permanent assemblages aimed at producing, implementing, resisting, and appropriating policy to achieve particular aims” (Bartlett & Vavrus, 2017, p. 76). A further element is that, in this way, one may counter the common problem that comparison of cases (oftentimes countries) overemphasizes boundaries and treats them as separate or self-sustaining containers, when, in reality, actors and institutions at other levels/scales significantly impact policymaking (Bartlett & Vavrus, 2017).

In terms of the transversal axis of comparison, Bartlett and Vavrus argue that the social phenomena of interest in a case study have to be seen in light of their historical development (Bartlett & Vavrus, 2017, p. 93), since these “historical roots” impacted on them and this influence “continues to reverberate into the present, affecting economic relations and social issues such as migration and educational opportunities.” As such, understanding what goes on in a case requires us to “understand how it came to be in the first place” (Bartlett & Vavrus, 2017, p. 93). The authors argue:

history offers an extensive fount of evidence regarding how social institutions function and how social relations are similar and different around the world. Historical analysis provides an essential opportunity to contrast how things have changed over time and to consider what has remained the same in one locale or across much broader scales. Such historical comparison reveals important insights about the flexible cultural, social, political, and economic systems humans have developed and sustained over time (Bartlett & Vavrus, 2017, p. 94).

Further, time and space are intimately related, and studying the historical development of the social phenomena of interest in a case study “allows us to assess evidence and conflicting interpretations of a phenomenon,” but also to interrogate our own assumptions about them in contemporary times (Bartlett & Vavrus, 2017), thus analytically sharpening our historical analyses.

As argued by the authors, researching the global dimension of education practice, research or policy aims at a “studying through” of phenomena horizontally, vertically, and transversally. That is, comparative case study builds on an emergent research design and on a strong process orientation that aims at tracing not only “what”, but also “why” and “how” phenomena emerge and evolve. This approach entails “an open-ended, inductive approach to discover what […] meanings and influences are and how they are involved in these events and activities—an inherently processual orientation” (Bartlett & Vavrus, 2017, p. 7, emphasis in orig.).

The emergent research design and process orientation of the CCS relativizes a priori, somewhat static notions of case construction in CIE and emphasizes the idea of a processual “casing”. The process of casing put forward by CCS has to be understood as a dynamic and open-ended embedding of “cased” research phenomena within moments of scale, space, and time that produce varying sets of conditions or configurations.

In terms of comparison, the primary logic is well in line with more sophisticated approaches to comparison that do not simply establish relationships between observable facts or pre-existing cases; rather, the comparative logic aims at establishing “relations between sets of relationships”, as argued by Jürgen Schriewer:

[the] specific method of science dissociates comparison from its quasi-natural union with resemblances; the interest in identifying similarities shifts from the level of factual contents to the level of generalizable relationships. […] One of the primary ways of extending their scope, or examining their explanatory power, is the controlled introduction of varying sets of conditions. The logic of relating relationships, which distinguishes the scientific method of comparison, comes close to meeting these requirements by systematically exploring and analysing sociocultural differences with respect to scrutinizing the credibility of theories, models or constructs (Schriewer, 1990, p. 36).

The notion of establishing relations between sets of relationships makes it possible to treat cases as non-homogeneous (thus avoiding a universalizing notion of comparison); it establishes comparability not along similarity but on the basis of conceptual, functional and/or theoretical equivalences, and it focuses on reconstructing the “varying sets of conditions” that are seen as relevant in social scientific explanation and theorizing, to which comparative case studies may then contribute.

The following section presents the adaptation and application of a comparative case study approach in the YOUNG_ADULLLT research project.

3 Exploring Landscapes of Lifelong Learning through Case Studies

This section illustrates the use of comparative case studies by drawing on research conducted in the European research project upon which the chapters in this volume are based. The project departed from the observation that most current European lifelong learning (LLL) policies have been designed to create economic growth and, at the same time, guarantee social inclusion, and it argued that, while these objectives are complementary, they are neither linearly nor causally related; due to distinct orientations, different objectives, and temporal horizons, conflicts and ambiguities may arise. The project was designed as a mixed-method comparative study and aimed at results at the national, regional, and local levels, focusing in particular on policies targeting young adults in situations of near social exclusion. Using a multi-level approach with qualitative and quantitative methods, the project conducted, among other things, 18 local/regional case studies of lifelong learning policies through a multi-method and multi-level design (see Parreira do Amaral et al., 2020 for more information). The localisation of the cases in their contexts was carried out by identifying relevant areas in terms of spatial differentiation and the organisation of social and economic relations. The “functional regions” so defined allowed the project to focus on territorial units which played a central role within their areas, not necessarily overlapping with geographical and/or administrative borders (see Footnote 1).

Two main objectives guided the research: first, to analyse policies and programmes at the regional and local level by identifying policymaking networks that included all social actors involved in shaping, formulating, and implementing LLL policies for young adults; second, to recognize strengths and weaknesses (overlapping, fragmented or unfocused policies and projects), thus identifying different patterns of LLL policymaking at the regional level and investigating their integration with labour market, education and other social policies. The European research project focused predominantly on the differences between existing lifelong learning policies in terms of their objectives and orientations and questioned their impact on young adults’ life courses, especially those young adults who find themselves in vulnerable positions. What concerned the researchers primarily was the interaction between local institutional settings, education, labour markets, policymaking landscapes, and informal initiatives that together nurture the processes of lifelong learning. They argued that it is by inquiring into the interplay of these components that the regional and local contexts of lifelong learning policymaking can be better assessed and understood. In this regard, the multi-layered approach covered a wide range of actors and levels and aimed at securing compatibility throughout the different phases and parts of the research.

The multi-level approach adopted aimed at incorporating the different levels from transnational to regional/local to individual, that is, the different places, spaces, and levels with which policies are related. The multi-method design was used to bring together the results from the quantitative, qualitative and policy/document analyses (for a discussion: Parreira do Amaral, 2020).

Studying the complex relationships between lifelong learning (LLL) policymaking on the one hand, and young adults’ life courses on the other, requires a carefully established research approach. This task becomes even more challenging in the light of the diverse European countries and their still more complex local and regional structures and institutions. One possible way of designing a research framework able to deal with these circumstances clearly and coherently is to adopt a multi-level or multi-layered approach. This approach recognises multiple levels and patterns of analysis and enables researchers to structure the workflow according to various perspectives. It was this multi-layered approach that the research consortium of YOUNG_ADULLLT adopted and applied in its attempts to better understand policies supporting young people in their life course.

3.1 Constructing Case Studies

In constructing case studies, the project did not apply an instrumental approach focused on assessing “what worked (or not)”. Rather, consistent with Bartlett and Vavrus’s proposal (Bartlett & Vavrus, 2017), the project decided to “understand policy as a deeply political process of cultural production engaged in and shaped by social actors in disparate locations who exert incongruent amounts of influence over the design, implementation, and evaluation of policy” (Bartlett & Vavrus, 2017, p. 1f). This was done in order to enhance the interactive and relational dimension among actors and levels, as well as their embeddedness in local infrastructures (education, labour, social/youth policies), according to the project’s three theoretical perspectives. The analyses of the information and data integrated by the case study approach aimed at a cross-reading of the relations among the macro socio-economic dimensions, structural arrangements, governance patterns, addressee biographies and mainstream discourses that underlie the design and implementation of the LLL policies selected as case studies. The subjective dimensions of agency and sense-making animated these analyses, and the multi-level approach contextualized them from the local to the transnational level. Figure 3.1 below represents the analytical approach to the research material gathered in constructing the case studies. Specifically, it shows the different levels, from the transnational level down to the addressees.

Fig. 3.1 Multi-level and multi-method approach to case studies in YOUNG_ADULLLT. (Source: Palumbo et al., 2019)

The project partners aimed at a cross-dimensional construction of the case studies, which implied the possibility of different entry points, for instance moving the analytical perspective top-down or bottom-up, as well as shifting from left to right of the matrix and vice versa. Considering the “horizontal movement”, the multidimensional approach enabled taking into consideration the mutual influence and relations among the institutional, individual, and structural dimensions (which in the project corresponded to the theoretical frames of CPE, LCR, and GOV). In addition, the “vertical movement” from the transnational to the individual level and vice versa was meant to carefully carry out a “study of flows of influence, ideas, and actions through these levels” (Bartlett & Vavrus, 2017, p. 11), emphasizing the correspondences and divergences among the perspectives of different actors at different levels. The transversal dimension, that is, the historical process, focused on the period after the financial crisis of 2007/2008, as it impacted differently on the social and economic situations of young people, often resulting in harsher conditions and higher competition in education and labour markets, which also called for a reassessment of existing policies targeting young adults in the countries studied.

Concerning the analyses, a further step included the translation of the conceptual model illustrated in Fig. 3.1 above into a heuristic table used to systematically organize the empirical data collected and to guide the analyses of cases constructed as multi-level and multidimensional phenomena, allowing for the establishment of interlinkages and relationships. Through this approach, the analysis could grasp the various levels at which LLL policies are negotiated and display the interplay of macro-structures, regional environments and institutions/organizations as well as individual expectations. Table 3.1 illustrates the operationalization of the data matrix that guided the work.
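
A minimal sketch of such a heuristic matrix, with the levels and dimensions taken from the text and placeholder cell contents, might look as follows; the entries are hypothetical.

```python
# Levels of analysis crossed with the project's three theoretical frames.
levels = ["transnational", "national", "regional/local", "individual"]
dimensions = ["CPE", "LCR", "GOV"]

# matrix[level][dimension] collects coded evidence for one case (placeholders).
matrix = {level: {dim: [] for dim in dimensions} for level in levels}
matrix["regional/local"]["GOV"].append("network of implementing actors")
matrix["individual"]["LCR"].append("biographical interview, addressee 07")

for level in levels:
    print(level, matrix[level])
```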

In order to ensure the presentability and intelligibility of the results (see Footnote 2), a narrative approach to case study analysis was chosen, whose main task was one of “storytelling” aimed at highlighting what made each case unique and what difference it makes for LLL policymaking and for young people’s life courses. A crucial element of this entails establishing relations “between sets of relationships”, as argued above.

LLL policies were selected as starting points from which the cases themselves could be constructed and from which different stories could be developed. That stories can be told differently does not mean that they are arbitrary; rather, this refers to different ways of accounting for the embedding of the specific case in its context, namely the “diverging policy frameworks, patterns of policymaking, networks of implementation, political discourses and macro-structural conditions at local level” (see Palumbo et al., 2020, p. 220). Moreover, developing different narratives aimed at representing the various voices of the actors involved in the process—from policy design and appropriation through to implementation—and making the different stakeholders’ and addressees’ opinions visible, thus creating intelligible narratives for the cases (see Palumbo et al., 2020). The analysis of each case started from a selected entry point, from which a story was told. Mainly, two entry points were used. The first departed from the transversal dimension of the case and focused on the evolution of a policy in terms of its main objectives, target groups, governance patterns and so on, in order to highlight the intended and unintended effects of the “current version” of the policy within its context and according to the opinions of the actors interviewed. The second took biographies as starting points in an attempt to contextualize the life stories within the biographical constellations in which the young people came across the measure, the access procedures, and how their life trajectories continued during and possibly after their participation in the policy (see Palumbo et al., 2020 for examples of these narrative strategies).

4 Concluding Remarks

This chapter presented and discussed the methodological basis and requirements of conducting case studies in comparative research, such as those presented in the subsequent chapters of this volume. The Comparative Case Study approach suggested in the previous discussion offers productive and innovative ways to account sensitively for culture and context; it provides a useful heuristic that deals effectively with issues related to case construction, namely an emergent and dynamic approach to casing, instead of simply assuming “bounded”, pre-defined cases as the object of research; it also offers a helpful procedural, configurational approach to “causality”; and, not least, a resourceful approach to comparison that allows researchers to respect the uniqueness and integrity of each case while at the same time yielding insights and results that transcend the idiosyncrasy of the single case. In sum, CCS offers a sound approach to CIE research that is culture- and context-sensitive.

Footnote 1: For a discussion of the concept of functional region, see Parreira do Amaral et al., 2020.

Footnote 2: This analytical move is in line with recent developments that aim at accounting for a cultural turn (Jameson, 1998) or ideational turn (Béland & Cox, 2011) in policy analysis methodology, called interpretive policy analysis (see Münch, 2016).

Bartlett, L., & Vavrus, F. (2017). Rethinking case study research: A comparative approach. Routledge.


Béland, D., & Cox, R. H. (Eds.). (2011). Ideas and politics in social science research . Oxford University Press.

Byrne, D. (2013). Introduction. Case-based methods: Why we need them; what they are; how to do them. In D. Byrne & C. C. Ragin (Eds.), The SAGE handbook of case-based methods (pp. 1–13). SAGE.

Carter, B., & Sealey, A. (2013). Reflexivity, realism and the process of casing. In D. Byrne & C. C. Ragin (Eds.), The SAGE handbook of case-based methods (pp. 69–83). SAGE.

Coleman, S., & von Hellermann, P. (Eds.). (2012). Multi-sited ethnography: Problems and possibilities in the translocation of research methods . Routledge.

Dale, R., & Parreira do Amaral, M. (2015). Discursive and institutional opportunity structures in the governance of educational trajectories. In M. P. do Amaral, R. Dale, & P. Loncle (Eds.), Shaping the futures of young Europeans: Education governance in eight European countries (pp. 23–42). Symposium Books.

Dale, R., & Robertson, S. (2009). Beyond methodological ʻismsʼ in comparative education in an era of globalisation. In R. Cowen & A. M. Kazamias (Eds.), International handbook of comparative education (pp. 1113–1127). Springer Netherlands.


Dale, R. (2015). Globalisierung in der Vergleichenden Erziehungswissenschaft re-visited: Die Relevanz des Kontexts des Kontexts. In M. P. do Amaral & S. K. Amos (Eds.), Internationale und Vergleichende Erziehungswissenschaft: Geschichte, Theorie, Methode und Forschungsfelder (pp. 171–187). Waxmann.

Jameson, F. (1998). The cultural turn. Selected writings on the postmodern. 1983–1998 . Verso.

Lange, S., & Parreira do Amaral, M. (2018). Leistungen und Grenzen internationaler und vergleichender Forschung— Regulative Ideen für die methodologische Reflexion? Tertium Comparationis, 24 (1), 5–31.

Lingard, B., & Thompson, G. (2016). Doing time in the sociology of education. British Journal of Sociology of Education, 38 (1), 1–12.


Little, A. (2000). Development studies and comparative education: Context, content, comparison and contributors. Comparative Education, 36 (3), 279–296.

Marcus, G. E. (1995). Ethnography in/of the world system: The emergence of multi-sited ethnography. Annual Review of Anthropology, 24 , 95–117.

McLeod, J. (2016). Marking time, making methods: Temporality and untimely dilemmas in the sociology of youth and educational change. British Journal of Sociology of Education, 38 (1), 13–25.

Mjoset, L. (2013). The Contextualist approach to social science methodology. In D. Byrne & C. C. Ragin (Eds.), The SAGE handbook of case-based methods (pp. 39–68). SAGE.

Münch, S. (2016). Interpretative Policy-Analyse. Eine Einführung . Springer VS.


Palumbo, M., Benasso, S., Pandolfini, V., Verlage, T., & Walther, A. (2019). Work Package 7 Cross-case and cross-national Report. YOUNG_ADULLLT Working Paper. http://www.young-adulllt.eu/publications/working-paper/index.php . Accessed 31 Aug 2021.

Palumbo, M., Benasso, S., & Parreira do Amaral, M. (2020). Telling the story. Exploring lifelong learning policies for young adults through a narrative approach. In M. P. do Amaral, S. Kovacheva, & X. Rambla (Eds.), Lifelong learning policies for young adults in Europe. Navigating between knowledge and economy (pp. 217–239). Policy Press.

Parreira do Amaral, M. (2014). Globalisierung im Fokus vergleichender Forschung. In C. Freitag (Ed.), Methoden des Vergleichs. Komparatistische Methodologie und Forschungsmethodik in interdisziplinärer Perspektive (pp. 117–138). Budrich.

Parreira do Amaral, M. (2020). Lifelong learning policies for young adults in Europe: A conceptual and methodological discussion. In M. P. do Amaral, S. Kovacheva, & X. Rambla (Eds.), Lifelong learning policies for young adults in Europe. Navigating between knowledge and economy (pp. 3–20). Policy Press.

Parreira do Amaral, M., Lowden, K., Pandolfini, V., & Schöneck, N. (2020). Coordinated policy-making in lifelong learning: Functional regions as dynamic units. In M. P. do Amaral, S. Kovacheva, & X. Rambla (Eds.), Lifelong learning policies for young adults in Europe navigating between knowledge and economy (pp. 21–42). Policy Press.

Ragin, C. C., & Becker, H. (1992). What is a case? Cambridge UP.

Ragin, C. C. (2000). Fuzzy set social science . Chicago UP.

Rhinard, M. (2019). The Crisisification of policy-making in the European Union. Journal of Common Market Studies, 57 (3), 616–633.

Robertson, S., & Dale, R. (2017). Comparing policies in a globalizing world. Methodological reflections. Educação & Realidade, 42 (3), 859–876.

Schriewer, J. (1990). The method of comparison and the need for externalization: Methodological criteria and sociological concepts. In J. Schriewer & B. Holmes (Eds.), Theories and methods in comparative education (pp. 25–83). Lang.

Steiner-Khamsi, G. (2006a). The development turn in comparative and international education. European Education: Issues and Studies, 38 (3), 19–47.

Steiner-Khamsi, G. (2006b). U.S. social and educational research during the cold war: An interview with Harold J. Noah. European Education: Issues and Studies, 38 (3), 9–18.

Vavrus, F., & Bartlett, L. (2006). Comparatively knowing: Making a case for vertical case study. Current Issues in Comparative Education, 8 (2), 95–103.

Vavrus, F., & Bartlett, L. (Eds.). (2009). Critical approaches to comparative education: Vertical case studies from Africa, Europe, the Middle East, and the Americas . Palgrave Macmillan.

Wiseman, A. W., & Anderson, E. (2013). Introduction to part 3: Conceptual and methodological developments. In A. W. Wiseman & E. Anderson (Eds.), Annual review of comparative and international education 2013 (international perspectives on education and society, Vol. 20) (pp. 85–90). Emerald Group Publishing Limited.


About this chapter

do Amaral, M.P. (2022). Comparative Case Studies: Methodological Discussion. In: Benasso, S., Bouillet, D., Neves, T., Parreira do Amaral, M. (eds) Landscapes of Lifelong Learning Policies across Europe. Palgrave Studies in Adult Education and Lifelong Learning. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-030-96454-2_3

Download citation

DOI : https://doi.org/10.1007/978-3-030-96454-2_3

Published : 25 May 2022

Publisher Name : Palgrave Macmillan, Cham

Print ISBN : 978-3-030-96453-5

Online ISBN : 978-3-030-96454-2

eBook Packages : Education Education (R0)

Share this chapter

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Publish with us

Policies and ethics

  • Find a journal
  • Track your research


Comparative Research

Most social sciences know a specific comparative research methodology. Its definition often refers to countries and cultures at the same time, because cultural differences between countries can be rather small (e.g., among the Scandinavian countries), whereas very different cultural or ethnic groups may live within one country (e.g., minorities in the United States). Comparative studies face problems on every level of research, from theory to the types of research questions, operationalization, instruments, sampling, and the interpretation of results.

The major problem in comparative research, regardless of the discipline, is that all aspects of the analysis from theory to datasets may vary in definitions and/or categories. As the objects to compare usually belong to different systemic contexts, the establishment of equivalence and comparability is thus a major challenge of comparative research. This is often “operationalized” as functional equivalence, i.e., the functionality of the research objects within the different system contexts must be equivalent. Neither equivalence nor its absence, “bias,” can be presumed. It has to be analyzed and tested for on all the different levels of the research process.

Equivalence And Bias

Equivalence has to be analyzed and established on at least three levels: the levels of the construct, the item, and the method (van de Vijver & Tanzer 1997). Whenever a test on any of these levels shows negative results, a cultural bias must be suspected. Thus, bias on these three levels can be described as the opposite of equivalence. Van de Vijver and Leung (1997) define bias as variance within certain variables or indicators that is caused by culturally unspecific measurement. For example, a media content analysis could examine the amount of foreign affairs coverage in one variable by measuring the length of newspaper articles. If, however, newspaper articles in country A are generally longer than they are in country B, irrespective of their topic, a sum or mean index of foreign affairs coverage would almost inevitably lead to the conclusion that the amount of foreign affairs coverage in country A is higher than in country B. This outcome would be hardly surprising and would miss the research question, because the countries' average amount of foreign affairs coverage is not related to the national average length of articles. To avoid cultural bias, the results must be standardized or weighted, for example by the mean article length.
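As a rough sketch of such a weighting step, the following Python snippet (pandas assumed; all figures invented for illustration) contrasts a raw sum of foreign-affairs article lengths with a sum standardized by each country's mean article length:

```python
import pandas as pd

# Hypothetical content-analysis data: article lengths (words) by country and topic.
articles = pd.DataFrame({
    "country": ["A", "A", "A", "B", "B", "B"],
    "foreign": [True, False, True, True, False, False],
    "length":  [900, 800, 1000, 450, 400, 350],
})

# Raw sums are culturally biased: country A writes longer articles on every topic.
raw = articles[articles.foreign].groupby("country")["length"].sum()

# Standardize each article by its country's mean length before comparing.
mean_len = articles.groupby("country")["length"].transform("mean")
articles["std_length"] = articles["length"] / mean_len
weighted = articles[articles.foreign].groupby("country")["std_length"].sum()

print(raw, weighted, sep="\n")
```

After standardization, the comparison reflects the share of coverage relative to each country's typical article length rather than the absolute word counts.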

To find out whether construct equivalence can be assumed, the researcher will generally require external data and rather complex procedures of culture-specific construct validation. Ideally, this includes analyses of the external structure, i.e., theoretical references to other constructs, as well as an examination of the latent or internal structure. The internal structure consists of the relationships between the construct's sub-dimensions. It can be tested using confirmatory factor analyses, multidimensional scaling, or item analyses. Equivalence can be assumed if the construct validation has been successful for every culture and if the internal and external structures are identical in every country. However, it is hardly possible to prove construct equivalence beyond any doubt (Wirth & Kolb 2004).
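One simple numerical check on internal structure is the similarity of factor loadings across cultures. The Python sketch below (loadings are hypothetical) computes Tucker's congruence coefficient, one common index for this purpose; values near 1 suggest the same internal structure in both cultures:

```python
import numpy as np

def tucker_congruence(loadings_a, loadings_b):
    """Tucker's congruence coefficient between two factor-loading vectors."""
    a, b = np.asarray(loadings_a), np.asarray(loadings_b)
    return a @ b / np.sqrt((a @ a) * (b @ b))

# Hypothetical loadings of five items on one construct dimension, per country
print(tucker_congruence([0.7, 0.6, 0.8, 0.5, 0.6],
                        [0.6, 0.7, 0.8, 0.4, 0.7]))
```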

Even with a given construct equivalence, bias can still occur on the item level. The verbalization of items in surveys and of definitions and categories in content analyses can cause bias due to culture-specific connotations. Item bias is mostly evoked by bad, in the sense of nonequivalent, translation or by culture-specific questions and categories (van de Vijver & Leung 1997). Compared to the complex procedures discussed in the case of construct equivalence, testing for item bias is rather simple (once construct equivalence has been established): persons from different cultures who take the same positions or ranks on an imaginary construct scale must show the same answering attitude toward every item that measures the construct. Statistically, the correlation of each single item with the total (sum) score has to be identical in every culture, as test theory generally uses the total score to estimate the position of any individual on the construct scale. In brief, equivalence on the item level is established whenever the same sub-dimensions or issues can be used to explain the same theoretical construct in every country (Wirth & Kolb 2004).
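A minimal Python sketch of this item-bias check (simulated answers; pandas assumed) computes item-total correlations per culture so they can be compared side by side:

```python
import numpy as np
import pandas as pd

def item_total_correlations(items: pd.DataFrame) -> pd.Series:
    """Correlation of each item with the total (sum) score."""
    total = items.sum(axis=1)
    return items.apply(lambda col: col.corr(total))

# Hypothetical 4-item scale administered in two countries (random answers)
rng = np.random.default_rng(0)
for country in ("A", "B"):
    data = pd.DataFrame(rng.integers(1, 6, size=(100, 4)),
                        columns=[f"item{i}" for i in range(1, 5)])
    print(country, item_total_correlations(data).round(2).to_dict())
```

Markedly different correlations for the same item across countries would flag that item as a candidate for bias.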

When the instruments are ready for application, method equivalence comes to the fore. Method equivalence consists of sample equivalence, instrument equivalence, and administration equivalence. Violation of any of these equivalences produces a method bias. Sample equivalence refers to an equivalent selection of subjects or units of analysis. Instrument equivalence deals with the examination of whether people in every culture agree to take part in the study equivalently, and whether they are used to the instruments equivalently (Lauf & Peter 2001). Finally, a bias on the administration level can occur due to culture-specific attitudes of the interviewers that might produce culture-specific answers. Another source of administration bias can be found in socio-demographic differences between the various national interviewer teams (van de Vijver & Tanzer 1997).

The Role Of Theory

Theory plays a major role in three dimensions when looking for a comparative research strategy: theoretical diversity, theory drivenness, and contextual factors (Wirth & Kolb 2004). Swanson (1992) distinguishes between three principal strategies of dealing with international theoretical diversity. A common possibility is called the avoidance strategy. Many international comparisons are made by teams that come from one culture or nation only. Usually, their research interests are restricted to their own (scientific) socialization. Within this monocultural context, broad approaches cannot be applied and "intertheoretical" questions cannot be answered. This strategy includes atheoretical and unitheoretical (referring to one national theory) studies with or without contextualization (van de Vijver & Leung 2000; Wirth & Kolb 2004).

The pretheoretical strategy tries to avoid cultural and theoretical bias in another way: these studies are undertaken without a strict theoretical background until the results are to be interpreted. The advantage of this strategy lies in exploration, i.e., in developing new theories. However, following the strict principles of critical rationalism, the missing theoretical background means that testing theoretically deduced hypotheses is not possible (Popper 1994). Most of the results remain on a descriptive level and never reach theoretical diversity. Besides, the instruments for pretheoretical studies must be almost "holistic," in order to integrate every theoretical construct conceivable for the interpretation. These studies are mostly contextualized and can, thus, become rather extensive (Swanson 1992).

Finally, when a research team develops a meta-theoretical orientation to build a framework for the basic theories and research questions, the data can be analyzed using different theoretical backgrounds. This meta-theoretical strategy allows the extensive use of all data and contextual factors, producing, however, quite a variety of often very different results, which are not easily summarized in one report (Swanson 1992). Obviously, the higher the level of theoretical diversity, the greater the effort required to establish construct equivalence.

Research Questions

Van de Vijver and Leung (1996, 1997) distinguish between two types of research questions: structure-oriented questions are mostly interested in the relationship between certain variables, whereas level-oriented questions focus on the parameter values. If, for example, a knowledge gap study analyzes the relationship between the knowledge gained from television news by recipients with high and low socio-economic status (SES) in the UK and the US, the question is structure oriented, because the focus is on a national relationship of knowledge indices and the mean gain of knowledge is not taken into account. Usually, structure-oriented data require correlation or regression analyses. If the main interest of the study is a comparison of the mean gain of knowledge of people with low SES in the UK and the US, the research question is level oriented, because the two knowledge indices of the two nations are to be compared. In this case, one would most probably use analyses of variance. The risk for cultural bias is the same for both kinds of research questions.
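The contrast can be made concrete in a few lines of Python (simulated scores; scipy assumed): the structure-oriented question correlates SES with knowledge gain within each country, while the level-oriented question compares group means across countries.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical knowledge-gain scores by SES group in two countries
uk_low, uk_high = rng.normal(4, 1, 80), rng.normal(6, 1, 80)
us_low, us_high = rng.normal(3, 1, 80), rng.normal(6, 1, 80)

# Structure-oriented: the SES-knowledge relationship within each country
ses = np.r_[np.zeros(80), np.ones(80)]          # 0 = low SES, 1 = high SES
r_uk, _ = stats.pearsonr(np.r_[uk_low, uk_high], ses)
r_us, _ = stats.pearsonr(np.r_[us_low, us_high], ses)

# Level-oriented: do low-SES means differ between the two countries?
t, p = stats.ttest_ind(uk_low, us_low)
print(f"r(UK)={r_uk:.2f}, r(US)={r_us:.2f}; level test p={p:.3f}")
```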

Emic And Etic Strategies Of Operationalization

Before the operationalization of an international comparison, the research team has to analyze construct equivalence to prove comparability. In the case of missing internal construct equivalence, the construct cannot be measured equivalently in every country. The decision of whether or not to use the same instruments in every country does not have any impact on this problem of missing construct equivalence. An emic approach could solve this problem. The operationalization for the measurement of the construct(s) is developed nationally, so that the culture-specific adequacy of each of the national instruments will be high. Comparison on the construct level remains possible, even though the instruments vary culturally, because functional equivalence has been established on the construct level by the culture-specific measurement. In general, this procedure will even be possible if national instruments already exist.

As measurement differs from culture to culture, the integration of the national results can be very difficult. Strictly speaking, this disadvantage of emic studies means that only structure-oriented outcomes can be interpreted, and only after a thorny validation process. It has to be proven that the measurements with different indicators on different scales really lead to data on equivalent constructs. By using external reference data from every culture, complex weighting and standardization procedures can possibly lead to a valid equalization of levels and variance (van de Vijver & Leung 1996, 1997). In research practice, emic measuring and data analysis is often used to cast light on cultural differences.

If construct equivalence can be assumed after an in-depth analysis, an etic modus operandi could be recommended. In this logic, approaching the different cultures by using the same or a slightly adapted instrument is valid because the constructs are “functioning” equally in every culture. Consequently, an emic proceeding should most probably come to similar instruments in every culture. Reciprocally, an etic approach must lead to bias and measurement artifacts when applied under the circumstances of missing construct equivalence.

It is obvious that the advantages of emic proceedings are not only the adequate measurement of culture-specific elements, but also the possible inclusion of, e.g., idiographic elements of each culture. Thus, this approach can be seen as a compromise between qualitative and quantitative methodologies. Comparative researchers sometimes suggest analyzing cultural processes in a holistic way without crushing them into variables, arguing that psychometric, quantitative data collection would be suitable for similar cultures only. As an objection to this simplification, one should remember the emic approach's potential to provide researchers with comparable data, as described above. In contrast, holistic analyses produce culture-specific outcomes that will not be comparable; the problem of equivalence and bias has merely been moved to the interpretation of results.

Adaptation Of The Instruments

Difficulties in establishing equivalence are regularly linked to linguistic problems. How can a researcher try to establish functional equivalence without knowledge of every language of the cultures under examination? For a linguistic adaptation of the theoretical background as well as for the instruments, one can again discriminate between “more etic” and “more emic” approaches.

Translation-oriented approaches produce two translated versions of the text: one in the "foreign" language and one after retranslation into the original language. The latter version can be compared to the original version to evaluate the translation. Note that this method produces etically formed instruments, which can only work whenever functional equivalence has been established on every superior level. Van de Vijver and Tanzer (1997) call this procedure application of an instrument in another language. In a "more emic" cultural adaptation, cultural singularities can be included if, e.g., culture-specific connotations are counterbalanced by a different item formulation.

Purely emic approaches develop entirely culture-specific instruments without translation. Two assembly approaches are available (van de Vijver & Tanzer 1997). First, in the committee approach, an international interdisciplinary group of experts on the cultures, languages, and research field decides whether the instruments are to be formed culture-specifically or whether a cultural adaptation will be sufficient. Second, the dual-focus approach tries to find a compromise between literal, grammatical, syntactical, and construct equivalence. Native speakers and/or bilinguals arrange the different language versions together with the research team in a multistep procedure (Erkut et al. 1999).

Sampling

Usually, researchers use personal preference and accessibility of data to select the countries or cultures to study. Forming an atheoretical sample in this way avoids many problems (but not cultural bias!); at the same time, it forgoes some advantages. Przeworski and Teune (1970) suggest two systematic and theory-driven approaches. The quasi-experimental most similar systems design tries to stress cultural differences. To minimize the possible causes for the differences, those countries are chosen that are the "most similar," so that the few dissimilarities between these countries are most likely to be the reason for the different outcomes. Whenever the hypotheses highlight intercultural similarities, the most different systems design is appropriate. Here, in a kind of inverted quasi-experimental logic, the focus lies on similarities between cultures, even though the cultures differ in the greatest possible way (Kolb 2004; Wirth & Kolb 2004).

Random sampling and representativeness play a minor role in international comparisons. The number of states in the world is limited and a normal distribution for the social factors under examination, i.e., the precondition of random sampling, cannot be assumed. Moreover, many statistical methods meet problems when applied under the condition of a low number of cases (Hartmann 1995).

Data Analysis And Interpretation Of Results

Given the presented conceptual and methodological problems of international research, special care must be taken over data analysis and the interpretation of results. As the implementation of every single variable of relevance is hardly accomplishable in international research, the documentation of methods, work process, and data analysis is even more important here than in single-culture studies. Thus, the evaluation of the results must ensue in additional studies. At any rate, an intensive use of different statistical analyses beyond the "general" comparison of arithmetic means can lead to further validation of the results and especially of the interpretation. Van de Vijver and Leung (1997) present a widespread summary of data analysis procedures, including structure- and level-oriented approaches, examples of SPSS syntax, and references.

Following Przeworski’s and Teune’s research strategies (1970), results of comparative research can be classified into differences and similarities between the research objects. For both types, Kohn (1989) introduces two separate ways of interpretation. Intercultural similarities seem to be easier to interpret, at first glance. The difficulties emerge when regarding equivalence on the one hand (i.e., there may be covert cultural differences within culturally biased similarities), and when regarding the causes of similarities on the other. The causes will be especially hard to determine in the case of “most different” countries, as different combinations of different indicators can theoretically produce the same results. Esser (2000) refers to diverse theoretical backgrounds that will lead either to differences (e.g., action-theoretically based micro-research) or to similarities (e.g., system-theoretically oriented macro-approaches). In general, the starting point of Przeworski and Teune (1970) seems to be the easier way to come to interesting results and interpretations, using the quasi-experimental approach for “most similar systems with different outcome.” In addition to the advantages of causal interpretation, the “most similar” systems are likely to be equivalent from the top level of the construct to the bottom level of indicators and items. “Controlling” other influences can minimize methodological problems and makes analysis and interpretation more valid.

References:

  • Erkut, S., Alarcón, O., García Coll, C., Tropp, L. R., & Vázquez García, H. A. (1999). The dual-focus approach to creating bilingual measures. Journal of Cross-Cultural Psychology, 30(2), 206–218.
  • Esser, F. (2000). Journalismus vergleichen: Journalismustheorie und komparative Forschung [Comparing journalism: Journalism theory and comparative research]. In M. Löffelholz (ed.), Theorien des Journalismus: Ein diskursives Handbuch [Journalism theories: A discoursal handbook]. Wiesbaden: Westdeutscher, pp. 123–146.
  • Esser, F., & Pfetsch, B. (eds.) (2004). Comparing political communication: Theories, cases, and challenges. Cambridge: Cambridge University Press.
  • Hartmann, J. (1995). Vergleichende Politikwissenschaft: Ein Lehrbuch [Comparative political science: A textbook]. Frankfurt: Campus.
  • Kohn, M. L. (1989). Cross-national research as an analytic strategy. In M. L. Kohn (ed.), Cross-national research in sociology. Newbury Park, CA: Sage, pp. 77–102.
  • Kolb, S. (2004). Voraussetzungen für und Gewinn bringende Anwendung von quasiexperimentellen Ansätzen in der kulturvergleichenden Kommunikationsforschung [Preconditions for and advantageous application of quasi-experimental approaches in comparative communication research]. In W. Wirth, E. Lauf, & A. Fahr (eds.), Forschungslogik und -design in der Kommunikationswissenschaft, vol. 1: Einführung, Problematisierungen und Aspekte der Methodenlogik aus kommunikationswissenschaftlicher Perspektive [Logic of inquiry and research designs in communication research, vol. 1: Introduction, problematization, and aspects of methodology from a communications point of view]. Cologne: Halem, pp. 157–178.
  • Lauf, E., & Peter, J. (2001). Die Codierung verschiedensprachiger Inhalte: Erhebungskonzepte und Gütemaße [Coding of content in different languages: Concepts of inquiry and quality indices]. In E. Lauf & W. Wirth (eds.), Inhaltsanalyse: Perspektiven, Probleme, Potentiale [Content analysis: Perspectives, problems, potentialities]. Cologne: Halem, pp. 199–217.
  • Popper, K. R. (1994). Logik der Forschung [Logic of inquiry], 10th edn. Tübingen: Mohr.
  • Przeworski, A., & Teune, H. (1970). The logic of comparative social inquiry. Malabar, FL: Krieger.
  • Swanson, D. L. (1992). Managing theoretical diversity in cross-national studies of political communication. In J. G. Blumler, J. M. McLeod, & K. E. Rosengren (eds.), Comparatively speaking: Communication and culture across space and time. Newbury Park, CA: Sage, pp. 19–34.
  • Vijver, F. van de, & Leung, K. (1996). Methods and data analysis of comparative research. In J. W. Berry, Y. H. Poortinga, & J. Pandey (eds.), Handbook of cross-cultural research. Boston, MA: Allyn and Bacon, pp. 257–300.
  • Vijver, F. van de, & Leung, K. (1997). Methods and data analysis of cross-cultural research. Thousand Oaks, CA: Sage.
  • Vijver, F. van de, & Leung, K. (2000). Methodological issues in psychological research on culture. Journal of Cross-Cultural Psychology, 31(1), 33–51.
  • Vijver, F. van de, & Tanzer, N. K. (1997). Bias and equivalence in cross-cultural assessment: An overview. European Journal of Applied Psychology, 47(4), 263–279.
  • Wirth, W., & Kolb, S. (2004). Designs and methods of comparative political communication research. In F. Esser & B. Pfetsch (eds.), Comparing political communication: Theories, cases, and challenges. Cambridge: Cambridge University Press, pp. 87–111.


Comparative Research


Although not everyone would agree, comparing is not always bad. Comparing things can also bring you a handful of benefits. For instance, there are times in our lives when we feel lost. You may not be getting the job that you want or the body that you have been aiming for. Then you happen to cross paths with an old friend who landed the job you always wanted. This scenario may dent your self-esteem, knowing that this friend got what you want while you didn't. Or you can choose to look at your friend as proof that your desire is actually attainable. Come up with a plan to achieve your personal development goal. Perhaps ask for tips from this person or from the people who inspire you. According to an article posted on brit.co, licensed master social worker and therapist Kimberly Hershenson said that comparing yourself to someone successful can be an excellent self-motivator to work on your goals.

Aside from self-improvement, as a researcher, you should know that comparison is an essential method in scientific studies, such as experimental research and descriptive research . Through this method, you can uncover the relationship between two or more variables of your project in the form of comparative analysis .

What is Comparative Research?

Comparative research aims to compare two or more variables of a study. Experts usually apply it in the social sciences to compare countries and cultures across a particular area or the entire world. Despite its proven effectiveness, keep in mind that countries differ in how they share data. It helps, therefore, to consider the factors that affect the gathering of specific information.

Quantitative and Qualitative Research Methods in Comparative Studies

In comparing variables, the statistical and mathematical data collection and analysis that quantitative research methodology naturally uses to uncover the correlational connection of the variables can be essential. Additionally, since quantitative research requires a specific research question, this method can help you quickly come up with one particular comparative research question.

The goal of comparative research is to draw a solution out of the similarities and differences between the focused variables. Through non-experimental or qualitative research, you can include this type of research method in your comparative research design.

13+ Comparative Research Examples

Know more about comparative research by going over the following examples. You can download these zipped documents in PDF and MS Word formats.

1. Comparative Research Report Template

2. Business Comparative Research Template

3. Comparative Market Research Template

4. Comparative Research Strategies Example

5. Comparative Research in Anthropology Example

6. Sample Comparative Research Example

7. Comparative Area Research Example

8. Comparative Research on Women's Employment Example

9. Basic Comparative Research Example

10. Comparative Research in Medical Treatments Example

11. Comparative Research in Education Example

12. Formal Comparative Research Example

13. Comparative Research Designs Example

14. Causal Comparative Research in DOC

Best Practices in Writing an Essay for Comparative Research in Visual Arts

If you are going to write an essay for a comparative research paper, this section is for you. You must know that there are common mistakes that students make in essay writing. To avoid those mistakes, follow the pointers below.

1. Compare the Artworks Not the Artists

One of the mistakes that students make when writing a comparative essay is comparing the artists instead of the artworks. Unless your instructor asked you to write a biographical essay, focus your writing on the works of the artists that you choose.

2. Consult Your Instructor

There is a broad range of information that you can find on the internet for your project. Some students, however, prefer to choose their images randomly. In doing so, you may not create a successful comparative study. Therefore, we recommend that you discuss your selections with your teacher.

3. Avoid Redundancy

It is common for students to repeat ideas that they have already listed in the comparison part. Keep in mind that the space for this activity is limited. Thus, it is crucial to reserve each space for more thoroughly argued ideas.

4. Be Minimal

Unless instructed otherwise, it is practical to include only a few items (artworks). In this way, you can focus on developing well-argued information for your study.

5. Master the Assessment Method and the Goals of the Project

We get it. You are doing this project because your instructor told you so. However, you can make your study more valuable by understanding the goals of doing the project. Know how you can apply this new learning. You should also know the criteria that your teachers use to assess your output. It will give you a chance to maximize the grade that you can get from this project.

Comparing things is one way to know what to improve in various aspects. Whether you are aiming to attain a personal goal or attempting to find a solution to a certain task, you can accomplish it by knowing how to conduct a comparative study. Use this content as a tool to expand your knowledge about this research methodology .



Lau F, Kuziemsky C, editors. Handbook of eHealth Evaluation: An Evidence-based Approach [Internet]. Victoria (BC): University of Victoria; 2017 Feb 27.


Chapter 10. Methods for Comparative Studies

Francis Lau and Anne Holbrook

10.1. Introduction

In eHealth evaluation, comparative studies aim to find out whether group differences in eHealth system adoption make a difference in important outcomes. These groups may differ in their composition, the type of system in use, and the setting where they work over a given time duration. The comparisons are to determine whether significant differences exist for some predefined measures between these groups, while controlling for as many of the conditions as possible such as the composition, system, setting and duration.

According to the typology by Friedman and Wyatt (2006), comparative studies take on an objective view where events such as the use and effect of an eHealth system can be defined, measured and compared through a set of variables to prove or disprove a hypothesis. For comparative studies, the design options are experimental versus observational and prospective versus retrospective. The quality of eHealth comparative studies depends on such aspects of methodological design as the choice of variables, sample size, sources of bias, confounders, and adherence to quality and reporting guidelines.

In this chapter we focus on experimental studies as one type of comparative study and their methodological considerations that have been reported in the eHealth literature. Also included are three case examples to show how these studies are done.

10.2. Types of Comparative Studies

Experimental studies are one type of comparative study, where a sample of participants is identified and assigned to different conditions for a given time duration, then compared for differences. An example is a hospital with two care units, where one is assigned a CPOE system to process medication orders electronically while the other continues its usual practice without a CPOE. The participants in the unit assigned to the CPOE are called the intervention group, and those assigned to usual practice are the control group. The comparison can be performance or outcome focused, such as the ratio of correct orders processed or the occurrence of adverse drug events in the two groups during the given time period. Experimental studies can take on a randomized or non-randomized design. These are described below.

10.2.1. Randomized Experiments

In a randomized design, the participants are randomly assigned to two or more groups using a known randomization technique such as a random number table. The design is prospective in nature since the groups are assigned concurrently, after which the intervention is applied then measured and compared. Three types of experimental designs seen in eHealth evaluation are described below ( Friedman & Wyatt, 2006 ; Zwarenstein & Treweek, 2009 ).

Randomized controlled trials (RCTs) – In RCTs, participants are randomly assigned to an intervention or a control group. The randomization can occur at the patient, provider or organization level, which is known as the unit of allocation. For instance, at the patient level one can randomly assign half of the patients to receive EMR reminders while the other half do not. At the provider level, one can assign half of the providers to receive the reminders while the other half continue with their usual practice. At the organization level, such as a multisite hospital, one can randomly assign EMR reminders to some of the sites but not others.

Cluster randomized controlled trials (cRCTs) – In cRCTs, clusters of participants are randomized rather than individual participants, since they are found in naturally occurring groups, such as living in the same communities. For instance, clinics in one city may be randomized as a cluster to receive EMR reminders while clinics in another city continue their usual practice.

Pragmatic trials – Unlike RCTs, which seek to find out whether an intervention such as a CPOE system works under ideal conditions, pragmatic trials are designed to find out if the intervention works under usual conditions. The goal is to make the design and findings relevant to and practical for decision-makers to apply in usual settings. As such, pragmatic trials have few criteria for selecting study participants, flexibility in implementing the intervention, usual practice as the comparator, the same compliance and follow-up intensity as usual practice, and outcomes that are relevant to decision-makers.
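As an aside on mechanics, the Python sketch below (a hypothetical helper, not taken from any study cited in this chapter) shows one common way to generate a patient-level allocation list with permuted blocks, which keeps the two arms balanced over time; stratified randomization would simply run this once per stratum:

```python
import random

def block_randomize(n_patients, block_size=6,
                    arms=("intervention", "control"), seed=2022):
    """Permuted-block randomization with equal allocation within each block."""
    assert block_size % len(arms) == 0, "block must divide evenly across arms"
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_patients:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)              # random order within each block
        allocation.extend(block)
    return allocation[:n_patients]

print(block_randomize(10))
```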

10.2.2. Non-randomized Experiments

Non-randomized design is used when it is neither feasible nor ethical to randomize participants into groups for comparison. It is sometimes referred to as a quasi-experimental design. The design can involve the use of prospective or retrospective data from the same or different participants as the control group. Three types of non-randomized designs are described below ( Harris et al., 2006 ).

Intervention group only with pretest and post-test design – This design involves only one group, where a pretest or baseline measure is taken as the control period, the intervention is implemented, and a post-test measure is taken as the intervention period for comparison. For example, one can compare the rates of medication errors before and after the implementation of a CPOE system in a hospital. To increase study quality, one can add a second pretest period to decrease the probability that the pretest and post-test difference is due to chance, such as an unusually low medication error rate in the first pretest period. Other ways to increase study quality include adding an unrelated outcome, such as patient case-mix, that should not be affected; removing the intervention to see if the difference remains; and removing then re-implementing the intervention to see if the differences vary accordingly.

Intervention and control groups with post-test design – This design involves two groups, where the intervention is implemented in one group and compared with a second group without the intervention, based on a post-test measure from both groups. For example, one can implement a CPOE system in one care unit as the intervention group, with a second unit as the control group, and compare the post-test medication error rates in both units over six months. To increase study quality, one can add one or more pretest periods to both groups, or implement the intervention in the control group at a later time to measure for similar but delayed effects.

Interrupted time series (ITS) design – In an ITS design, multiple measures are taken from one group in equal time intervals, interrupted by the implementation of the intervention. The multiple pretest and post-test measures decrease the probability that the differences detected are due to chance or unrelated effects. An example is to take six consecutive monthly medication error rates as the pretest measures, implement the CPOE system, then take another six consecutive monthly medication error rates as the post-test measures for comparison of error rate differences over 12 months. To increase study quality, one may add a concurrent control group for comparison, to be more convinced that the intervention produced the change.
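For the ITS design, a common analysis choice is segmented regression, with terms for the secular trend, the level change at the interruption, and the slope change afterwards. A minimal Python sketch with simulated monthly error rates (all numbers hypothetical; statsmodels assumed):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical monthly medication error rates: 6 pre- and 6 post-implementation
rng = np.random.default_rng(42)
months = np.arange(12)
post = (months >= 6).astype(int)            # 1 after CPOE go-live
time_since = np.where(post, months - 5, 0)  # months elapsed since go-live
errors = 10 - 0.1 * months - 2.5 * post + rng.normal(0, 0.3, 12)

df = pd.DataFrame({"errors": errors, "month": months,
                   "post": post, "time_since": time_since})

# Segmented regression: secular trend, level change, and slope change
model = smf.ols("errors ~ month + post + time_since", data=df).fit()
print(model.params)  # 'post' estimates the immediate level change at go-live
```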

10.3. Methodological Considerations

The quality of comparative studies is dependent on their internal and external validity. Internal validity refers to the extent to which conclusions can be drawn correctly from the study setting, participants, intervention, measures, analysis and interpretations. External validity refers to the extent to which the conclusions can be generalized to other settings. The major factors that influence validity are described below.

10.3.1. Choice of Variables

Variables are specific measurable features that can influence validity. In comparative studies, the choice of dependent and independent variables and whether they are categorical and/or continuous in values can affect the type of questions, study design and analysis to be considered. These are described below ( Friedman & Wyatt, 2006 ).

Dependent variables – These refer to the outcomes of interest; they are also known as outcome variables. An example is the rate of medication errors as an outcome in determining whether CPOE can improve patient safety.

Independent variables – These refer to variables that can explain the measured values of the dependent variables. For instance, the characteristics of the setting, participants and intervention can influence the effects of CPOE.

Categorical variables – These refer to variables with measured values in discrete categories or levels. Examples are the type of provider (e.g., nurses, physicians and pharmacists), the presence or absence of a disease, and a pain scale (e.g., 0 to 10 in increments of 1). Categorical variables are analyzed using non-parametric methods such as chi-square and odds ratios.

Continuous variables – These refer to variables that can take on infinite values within an interval, limited only by the desired precision. Examples are blood pressure, heart rate and body temperature. Continuous variables are analyzed using parametric methods such as the t-test, analysis of variance or multiple regression.
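To make the analysis pairing concrete, here is a minimal Python sketch (simulated, hypothetical data; scipy assumed) applying a chi-square test to a categorical outcome and a t-test to a continuous one:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Categorical outcome: adverse event (yes/no) by group -> chi-square test
table = np.array([[12, 88],    # intervention: events, non-events
                  [25, 75]])   # control: events, non-events
chi2, p_cat, dof, _ = stats.chi2_contingency(table)

# Continuous outcome: systolic blood pressure by group -> two-sample t-test
bp_intervention = rng.normal(135, 12, 60)
bp_control = rng.normal(142, 12, 60)
t, p_cont = stats.ttest_ind(bp_intervention, bp_control)

print(f"chi-square p={p_cat:.3f}, t-test p={p_cont:.3f}")
```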

10.3.2. Sample Size

Sample size is the number of participants to include in a study. It can refer to patients, providers or organizations depending on how the unit of allocation is defined. There are four parts to calculating sample size. They are described below ( Noordzij et al., 2010 ).

Significance level – This refers to the probability that a positive finding is due to chance alone. It is usually set at 0.05, which means having a less than 5% chance of drawing a false positive conclusion.

Power – This refers to the ability to detect the true effect based on a sample from the population. It is usually set at 0.8, which means having at least an 80% chance of drawing a correct conclusion.

Effect size – This refers to the minimal clinically relevant difference that can be detected between comparison groups. For continuous variables, the effect is a numerical value, such as a 10-kilogram weight difference between two groups. For categorical variables, it is a percentage, such as a 10% difference in medication error rates.

Variability – This refers to the population variance of the outcome of interest, which is often unknown and is estimated by way of the standard deviation (SD) from pilot or previous studies for continuous outcomes.

Table 10.1. Sample size equations for comparing two groups with continuous and categorical outcome variables (table not reproduced here).
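The table itself did not survive extraction. As a reconstruction, the standard normal-approximation equations for the size n of each group, consistent with the four parts above and with Noordzij et al. (2010), can be written as follows (textbook forms, not necessarily the exact notation of the original table):

```latex
% Continuous outcome: difference in means d, common standard deviation \sigma,
% significance level \alpha, power 1 - \beta
n = \frac{2\sigma^{2}\left(z_{1-\alpha/2} + z_{1-\beta}\right)^{2}}{d^{2}}

% Categorical outcome: difference in proportions p_1 - p_2
n = \frac{\bigl(p_1(1-p_1) + p_2(1-p_2)\bigr)\left(z_{1-\alpha/2} + z_{1-\beta}\right)^{2}}{(p_1 - p_2)^{2}}
```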

An example of sample size calculation for an RCT to examine the effect of CDS on improving the systolic blood pressure of hypertensive patients is provided in the Appendix. Refer to the Biomath website from Columbia University (n.d.) for a simple web-based sample size / power calculator.
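A minimal Python sketch of the continuous-outcome calculation (illustrative numbers only; this is not the Appendix example):

```python
from scipy import stats

def n_per_group_continuous(sd, diff, alpha=0.05, power=0.8):
    """Normal-approximation sample size per group for comparing two means."""
    z_alpha = stats.norm.ppf(1 - alpha / 2)   # e.g., 1.96 for alpha = 0.05
    z_beta = stats.norm.ppf(power)            # e.g., 0.84 for power = 0.8
    return 2 * sd**2 * (z_alpha + z_beta)**2 / diff**2

# e.g., detect a 10 mmHg difference in systolic BP, assuming an SD of 20 mmHg
print(round(n_per_group_continuous(sd=20, diff=10)))  # about 63 per group
```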

10.3.3. Sources of Bias

There are five common sources of biases in comparative studies. They are selection, performance, detection, attrition and reporting biases ( Higgins & Green, 2011 ). These biases, and the ways to minimize them, are described below ( Vervloet et al., 2012 ).

Selection or allocation bias – This refers to differences in the composition of comparison groups in terms of the response to the intervention. An example is having sicker or older patients in the control group than in the intervention group when evaluating the effect of EMR reminders. To reduce selection bias, one can apply randomization and concealment when assigning participants to groups, and ensure their compositions are comparable at baseline.

Performance bias – This refers to differences between groups in the care they received, aside from the intervention being evaluated. An example is the different ways by which reminders are triggered and used within and across groups, such as electronic, paper and phone reminders for patients and providers. To reduce performance bias, one may standardize the intervention and blind participants from knowing whether an intervention was received and which intervention was received.

Detection or measurement bias – This refers to differences between groups in how outcomes are determined. An example is where outcome assessors pay more attention to outcomes of patients known to be in the intervention group. To reduce detection bias, one may blind assessors from participants when measuring outcomes and ensure the same timing for assessment across groups.

Attrition bias – This refers to differences between groups in the ways that participants are withdrawn from the study. An example is a low rate of participant response in the intervention group despite having received reminders for follow-up care. To reduce attrition bias, one needs to acknowledge the dropout rate and analyze data according to an intent-to-treat principle (i.e., include data from those who dropped out in the analysis).

Reporting bias – This refers to differences between reported and unreported findings. Examples include biases in publication, time lag, citation, language and outcome reporting, depending on the nature and direction of the results. To reduce reporting bias, one may make the study protocol available with all pre-specified outcomes and report all expected outcomes in published results.

10.3.4. Confounders

Confounders are factors other than the intervention of interest that can distort the effect because they are associated with both the intervention and the outcome. For instance, in a study to demonstrate whether the adoption of a medication order entry system led to lower medication costs, there can be a number of potential confounders that affect the outcome. These may include the severity of illness of the patients, provider knowledge and experience with the system, and hospital policy on prescribing medications (Harris et al., 2006). Another example is the evaluation of the effect of an antibiotic reminder system on the rate of post-operative deep venous thromboses (DVTs). The confounders can be general improvements in clinical practice during the study, such as prescribing patterns and post-operative care, that are not related to the reminders (Friedman & Wyatt, 2006).

To control for confounding effects, one may consider the use of matching, stratification and modelling. Matching involves the selection of similar groups with respect to their composition and behaviours. Stratification involves the division of participants into subgroups by selected variables, such as comorbidity index to control for severity of illness. Modelling involves the use of statistical techniques such as multiple regression to adjust for the effects of specific variables such as age, sex and/or severity of illness ( Higgins & Green, 2011 ).
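To make the modelling option concrete, the Python sketch below (simulated, hypothetical data; statsmodels assumed) shows how a multiple regression adjusts an intervention estimate for a confounder such as severity of illness:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 200

# Hypothetical data: severity drives both system adoption and medication cost
severity = rng.normal(0, 1, n)
cpoe = (severity + rng.normal(0, 1, n) > 0).astype(int)  # sicker units adopt first
cost = 100 + 20 * severity - 5 * cpoe + rng.normal(0, 5, n)

df = pd.DataFrame({"cost": cost, "cpoe": cpoe, "severity": severity})

# Unadjusted vs. adjusted estimate of the CPOE effect on cost
print(smf.ols("cost ~ cpoe", data=df).fit().params["cpoe"])
print(smf.ols("cost ~ cpoe + severity", data=df).fit().params["cpoe"])
```

Because severity raises both the chance of adoption and the cost in the simulated data, the unadjusted coefficient is biased; adding the confounder to the model recovers an estimate close to the true effect of -5.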

10.3.5. Guidelines on Quality and Reporting

There are guidelines on the quality and reporting of comparative studies. The GRADE (Grading of Recommendations Assessment, Development and Evaluation) guidelines provide explicit criteria for rating the quality of studies in randomized trials and observational studies (Guyatt et al., 2011). The extended CONSORT (Consolidated Standards of Reporting Trials) Statements for non-pharmacologic trials (Boutron, Moher, Altman, Schulz, & Ravaud, 2008), pragmatic trials (Zwarenstein et al., 2008), and eHealth interventions (Baker et al., 2010) provide reporting guidelines for randomized trials.

The GRADE guidelines offer a system for rating the quality of evidence in systematic reviews and guidelines. In this approach, RCTs start as high-quality evidence and observational studies as low-quality evidence in support of estimates of intervention effects. For each outcome in a study, five factors may rate down the quality of evidence. The final quality of evidence for each outcome falls into one of four levels: high, moderate, low, or very low. These factors are listed below (for more details on the rating system, refer to Guyatt et al., 2011).

Design limitations – For RCTs these cover the lack of allocation concealment, lack of blinding, large loss to follow-up, trials stopped early, or selective outcome reporting.

Inconsistency of results – Variations in outcomes due to unexplained heterogeneity. An example is the unexpected variation of effects across subgroups of patients by severity of illness in the use of preventive care reminders.

Indirectness of evidence – Reliance on indirect comparisons due to restrictions in study populations, intervention, comparator or outcomes. An example is the 30-day readmission rate as a surrogate outcome for the quality of computer-supported emergency care in hospitals.

Imprecision of results – Studies with small sample sizes and few events typically have wide confidence intervals and are considered of low quality.

Publication bias – The selective reporting of results at the individual study level is already covered under design limitations, but is included here for completeness, as it is relevant when rating the quality of evidence across studies in systematic reviews.

The original CONSORT Statement has 22 checklist items for reporting RCTs. For non-pharmacologic trials, extensions have been made to 11 items; for pragmatic trials, to eight items. These items are listed below. For further details, readers can refer to Boutron and colleagues (2008) and the CONSORT website (CONSORT, n.d.).

Title and abstract – one item on the means of randomization used.

Introduction – one item on the background, rationale, and problem addressed by the intervention.

Methods – 10 items on participants, interventions, objectives, outcomes, sample size, randomization (sequence generation, allocation concealment, implementation), blinding (masking), and statistical methods.

Results – seven items on participant flow, recruitment, baseline data, numbers analyzed, outcomes and estimation, ancillary analyses, and adverse events.

Discussion – three items on interpretation, generalizability, and overall evidence.

The CONSORT Statement for eHealth interventions describes the relevance of the CONSORT recommendations to the design and reporting of eHealth studies, with an emphasis on Internet-based interventions for direct use by patients, such as online health information resources, decision aids and PHRs. Of particular importance is the need to clearly define the intervention components, their role in the overall care process, the target population, the implementation process, primary and secondary outcomes, denominators for outcome analyses, and real-world potential (for details refer to Baker et al., 2010).

10.4. Case Examples

10.4.1. Pragmatic RCT in Vascular Risk Decision Support

Holbrook and colleagues (2011) conducted a pragmatic RCT to examine the effects of a CDS intervention on vascular care and outcomes for older adults. The study is summarized below.

Setting – Community-based primary care practices with EMRs in one Canadian province.

Participants – English-speaking patients 55 years of age or older with diagnosed vascular disease, no cognitive impairment and not living in a nursing home, who had a provider visit in the past 12 months.

Intervention – A web-based individualized vascular tracking and advice CDS system for eight top vascular risk factors and two diabetic risk factors, for use by both providers and patients and their families. Providers and staff could update the patient's profile at any time, and the CDS algorithm ran nightly to update recommendations and the colour highlighting used in the tracker interface. Intervention patients had web access to the tracker, a print version mailed to them prior to the visit, and telephone support on advice.

Design – Pragmatic, one-year, two-arm, multicentre RCT, with randomization upon patient consent by phone, using an allocation-concealed online program. Randomization was by patient with stratification by provider using a block size of six. Trained reviewers examined EMR data and conducted patient telephone interviews to collect risk factors, vascular history, and vascular events. Providers completed questionnaires on the intervention at study end. Patients had final 12-month lab checks on urine albumin, low-density lipoprotein cholesterol, and A1c levels.

Outcomes – The primary outcome was based on change in process composite score (PCS), computed as the sum of frequency-weighted process scores for each of the eight main risk factors, with a maximum score of 27. The process was considered met if a risk factor had been checked. PCS was measured at baseline and study end, with the difference as the individual primary outcome score. The main secondary outcome was a clinical composite score (CCS) based on the same eight risk factors, compared in two ways: a comparison of the mean number of clinical variables on target, and the percentage of patients with improvement between the two groups. Other secondary outcomes were actual vascular event rates, individual PCS and CCS components, ratings of usability, continuity of care, patient ability to manage vascular risk, and quality of life using the EuroQol five dimensions questionnaire (EQ-5D).

Analysis – 1,100 patients were needed to achieve 90% power in detecting a one-point PCS difference between groups, with a standard deviation of five points, a two-tailed t-test for mean difference at the 5% significance level, and a withdrawal rate of 10%. The PCS, CCS and EQ-5D scores were analyzed using a generalized estimating equation accounting for clustering within providers. Descriptive statistics and χ² tests or exact tests were used for other outcomes.

Findings – 1,102 patients and 49 providers enrolled in the study. The intervention group with 545 patients had significant PCS improvement, with a difference of 4.70 (p < .001) on a 27-point scale. The intervention group also had significantly higher odds of rating improvements in their continuity of care (4.178, p < .001) and ability to improve their vascular health (3.07, p < .001). There was no significant change in vascular events, clinical variables or quality of life. Overall, the CDS intervention led to reduced vascular risks but not to improved clinical outcomes in a one-year follow-up.

10.4.2. Non-randomized Experiment in Antibiotic Prescribing in Primary Care

Mainous, Lambourne, and Nietert (2013) conducted a prospective non-randomized trial to examine the impact of a CDS system on antibiotic prescribing for acute respiratory infections (ARIs) in primary care. The study is summarized below.

Setting – A primary care research network in the United States whose members use a common EMR and pool data quarterly for quality improvement and research studies.

Participants – An intervention group with nine practices across nine states, and a control group with 61 practices.

Intervention – A point-of-care CDS tool implemented as customizable progress note templates based on existing EMR features. CDS recommendations reflect Centers for Disease Control and Prevention (CDC) guidelines based on a patient's predominant presenting symptoms and age. The CDS was used to assist in ARI diagnosis, prompt antibiotic use, record diagnosis and treatment decisions, and access printable patient and provider education resources from the CDC.

Design – The intervention group received a multi-method intervention to facilitate provider CDS adoption that included quarterly audit and feedback, best practice dissemination meetings, academic detailing site visits, performance review and CDS training. The control group did not receive information on the intervention, the CDS or education. Baseline data collection lasted three months, with follow-up of 15 months after CDS implementation.

Outcomes – The outcomes were frequency of inappropriate prescribing during an ARI episode, broad-spectrum antibiotic use and diagnostic shift. Inappropriate prescribing was computed by dividing the number of ARI episodes with diagnoses in the inappropriate category that had an antibiotic prescription by the total number of ARI episodes with diagnoses for which antibiotics are inappropriate. Broad-spectrum antibiotic use was computed by dividing the number of ARI episodes with a broad-spectrum antibiotic prescription by the total number of ARI episodes with an antibiotic prescription. Antibiotic drift was computed in two ways: dividing the number of ARI episodes with diagnoses where antibiotics are appropriate by the total number of ARI episodes with an antibiotic prescription; and dividing the number of ARI episodes where antibiotics were inappropriate by the total number of ARI episodes. Process measures included frequency of CDS template use and whether the outcome measures differed by CDS usage.

Analysis – Outcomes were measured quarterly for each practice, weighted by the number of ARI episodes during the quarter to assign greater weight to practices and periods with greater numbers of relevant episodes. Weighted means and 95% CIs were computed separately for adult and pediatric (less than 18 years of age) patients for each time period for both groups. Baseline means in outcome measures were compared between the two groups using weighted independent-sample t-tests. Linear mixed models were used to compare changes over the 18-month period. The models included time and intervention status, and were adjusted for practice characteristics such as specialty, size, region and baseline ARIs. Random practice effects were included to account for clustering of repeated measures on practices over time. P-values of less than 0.05 were considered significant.

Findings – For adult patients, inappropriate prescribing in ARI episodes declined more in the intervention group (-0.6%) than in the control group (4.2%) (p = 0.03), and prescribing of broad-spectrum antibiotics declined by 16.6% in the intervention group versus an increase of 1.1% in the control group (p < 0.0001). For pediatric patients, there was a similar decline of 19.7% in the intervention group versus an increase of 0.9% in the control group (p < 0.0001). In summary, the CDS had a modest effect in reducing inappropriate prescribing for adults, but had a substantial effect in reducing the prescribing of broad-spectrum antibiotics in adult and pediatric patients.

10.4.3. Interrupted Time Series on EHR Impact in Nursing Care

Dowding, Turley, and Garrido (2012) conducted a prospective ITS study to examine the impact of EHR implementation on nursing care processes and outcomes. The study is summarized below.

Setting – Kaiser Permanente (KP), a large not-for-profit integrated healthcare organization in the United States.

Participants – 29 KP hospitals in the northern and southern regions of California.

Intervention – An integrated EHR system implemented at all hospitals with CPOE, nursing documentation and risk assessment tools. The nursing component for risk assessment documentation of pressure ulcers and falls was consistent across hospitals and was developed by clinical nurses and informaticists by consensus.

Design – ITS design with monthly data on pressure ulcers and quarterly data on fall rates and risk collected over seven years between 2003 and 2009. All data were collected at the unit level for each hospital.

Outcomes – Process measures were the proportion of patients with a fall risk assessment done and the proportion with a hospital-acquired pressure ulcer (HAPU) risk assessment done within 24 hours of admission. Outcome measures were fall and HAPU rates as part of the unit-level nursing care process and nursing-sensitive outcome data collected routinely for all California hospitals. Fall rate was defined as the number of unplanned descents to the floor per 1,000 patient days, and HAPU rate was the percentage of patients with a stage I–IV or unstageable ulcer on the day of data collection.

Analysis – Fall and HAPU risk data were synchronized using the month in which the EHR was implemented at each hospital as time zero and aggregated across hospitals for each time period. Multivariate regression analysis was used to examine the effect of time, region and EHR.

Findings – The EHR was associated with a significant increase in documentation rates for HAPU risk (2.21; 95% CI 0.67 to 3.75) and a non-significant increase for fall risk (0.36; -3.58 to 4.30). The EHR was associated with a 13% decrease in HAPU rates (-0.76; -1.37 to -0.16) but no change in fall rates (-0.091; -0.29 to 0.11). Hospital region was a significant predictor of variation for HAPU (0.72; 0.30 to 1.14) and fall rates (0.57; 0.41 to 0.72). During the study period, HAPU rates decreased significantly (-0.16; -0.20 to -0.13) but fall rates did not (0.0052; -0.01 to 0.02). In summary, EHR implementation was associated with a reduction in the number of HAPUs but not patient falls, and changes over time and hospital region also affected outcomes.

10.5. Summary

In this chapter we introduced randomized and non-randomized experimental designs as two types of comparative studies used in eHealth evaluation. Randomization provides the highest-quality design because it reduces bias, but it is not always feasible. The methodological issues addressed include choice of variables, sample size, sources of bias, confounders, and adherence to reporting guidelines. Three case examples were included to show how eHealth comparative studies are done.

References

  • Baker T. B., Gustafson D. H., Shaw B., Hawkins R., Pingree S., Roberts L., Strecher V. (2010). Relevance of CONSORT reporting criteria for research on eHealth interventions. Patient Education and Counseling, 81(suppl. 7), 77–86.
  • Boutron I., Moher D., Altman D. G., Schulz K. F., Ravaud P., CONSORT Group. (2008). Extending the CONSORT statement to randomized trials of nonpharmacologic treatment: Explanation and elaboration. Annals of Internal Medicine, 148(4), 295–309.
  • Cochrane Collaboration. (2011). Cochrane handbook for systematic reviews of interventions (Version 5.1.0, updated March 2011). Higgins J. P. T., Green S. (Eds.). London: Author. Retrieved from http://handbook.cochrane.org/
  • Columbia University. (n.d.). Statistics: sample size / power calculation. Biomath (Division of Biomathematics/Biostatistics), Department of Pediatrics. New York: Columbia University Medical Centre. Retrieved from http://www.biomath.info/power/index.htm
  • CONSORT Group. (n.d.). The CONSORT statement. Retrieved from http://www.consort-statement.org/
  • Dowding D. W., Turley M., Garrido T. (2012). The impact of an electronic health record on nurse sensitive patient outcomes: an interrupted time series analysis. Journal of the American Medical Informatics Association, 19(4), 615–620.
  • Friedman C. P., Wyatt J. C. (2006). Evaluation methods in biomedical informatics (2nd ed.). New York: Springer Science + Business Media.
  • Guyatt G., Oxman A. D., Akl E. A., Kunz R., Vist G., Brozek J., et al., Schunemann H. J. (2011). GRADE guidelines: 1. Introduction – GRADE evidence profiles and summary of findings tables. Journal of Clinical Epidemiology, 64(4), 383–394.
  • Harris A. D., McGregor J. C., Perencevich E. N., Furuno J. P., Zhu J., Peterson D. E., Finkelstein J. (2006). The use and interpretation of quasi-experimental studies in medical informatics. Journal of the American Medical Informatics Association, 13(1), 16–23.
  • Holbrook A., Pullenayegum E., Thabane L., Troyan S., Foster G., Keshavjee K., et al., Curnew G. (2011). Shared electronic vascular risk decision support in primary care: Computerization of medical practices for the enhancement of therapeutic effectiveness (COMPETE III) randomized trial. Archives of Internal Medicine, 171(19), 1736–1744.
  • Mainous A. G. III, Lambourne C. A., Nietert P. J. (2013). Impact of a clinical decision support system on antibiotic prescribing for acute respiratory infections in primary care: quasi-experimental trial. Journal of the American Medical Informatics Association, 20(2), 317–324.
  • Noordzij M., Tripepi G., Dekker F. W., Zoccali C., Tanck M. W., Jager K. J. (2010). Sample size calculations: basic principles and common pitfalls. Nephrology Dialysis Transplantation, 25(5), 1388–1393.
  • Vervloet M., Linn A. J., van Weert J. C. M., de Bakker D. H., Bouvy M. L., van Dijk L. (2012). The effectiveness of interventions using electronic reminders to improve adherence to chronic medication: A systematic review of the literature. Journal of the American Medical Informatics Association, 19(5), 696–704.
  • Zwarenstein M., Treweek S. (2009). What kind of randomized trials do we need? Canadian Medical Association Journal, 180(10), 998–1000.
  • Zwarenstein M., Treweek S., Gagnier J. J., Altman D. G., Tunis S., Haynes B., Oxman A. D., Moher D., for the CONSORT and Pragmatic Trials in Healthcare (Practihc) groups. (2008). Improving the reporting of pragmatic trials: an extension of the CONSORT statement. British Medical Journal, 337, a2390.

Appendix. Example of Sample Size Calculation

This is an example of a sample size calculation for an RCT that examines the effect of a CDS system on reducing systolic blood pressure in hypertensive patients. The case is adapted from the example described in the publication by Noordzij et al. (2010).

(a) Systolic blood pressure as a continuous outcome measured in mmHg

Based on similar studies in the literature with similar patients, the systolic blood pressure values of the comparison groups are expected to be normally distributed with a standard deviation of 20 mmHg. The evaluator wishes to detect a clinically relevant difference of 15 mmHg in systolic blood pressure between the intervention group with CDS and the control group without CDS. Assuming a significance level (alpha) of 0.05 for a two-tailed t-test and a power of 0.80, the corresponding multipliers are 1.96 and 0.842, respectively. Using the sample size equation for a continuous outcome below, we can calculate the sample size needed for the above study.

n = 2[(a + b)²σ²] / (μ₁ − μ₂)², where

n = sample size for each group

μ₁ = population mean of systolic blood pressures in the intervention group

μ₂ = population mean of systolic blood pressures in the control group

μ₁ − μ₂ = desired difference in mean systolic blood pressures between the groups

σ = population standard deviation

a = multiplier for significance level (or alpha)

b = multiplier for power (or 1-beta)

Substituting these values into the equation gives a sample size (n) of 28 per group:

n = 2[(1.96 + 0.842)² × 20²] / 15² ≈ 28 samples per group
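The arithmetic can be checked in a few lines of code. The sketch below derives the multipliers from alpha and power with scipy's normal quantile function and reproduces the calculation above; the function name is ours, not from Noordzij et al.

    from scipy.stats import norm

    def n_continuous(sd, diff, alpha=0.05, power=0.80):
        # Per-group sample size for comparing two means
        a = norm.ppf(1 - alpha / 2)  # 1.96 for alpha = 0.05, two-tailed
        b = norm.ppf(power)          # 0.842 for power = 0.80
        return 2 * (a + b) ** 2 * sd ** 2 / diff ** 2

    print(n_continuous(sd=20, diff=15))  # ~27.9, rounded up to 28 per group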

(b) Systolic blood pressure as a categorical outcome measured as below or above 140 mmHg (i.e., hypertension yes/no)

In this example, a systolic blood pressure above 140 mmHg is considered an event, i.e., the patient has hypertension. Based on published literature, the proportion of patients in the general population with hypertension is 30%. The evaluator wishes to detect a clinically relevant difference of 10% in the proportion of hypertensive patients between the intervention group with CDS and the control group without CDS. This means the expected proportion of patients with hypertension is 20% (p₁ = 0.2) in the intervention group and 30% (p₂ = 0.3) in the control group. Assuming a significance level (alpha) of 0.05 for a two-tailed test and a power of 0.80, the corresponding multipliers are 1.96 and 0.842, respectively. Using the sample size equation for a categorical outcome below, we can calculate the sample size needed for the above study.

n = [(a + b)²(p₁q₁ + p₂q₂)] / x², where

p₁ = proportion of patients with hypertension in the intervention group

q₁ = proportion of patients without hypertension in the intervention group (or 1 − p₁)

p₂ = proportion of patients with hypertension in the control group

q₂ = proportion of patients without hypertension in the control group (or 1 − p₂)

x = desired difference in the proportion of hypertensive patients between the two groups (p₁ − p₂)

Substituting these values into the equation gives a sample size (n) of 291 per group:

n = [(1.96 + 0.842)² × ((0.2)(0.8) + (0.3)(0.7))] / (0.1)² ≈ 291 samples per group

The multiplier values for alpha and power are from Table 3 on p. 1392 of Noordzij et al. (2010).
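The categorical calculation can be verified the same way. The helper below mirrors the formula above and rounds up to whole patients; again, the function name is ours.

    import math
    from scipy.stats import norm

    def n_proportions(p1, p2, alpha=0.05, power=0.80):
        # Per-group sample size for comparing two proportions
        a = norm.ppf(1 - alpha / 2)
        b = norm.ppf(power)
        q1, q2 = 1 - p1, 1 - p2
        return math.ceil((a + b) ** 2 * (p1 * q1 + p2 * q2) / (p1 - p2) ** 2)

    print(n_proportions(p1=0.2, p2=0.3))  # 291 per group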

This publication is licensed under a Creative Commons License, Attribution-Noncommercial 4.0 International License (CC BY-NC 4.0): see https://creativecommons.org/licenses/by-nc/4.0/

Cite this page: Lau F, Holbrook A. Chapter 10: Methods for Comparative Studies. In: Lau F, Kuziemsky C, editors. Handbook of eHealth Evaluation: An Evidence-based Approach [Internet]. Victoria (BC): University of Victoria; 2017 Feb 27.

Original Research Article: A Comparative Analysis of Student Performance in an Online vs. Face-to-Face Environmental Science Course From 2009 to 2016


  • Department of Biology, Fort Valley State University, Fort Valley, GA, United States

A growing number of students are now opting for online classes. They find the traditional classroom modality restrictive, inflexible, and impractical. In this age of technological advancement, schools can now provide effective classroom teaching via the Web. This shift in pedagogical medium is forcing academic institutions to rethink how they want to deliver their course content. The overarching purpose of this research was to determine which teaching method proved more effective over an 8-year period. The scores of 548 students, 401 traditional students and 147 online students, in an environmental science class were used to determine which instructional modality generated better student performance. In addition to the overarching objective, we also examined score variability between genders and class ranks to determine if teaching modality had a greater impact on specific groups. No significant differences in student performance between online and face-to-face (F2F) learners were found overall, with respect to gender, or with respect to class rank. These data demonstrate the ability to similarly translate environmental science concepts for non-STEM majors in both traditional and online platforms irrespective of gender or class rank. A potential exists for increasing the number of non-STEM majors engaged in citizen science using the flexibility of online learning to teach environmental science core concepts.

Introduction

The advent of online education has made it possible for students with busy lives and limited flexibility to obtain a quality education. As opposed to traditional classroom teaching, Web-based instruction has made it possible to offer classes worldwide through a single Internet connection. Although it boasts several advantages over traditional education, online instruction still has its drawbacks, including limited communal synergies. Still, online education seems to be the path many students are taking to secure a degree.

This study compared the effectiveness of online vs. traditional instruction in an environmental science class. Using a single indicator, we attempted to see if student performance was affected by instructional medium. This study sought to compare online and F2F teaching on three levels: pure modality, gender, and class rank. Through these comparisons, we investigated whether one teaching modality was significantly more effective than the other. Although there were limitations to the study, this examination was conducted to provide us with additional measures to determine if students performed better in one environment over another (Mozes-Carmel and Gold, 2009).

The methods, procedures, and operationalization tools used in this assessment can be expanded upon in future quantitative, qualitative, and mixed method designs to further analyze this topic. Moreover, the results of this study serve as a backbone for future meta-analytical studies.

Origins of Online Education

Computer-assisted instruction is changing the pedagogical landscape as an increasing number of students are seeking online education. Colleges and universities are now touting the efficiencies of Web-based education and are rapidly implementing online classes to meet student needs worldwide. One study reported “increases in the number of online courses given by universities have been quite dramatic over the last couple of years” ( Lundberg et al., 2008 ). Think tanks are also disseminating statistics on Web-based instruction. “In 2010, the Sloan Consortium found a 17% increase in online students from the years before, beating the 12% increase from the previous year” ( Keramidas, 2012 ).

Contrary to popular belief, online education is not a new phenomenon. The first correspondence and distance learning educational programs were initiated in the mid-1800s by the University of London. This model of educational learning depended on the postal service and therefore was not seen in America until the late nineteenth century. In 1873, what is considered the first official correspondence educational program in the United States, the "Society to Encourage Studies at Home," was established in Boston, Massachusetts. Since then, non-traditional study has grown into what is today considered a viable alternative: the online instructional modality. Technological advancement has indubitably improved the speed and accessibility of distance learning courses; now students worldwide can attend classes from the comfort of their own homes.

Qualities of Online and Traditional Face to Face (F2F) Classroom Education

Online and traditional education share many qualities. Students are still required to attend class, learn the material, submit assignments, and complete group projects, while teachers still have to design curricula, maximize instructional quality, answer class questions, motivate students to learn, and grade assignments. Despite these basic similarities, there are many differences between the two modalities. Traditionally, classroom instruction is known to be teacher-centered and to require passive learning by the student, while online instruction is often student-centered and requires active learning.

In teacher-centered, or passive learning, the instructor usually controls classroom dynamics. The teacher lectures and comments, while students listen, take notes, and ask questions. In student-centered, or active learning, the students usually determine classroom dynamics as they independently analyze the information, construct questions, and ask the instructor for clarification. In this scenario, the teacher, not the student, is listening, formulating, and responding ( Salcedo, 2010 ).

In education, change comes with questions. Despite all current reports championing online education, researchers are still questioning its efficacy. Research is still being conducted on the effectiveness of computer-assisted teaching. Cost-benefit analysis, student experience, and student performance are now being carefully considered when determining whether online education is a viable substitute for classroom teaching. This decision process will most probably carry into the future as technology improves and as students demand better learning experiences.

Thus far, "literature on the efficacy of online courses is expansive and divided" (Driscoll et al., 2012). Some studies favor traditional classroom instruction, stating "online learners will quit more easily" and "online learning can lack feedback for both students and instructors" (Atchley et al., 2013). Because of these shortcomings, student retention, satisfaction, and performance can be compromised. Like traditional teaching, distance learning also has its apologists, who aver that online education produces students who perform as well as or better than their traditional classroom counterparts (Westhuis et al., 2006).

The advantages and disadvantages of both instructional modalities need to be fully fleshed out and examined to truly determine which medium generates better student performance. Both modalities have been proven to be relatively effective, but, as mentioned earlier, the question to be asked is if one is truly better than the other.

Student Need for Online Education

With technological advancement, learners now want quality programs they can access from anywhere and at any time. Because of these demands, online education has become a viable, alluring option for business professionals, stay-at-home parents, and other similar populations. In addition to flexibility and access, multiple other face-value benefits, including program choice and time efficiency, have increased the attractiveness of distance learning (Wladis et al., 2015).

First, prospective students want to be able to receive a quality education without having to sacrifice work time, family time, and travel expense. Instead of having to be at a specific location at a specific time, online educational students have the freedom to communicate with instructors, address classmates, study materials, and complete assignments from any Internet-accessible point ( Richardson and Swan, 2003 ). This type of flexibility grants students much-needed mobility and, in turn, helps make the educational process more enticing. According to Lundberg et al. (2008) “the student may prefer to take an online course or a complete online-based degree program as online courses offer more flexible study hours; for example, a student who has a job could attend the virtual class watching instructional film and streaming videos of lectures after working hours.”

Moreover, more study time can lead to better class performance—more chapters read, better quality papers, and more group project time. Studies on the relationship between study time and performance are limited; however, it is often assumed the online student will use any surplus time to improve grades ( Bigelow, 2009 ). It is crucial to mention the link between flexibility and student performance as grades are the lone performance indicator of this research.

Second, online education also offers more program choices. With traditional classroom study, students are forced to take courses only at universities within feasible driving distance or move. Web-based instruction, on the other hand, grants students electronic access to multiple universities and course offerings ( Salcedo, 2010 ). Therefore, students who were once limited to a few colleges within their immediate area can now access several colleges worldwide from a single convenient location.

Third, with online teaching, students who usually don't participate in class may now voice their opinions and concerns. As they are not in a classroom setting, quieter students may feel more comfortable partaking in class dialogue without being recognized or judged. This, in turn, may increase average class scores ( Driscoll et al., 2012 ).

Benefits of Face-to-Face (F2F) Education via Traditional Classroom Instruction

The other modality, classroom teaching, is a well-established instructional medium in which teaching style and structure have been refined over several centuries. Face-to-face instruction has numerous benefits not found in its online counterpart ( Xu and Jaggars, 2016 ).

First and, perhaps, most importantly, classroom instruction is extremely dynamic. Traditional classroom teaching provides real-time face-to-face instruction and sparks innovative questions. It also allows for immediate teacher response and more flexible content delivery. Online instruction dampens the learning process because students must limit their questions to blurbs, then grant the teacher and fellow classmates time to respond (Salcedo, 2010). Over time, however, online teaching will probably improve, enhancing classroom dynamics and bringing students face-to-face with their peers and instructors. For now, though, face-to-face instruction provides dynamic learning attributes not found in Web-based teaching (Kemp and Grieve, 2014).

Second, traditional classroom learning is a well-established modality. Some students are opposed to change and view online instruction negatively. These students may be technophobes, more comfortable sitting in a classroom taking notes than sitting at a computer absorbing data. Other students may value face-to-face interaction, pre- and post-class discussions, communal learning, and organic student-teacher bonding (Roval and Jordan, 2004). They may see the Internet as an impediment to learning. If not comfortable with the instructional medium, some students may shun classroom activities; their grades might slip and their educational interest might vanish. Students, however, may eventually adapt to online education. With more universities employing computer-based training, students may be forced to take only Web-based courses. Even so, this does not eliminate the fact that some students prefer classroom intimacy.

Third, face-to-face instruction doesn't rely upon networked systems. In online learning, the student is dependent upon access to an unimpeded Internet connection. If technical problems occur, online students may not be able to communicate, submit assignments, or access study material. This problem, in turn, may frustrate the student, hinder performance, and discourage learning.

Fourth, campus education provides students with both accredited staff and research libraries. Students can rely upon administrators to aid in course selection and provide professorial recommendations. Library technicians can help learners edit their papers, locate valuable study material, and improve study habits. Research libraries may provide materials not accessible by computer. In all, the traditional classroom experience gives students important auxiliary tools to maximize classroom performance.

Fifth, traditional classroom degrees trump online educational degrees in terms of hiring preferences. Many academic and professional organizations do not consider online degrees on par with campus-based degrees ( Columbaro and Monaghan, 2009 ). Often, prospective hiring bodies think Web-based education is a watered-down, simpler means of attaining a degree, often citing poor curriculums, unsupervised exams, and lenient homework assignments as detriments to the learning process.

Finally, research shows online students are more likely to quit class if they do not like the instructor, the format, or the feedback. Because they work independently, relying almost wholly upon self-motivation and self-direction, online learners may be more inclined to withdraw from class if they do not get immediate results.

The classroom setting provides more motivation, encouragement, and direction. Even if a student wanted to quit during the first few weeks of class, he or she may be deterred by the instructor and fellow students. F2F instructors may be able to adjust the structure and teaching style of the class to improve student retention (Kemp and Grieve, 2014). With online teaching, instructors are limited to electronic correspondence and may not pick up on verbal and non-verbal cues.

Both F2F and online teaching have their pros and cons. More studies comparing the two modalities to achieve specific learning outcomes in participating learner populations are required before well-informed decisions can be made. This study examined the two modalities over eight (8) years on three different levels. Based on the aforementioned information, the following research questions resulted.

RQ1: Are there significant differences in academic performance between online and F2F students enrolled in an environmental science course?

RQ2: Are there gender differences between online and F2F student performance in an environmental science course?

RQ3: Are there significant differences between the performance of online and F2F students in an environmental science course with respect to class rank?

The results of this study are intended to edify teachers, administrators, and policymakers on which medium may work best.

Methodology

Participants

The study sample consisted of 548 FVSU students who completed the Environmental Science class between 2009 and 2016. The final course grades of the participants served as the primary comparative factor in assessing performance differences between online and F2F instruction. Of the 548 total participants, 147 were online students while 401 were traditional students. This disparity was considered a limitation of the study. Of the 548 total students, 246 were male, while 302 were female. The study also used students from all four class ranks. There were 187 freshmen, 184 sophomores, 76 juniors, and 101 seniors. This was a convenience, non-probability sample so the composition of the study set was left to the discretion of the instructor. No special preferences or weights were given to students based upon gender or rank. Each student was considered a single, discrete entity or statistic.

All sections of the course were taught by a full-time biology professor at FVSU. The professor had over 10 years of teaching experience in both classroom and online modalities, and was considered an outstanding tenured instructor with strong communication and management skills.

The F2F class met twice weekly in an on-campus classroom. Each class lasted 1 h and 15 min. The online class covered the same material as the F2F class, but was conducted wholly online using the Desire to Learn (D2L) e-learning system. Online students were expected to spend as much time studying as their F2F counterparts; however, no tracking measure was implemented to gauge e-learning study time. The professor combined textbook learning, lecture and class discussion, collaborative projects, and assessment tasks to engage students in the learning process.

This study did not differentiate between part-time and full-time students. Therefore, many part-time students may have been included in this study. This study also did not differentiate between students registered primarily at FVSU or at another institution. Therefore, many students included in this study may have used FVSU as an auxiliary institution to complete their environmental science class requirement.

Test Instruments

In this study, student performance was operationalized by final course grades. The final course grade was derived from test, homework, class participation, and research project scores. The four aforementioned assessments were valid and relevant; they were useful in gauging student ability and generating objective performance measurements. The final grades were converted from numerical scores to traditional GPA letters.

Data Collection Procedures

The sample of 548 student grades was obtained from FVSU's Office of Institutional Research Planning and Effectiveness (OIRPE). The OIRPE released the grades to the instructor with the expectation that the instructor would maintain confidentiality and not disclose said information to third parties. After the data were obtained, the instructor analyzed and processed the data through SPSS software to calculate specific values. These converted values were subsequently used to draw conclusions and test the hypotheses.

Summary of the Results: The chi-square analysis showed no significant difference in student performance between online and face-to-face (F2F) learners [χ²(4, N = 548) = 6.531, p > 0.05]. The independent-sample t-test showed no significant difference in student performance between online and F2F learners with respect to gender [t(145) = 1.42, p = 0.122]. The two-way ANOVA showed no significant difference in student performance between online and F2F learners with respect to class rank (Girard et al., 2016).


Research Question 1

The first research question investigated if there was a difference in student performance between F2F and online learners.

To investigate the first research question, we used a traditional chi-square method to analyze the data. The chi-square analysis is particularly useful for this type of comparison because it allows us to determine if the relationship between teaching modality and performance in our sample set can be extended to the larger population. The chi-square method provides us with a numerical result which can be used to determine if there is a statistically significant difference between the two groups.

Table 1 shows the mean and SD for modality and for gender. It is a general breakdown of the numbers to visually elucidate any differences between scores and deviations. The mean final course score for both modalities is similar, with F2F learners scoring 69.35 and online learners scoring 68.64. Both groups had fairly similar SDs. A stronger difference can be seen between the GPAs earned by men and women. Men had a 3.23 mean GPA while women had a 2.9 mean GPA; the SDs for both groups were almost identical. Even though the 0.33 numerical difference may look fairly insignificant, it must be noted that a 3.23 is approximately a B+ while a 2.9 is approximately a B. Given a categorical range of only A to F, a plus differential can be considered significant.


Table 1. Means and standard deviations for the 8-semester "Environmental Science" data set.

The mean grade for men in the environmental science classes (M = 3.23, N = 246, SD = 1.19) was higher than the mean grade for women in the classes (M = 2.9, N = 302, SD = 1.20) (see Table 1).

First, a chi-square analysis was performed using SPSS to determine if there was a statistically significant difference in grade distribution between online and F2F students. Students enrolled in the F2F class earned the highest percentage of A's (63.60% of all A's) as compared to online students (36.40%). Table 2 displays grade distribution by course delivery modality. The difference in student performance was not statistically significant, χ²(4, N = 548) = 6.531, p > 0.05. Table 3 shows the gender difference in student performance between online and F2F students.


Table 2. Contingency table for students' academic performance (N = 548).


Table 3. Gender × performance crosstabulation.

Table 2 shows us the performance measures of online and F2F students by grade category. As can be seen, F2F students generated the highest performance numbers for each grade category. However, this disparity was mostly due to a higher number of F2F students in the study. There were 401 F2F students as opposed to just 147 online students. When viewing grades with respect to modality, there are smaller percentage differences between respective learners ( Tanyel and Griffin, 2014 ). For example, F2F learners earned 28 As (63.60% of total A's earned) while online learners earned 16 As (36.40% of total A's earned). However, when viewing the A grade with respect to total learners in each modality, it can be seen that 28 of the 401 F2F students (6.9%) earned As as compared to 16 of 147 (10.9%) online learners. In this case, online learners scored relatively higher in this grade category. The latter measure (grade total as a percent of modality total) is a better reflection of respective performance levels.

With 4 degrees of freedom, the critical chi-square value at the 0.05 significance level is 9.49, and our computed chi-square measure was 6.531. The corresponding p-value of 0.163 was greater than our significance level of 0.05. We therefore failed to reject the null hypothesis: there is no statistically significant difference between the two groups in terms of performance scores.
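For readers who want to see the mechanics of such a test, the sketch below runs it with scipy. Only the A-grade counts (28 F2F, 16 online) come from the text; the remaining cells are placeholders chosen to sum to the reported group sizes (401 and 147), so the printed statistics are illustrative rather than the study's.

    import numpy as np
    from scipy.stats import chi2, chi2_contingency

    # Rows: F2F, online; columns: grade categories.
    # Only the first (A) column is from the article; the rest are placeholders.
    table = np.array([
        [28, 120, 140, 70, 43],   # F2F, sums to 401
        [16,  40,  50, 25, 16],   # online, sums to 147
    ])
    stat, p, dof, expected = chi2_contingency(table)
    print(stat, p, dof)          # compare p to alpha = 0.05
    print(chi2.ppf(0.95, df=4))  # critical value, about 9.49 at alpha = 0.05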

Research Question 2

The second research question was posed to evaluate whether the difference between online and F2F performance varied with gender: does online and F2F student performance vary with respect to gender? Table 3 shows the gender difference in student performance between online and F2F students. We used a chi-square test, with alpha equal to 0.05 as the criterion for significance, to determine if there were differences in online and F2F student performance with respect to gender. The chi-square result shows that there is no statistically significant difference between men and women in terms of performance.

Research Question 3

The third research question sought to determine whether the difference between online and F2F performance varied with class rank: does online and F2F student performance vary with respect to class rank?

Table 4 shows the mean scores and standard deviations of freshman, sophomore, junior, and senior students for both online and F2F modalities. To test the third hypothesis, we used a two-way ANOVA. The ANOVA is a useful appraisal tool for this particular hypothesis as it tests the differences between multiple means; instead of testing specific differences, the ANOVA generates a much broader picture of average differences. As can be seen in Table 4, the ANOVA test for this hypothesis shows there is no significant difference between online and F2F learners with respect to class rank. We therefore failed to reject the null hypothesis.


Table 4. Descriptive analysis of student performance by class ranking and gender.

The results of the ANOVA show there is no significant difference in performance between online and F2F students with respect to class rank. The results of the ANOVA are presented in Table 5.


Table 5. Analysis of variance (ANOVA) for online and F2F performance by class ranking.
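Given student-level data, the two-way ANOVA itself can be reproduced with statsmodels. The sketch below assumes hypothetical column names (score, modality, rank) standing in for the study's SPSS variables; it tests both main effects and their interaction.

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    df = pd.read_csv("grades.csv")  # one row per student (hypothetical file)

    # Two-way ANOVA: modality, class rank, and their interaction
    fit = smf.ols("score ~ C(modality) * C(rank)", data=df).fit()
    print(sm.stats.anova_lm(fit, typ=2))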


Discussion and Social Implications

The results of the study show there is no significant difference in performance between online and traditional classroom students with respect to modality, gender, or class rank in a science concepts course for non-STEM majors. Although there were sample size issues and study limitations, this assessment shows that online learners and classroom learners perform at the same level. This conclusion indicates teaching modality may not matter as much as other factors. Given the relatively sparse data on pedagogical modality comparisons for specific student population characteristics, this study could be considered innovative. In the current literature, we have not found a study of this nature comparing online and F2F non-STEM majors with respect to three separate factors (medium, gender, and class rank) and the ability to learn science concepts and achieve learning outcomes. Previous studies have compared online vs. F2F learning on other factors (including specific courses, costs, and qualitative analyses), but rarely regarding outcomes relevant to the population characteristics of learning in a specific science concepts course over many years (Liu, 2005).

In a study evaluating the transformation of a graduate level course for teachers, academic quality of the online course and learning outcomes were evaluated. The study evaluated the ability of course instructors to design the course for online delivery and develop various interactive multimedia models at a cost-savings to the respective university. The online learning platform proved effective in translating information where tested students successfully achieved learning outcomes comparable to students taking the F2F course ( Herman and Banister, 2007 ).

Another study evaluated the similarities and differences in F2F and online learning in a non-STEM course, "Foundations of American Education," and overall course satisfaction among students enrolled in either of the two modalities. F2F and online course satisfaction was qualitatively and quantitatively analyzed. When quantitative feedback was analyzed, online course satisfaction was lower than F2F satisfaction; when qualitative data were used, course satisfaction was similar between modalities (Werhner, 2010). The course satisfaction data and feedback were used to suggest a number of posits for effective online learning in the specific course. The researcher concluded that there was no difference in the learning success of students enrolled in the online vs. F2F course, stating that "in terms of learning, students who apply themselves diligently should be successful in either format" (Dell et al., 2010). The author's conclusion presumes that the "issues surrounding class size are under control and that the instructor has a course load that makes the intensity of the online course workload feasible," where the authors conclude that the workload for online courses is greater than for F2F courses (Stern, 2004).

In “A Meta-Analysis of Three Types of Interaction Treatments in Distance Education,” Bernard et al. (2009) conducted a meta-analysis evaluating three types of instructional and/or media conditions designed into distance education (DE) courses known as interaction treatments (ITs)—student–student (SS), student–teacher (ST), or student–content (SC) interactions—to other DE instructional/interaction treatments. The researchers found that a strong association existed between the integration of these ITs into distance education courses and achievement compared with blended or F2F modalities of learning. The authors speculated that this was due to increased cognitive engagement based in these three interaction treatments ( Larson and Sung, 2009 ).

Other studies evaluating students' preferences (but not efficacy) for online vs. F2F learning found that students prefer online learning when it is offered, depending on course topic and online course technology platform (Ary and Brune, 2011). F2F learning was preferred when courses were offered late morning or early afternoon 2–3 days/week. A significant preference for online learning resulted across all undergraduate course topics (American history and government, humanities, natural sciences, social and behavioral sciences, diversity, and international dimension) except English composition and oral communication. A preference for analytical and quantitative thought courses was also expressed by students, though not with statistically significant results (Mann and Henneberry, 2014). In this research study, we looked at three hypotheses comparing online and F2F learning. In each case, the null hypothesis was not rejected; at no level of examination did we find a significant difference between online and F2F learners. This finding is important because it tells us traditional-style teaching, with its heavy emphasis on interpersonal classroom dynamics, may one day be replaced by online instruction. According to Daymont and Blau (2008), online learners, regardless of gender or class rank, learn as much from electronic interaction as they do from personal interaction. Kemp and Grieve (2014) also found that both online and F2F learning for psychology students led to similar academic performance. Given the cost efficiencies and flexibility of online education, Web-based instructional systems may rapidly rise.

A number of studies support the economic benefits of online vs. F2F learning, despite differences in social constructs and in the educational support provided by governments. In a study by Li and Chen (2012), higher education institutions benefited most from two of four outputs (research outputs and distance education), with teaching via distance education at both the undergraduate and graduate levels more profitable than F2F teaching at higher education institutions in China. Zhang and Worthington (2017) reported an increasing cost benefit for the use of distance education over F2F instruction at 37 Australian public universities over the 9 years from 2003 to 2012. Maloney et al. (2015) and Kemp and Grieve (2014) also found significant savings in higher education when using online learning platforms vs. F2F learning. In the West, the cost efficiency of online learning has been demonstrated by several research studies (Craig, 2015). Studies by Agasisti and Johnes (2015) and Bartley and Golek (2004) both found the cost benefits of online learning significantly greater than those of F2F learning at U.S. institutions.

Knowing there is no significant difference in student performance between the two mediums, institutions of higher education may make the gradual shift away from traditional instruction; they may implement Web-based teaching to capture a larger worldwide audience. If administered correctly, this shift to Web-based teaching could lead to a larger buyer population, more cost efficiencies, and more university revenue.

The social implications of this study should be touted; however, several concerns regarding generalizability need to be taken into account. First, this study focused solely on students from an environmental science class for non-STEM majors. The ability to effectively prepare students for scientific professions without hands-on experimentation has been contested. As a course that functions to communicate scientific concepts but does not require a laboratory-based component, these results may not translate into similar performance for students in an online STEM course for STEM majors, or in an online course with an online laboratory-based co-requisite, when compared to students taking traditional STEM courses for STEM majors. A few studies suggest the landscape may be changing with the ability to effectively train students in STEM core concepts via online learning. Biel and Brame (2016) reported successfully translating the academic success of F2F undergraduate biology courses to online biology courses. However, the researchers reported that of the large-scale courses analyzed, two showed F2F sections outperforming online sections, and three found no significant difference. A study by Beale et al. (2014) comparing F2F learning with hybrid learning in an embryology course found no difference in overall student performance; additionally, the bottom quartile of students showed no differential effect of the delivery method on examination scores. Further, a study by Lorenzo-Alvarez et al. (2019) found that radiology education in an online learning platform resulted in similar academic outcomes to F2F learning. Larger-scale research is needed to determine the effectiveness of STEM online learning and outcomes assessments, including workforce development results.

In our research study, it is possible the study participants may have been more knowledgeable about environmental science than about other subjects. Therefore, it should be noted this study focused solely on students taking this one particular class. Given the results, this course presents a unique potential for increasing the number of non-STEM majors engaged in citizen science using the flexibility of online learning to teach environmental science core concepts.

Second, the operationalization measure of "grade" or "score" to determine performance level may be lacking in scope and depth. The grades received in a class may not necessarily show actual ability, especially if the weights were adjusted to heavily favor group tasks and writing projects. Other performance indicators may be better suited to properly assess student performance. A single exam containing both multiple-choice and essay questions may be a better operationalization indicator of student performance, as it would provide both a quantitative and a qualitative measure of subject matter comprehension.

Third, the nature of the student sample must be further dissected. It is possible the online students in this study may have had more time than their counterparts to learn the material and generate better grades ( Summers et al., 2005 ). The inverse holds true, as well. Because this was a convenience non-probability sampling, the chances of actually getting a fair cross section of the student population were limited. In future studies, greater emphasis must be placed on selecting proper study participants, those who truly reflect proportions, types, and skill levels.

This study was relevant because it addressed an important educational topic; it compared two student groups on multiple levels using a single operationalized performance measure. More studies of this nature, however, need to be conducted before truly positing that online and F2F teaching generate the same results. Future studies need to eliminate spurious causal relationships and increase generalizability; this will maximize the chances of generating definitive, untainted results. This scientific inquiry into and comparison of online and traditional teaching will undoubtedly garner more attention in the coming years.

Our study compared learning via F2F vs. online learning modalities in teaching an environmental science course additionally evaluating factors of gender and class rank. These data demonstrate the ability to similarly translate environmental science concepts for non-STEM majors in both traditional and online platforms irrespective of gender or class rank. The social implications of this finding are important for advancing access to and learning of scientific concepts by the general population, as many institutions of higher education allow an online course to be taken without enrolling in a degree program. Thus, the potential exists for increasing the number of non-STEM majors engaged in citizen science using the flexibility of online learning to teach environmental science core concepts.

Limitations of the Study

The limitations of the study centered around the nature of the sample group, student skills/abilities, and student familiarity with online instruction. First, because this was a convenience, non-probability sample, the independent variables were not adjusted for real-world accuracy. Second, student intelligence and skill level were not taken into consideration when separating out comparison groups. There exists the possibility that the F2F learners in this study may have been more capable than the online students and vice versa. This limitation also applies to gender and class rank differences ( Friday et al., 2006 ). Finally, there may have been ease of familiarity issues between the two sets of learners. Experienced traditional classroom students now taking Web-based courses may be daunted by the technical aspect of the modality. They may not have had the necessary preparation or experience to efficiently e-learn, thus leading to lowered scores ( Helms, 2014 ). In addition to comparing online and F2F instructional efficacy, future research should also analyze blended teaching methods for the effectiveness of courses for non-STEM majors to impart basic STEM concepts and see if the blended style is more effective than any one pure style.

Data Availability Statement

The datasets generated for this study are available on request to the corresponding author.

Ethics Statement

The studies involving human participants were reviewed and approved by Fort Valley State University Human Subjects Institutional Review Board. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.

Author Contributions

JP provided substantial contributions to the conception of the work, acquisition and analysis of data for the work, and is the corresponding author on this paper who agrees to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. FJ provided substantial contributions to the design of the work, interpretation of the data for the work, and revised it critically for intellectual content.

Funding

This research was supported in part by funding from the National Science Foundation, Awards #1649717, #1842510, #1900572, and #1939739 to FJ.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

The authors would like to thank the reviewers for their detailed comments and feedback that assisted in the revising of our original manuscript.

References

Agasisti, T., and Johnes, G. (2015). Efficiency, costs, rankings and heterogeneity: the case of US higher education. Stud. High. Educ. 40, 60–82. doi: 10.1080/03075079.2013.818644


Ary, E. J., and Brune, C. W. (2011). A comparison of student learning outcomes in traditional and online personal finance courses. MERLOT J. Online Learn. Teach. 7, 465–474.


Atchley, W., Wingenbach, G., and Akers, C. (2013). Comparison of course completion and student performance through online and traditional courses. Int. Rev. Res. Open Dist. Learn. 14, 104–116. doi: 10.19173/irrodl.v14i4.1461

Bartley, S. J., and Golek, J. H. (2004). Evaluating the cost effectiveness of online and face-to-face instruction. Educ. Technol. Soc. 7, 167–175.

Beale, E. G., Tarwater, P. M., and Lee, V. H. (2014). A retrospective look at replacing face-to-face embryology instruction with online lectures in a human anatomy course. Am. Assoc. Anat. 7, 234–241. doi: 10.1002/ase.1396


Bernard, R. M., Abrami, P. C., Borokhovski, E., Wade, C. A., Tamim, R. M., Surkesh, M. A., et al. (2009). A meta-analysis of three types of interaction treatments in distance education. Rev. Educ. Res. 79, 1243–1289. doi: 10.3102/0034654309333844

Biel, R., and Brame, C. J. (2016). Traditional versus online biology courses: connecting course design and student learning in an online setting. J. Microbiol. Biol. Educ. 17, 417–422. doi: 10.1128/jmbe.v17i3.1157

Bigelow, C. A. (2009). Comparing student performance in an online versus a face to face introductory turfgrass science course-a case study. NACTA J. 53, 1–7.

Columbaro, N. L., and Monaghan, C. H. (2009). Employer perceptions of online degrees: a literature review. Online J. Dist. Learn. Administr. 12.

Craig, R. (2015). A Brief History (and Future) of Online Degrees. Forbes/Education . Available online at: https://www.forbes.com/sites/ryancraig/2015/06/23/a-brief-history-and-future-of-online-degrees/#e41a4448d9a8

Daymont, T., and Blau, G. (2008). Student performance in online and traditional sections of an undergraduate management course. J. Behav. Appl. Manag. 9, 275–294.

Dell, C. A., Low, C., and Wilker, J. F. (2010). Comparing student achievement in online and face-to-face class formats. J. Online Learn. Teach. Long Beach 6, 30–42.

Driscoll, A., Jicha, K., Hunt, A. N., Tichavsky, L., and Thompson, G. (2012). Can online courses deliver in-class results? A comparison of student performance and satisfaction in an online versus a face-to-face introductory sociology course. Am. Sociol. Assoc . 40, 312–313. doi: 10.1177/0092055X12446624

Friday, E., Shawnta, S., Green, A. L., and Hill, A. Y. (2006). A multi-semester comparison of student performance between multiple traditional and online sections of two management courses. J. Behav. Appl. Manag. 8, 66–81.

Girard, J. P., Yerby, J., and Floyd, K. (2016). Knowledge retention in capstone experiences: an analysis of online and face-to-face courses. Knowl. Manag. ELearn. 8, 528–539. doi: 10.34105/j.kmel.2016.08.033

Helms, J. L. (2014). Comparing student performance in online and face-to-face delivery modalities. J. Asynchr. Learn. Netw. 18, 1–14. doi: 10.24059/olj.v18i1.348

Herman, T., and Banister, S. (2007). Face-to-face versus online coursework: a comparison of costs and learning outcomes. Contemp. Issues Technol. Teach. Educ. 7, 318–326.

Kemp, N., and Grieve, R. (2014). Face-to-Face or face-to-screen? Undergraduates' opinions and test performance in classroom vs. online learning. Front. Psychol. 5:1278. doi: 10.3389/fpsyg.2014.01278

Keramidas, C. G. (2012). Are undergraduate students ready for online learning? A comparison of online and face-to-face sections of a course. Rural Special Educ. Q . 31, 25–39. doi: 10.1177/875687051203100405

Larson, D.K., and Sung, C. (2009). Comparing student performance: online versus blended versus face-to-face. J. Asynchr. Learn. Netw. 13, 31–42. doi: 10.24059/olj.v13i1.1675

Li, F., and Chen, X. (2012). Economies of scope in distance education: the case of Chinese Research Universities. Int. Rev. Res. Open Distrib. Learn. 13, 117–131.

Keywords: face-to-face (F2F), traditional classroom teaching, web-based instructions, information and communication technology (ICT), online learning, desire to learn (D2L), passive learning, active learning

Citation: Paul J and Jefferson F (2019) A Comparative Analysis of Student Performance in an Online vs. Face-to-Face Environmental Science Course From 2009 to 2016. Front. Comput. Sci. 1:7. doi: 10.3389/fcomp.2019.00007

Received: 15 May 2019; Accepted: 15 October 2019; Published: 12 November 2019.

Copyright © 2019 Paul and Jefferson. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Jasmine Paul, paulj@fvsu.edu


Evaluation of heroin-assisted treatment in Norway: protocol for a mixed methods study

  • Lars Henrik Myklebust 1,
  • Desiree Eide 1,
  • Espen A. Arnevik 4,
  • Omid Dadras 2,
  • Silvana De Pirro 1,6,
  • Rune Ellefsen 4,
  • Lars T. Fadnes 2,3,
  • Morten Hesse 5,
  • Timo L. Kvamme 5,
  • Francesca Melis 1,
  • Ann Oldervoll 1,
  • Birgitte Thylstrup 5,
  • Linda E.C. Wusthoff 1,4 &
  • Thomas Clausen 1

BMC Health Services Research, volume 24, Article number: 398 (2024)


Opioid agonist treatment (OAT) for patients with opioid use disorder (OUD) has a convincing evidence base, although variable retention rates suggest that it may not be beneficial for all. One option for including more patients is the introduction of heroin-assisted treatment (HAT), which involves prescribing pharmaceutical heroin in a clinically supervised setting. Clinical trials suggest that HAT positively affects illicit drug use, criminal behavior, quality of life, and health. The results are less clear for longer-term outcomes such as mortality, level of function and social integration. This protocol describes a longitudinal evaluation of the introduction of HAT into the OAT services in Norway over a 5-year period. The main aim of the project is to study the individual, organizational and societal effects of implementing HAT in the specialized healthcare services for OUD.

The project adopts a multidisciplinary approach, where the primary cohort for analysis will consist of approximately 250 patients in Norway, observed during the period 2022–2026. Cohorts for comparative analysis will include all HAT patients in Denmark from 2010 to 2022 (N = 500) and all Norwegian patients in conventional OAT (N = 8300). Data come from individual in-depth and semi-structured interviews, self-report questionnaires, clinical records, and national registries, collected at several time points throughout patients’ courses of treatment. Qualitative analyses will use a flexible inductive thematic approach. Quantitative analyses will employ a wide array of methods including bivariate parametric and non-parametric tests, and various forms of multivariate modeling.

The project’s primary strength lies in its comprehensive and longitudinal approach. It has the potential to reveal new insights into whether pharmaceutical heroin should be an integral part of conventional OAT services, allowing treatments to be individually tailored for patients with OUD. This could affect considerations about drug treatment even beyond HAT-specific topics, where an expanded understanding of why some do not succeed with conventional OAT will strengthen the knowledge base for drug treatment in general. Results will be disseminated to the scientific community, clinicians, and policy makers.

Trial registration

The study was approved by the Norwegian Regional Committee for Medical and Health Research Ethics (REK), ref.nr.:195733.

Peer Review reports

Opioid use disorder (OUD) is a major global health concern, with an estimated caseload of 31.5 million in 2022 [ 1 ]. It is frequently related to infectious diseases from injection-based drug use, psychiatric disorders, deterioration of social relations, reduced workforce participation, and a tenfold increase in the crude all-cause mortality rate [ 2 ]. The treatment and care of patients with OUD have gradually developed from an initial emphasis on abstinence and withdrawal management to regular prescription of opioid agonists for maintenance treatment (OAT) [ 3 ].

Half a century after the first initiatives to prescribe methadone for OUD on a regular basis [ 4 , 5 ], OAT now has a strong evidence base [ 6 ]. Overall, it contributes to a substantial reduction in mortality, general health benefits, and reduced use of illicit drugs and criminal activity [ 6 , 7 , 8 , 9 ]. Still, not all individuals find conventional OAT sufficiently attractive over time, and cycles of dropout and re-entry are ongoing challenges in these programs [ 10 , 11 , 12 ]. Variable retention rates of 20–84% have been observed [ 13 ]. Among the efforts to improve the inclusion of patients in OAT is the introduction of more diverse medication options, such as rapid-onset, short-acting injectable pharmaceutical opioids like heroin [ 14 ].

The use of medical grade heroin (diacetylmorphine) in treating OUD has been applied in England since the 1920s, originally as hand-out prescriptions to take home [ 15 , 16 ]. Initiatives to incorporate it into more regular OAT started in Switzerland in 1994, with promising results [ 17 , 18 ]. Now, three decades later and after clinical trials from several European countries and Canada, the body of research suggests that heroin-assisted treatment (HAT) is beneficial for a sub-selection of patients in regard to health outcomes and reductions in use of illicit drugs and criminal behavior [ 19 , 20 , 21 ]. The results are less clear for longer-term outcomes such as mortality [ 6 , 19 ].

Still, HAT remains politically controversial [ 22 ], and reduced illicit heroin use and criminal behavior may not be compelling arguments for its efficacy. Rather, as for any other medical treatment its impact may better be assessed by patients’ improvement in quality of life, everyday level of function, and mortality [ 23 ].

Although newer studies suggest that take-home doses are a feasible and safe alternative for patients deemed suitable [ 24 , 25 ], medical heroin is typically administered under rigorous and comprehensive medical supervision due to the risk of serious adverse events and diversion [ 26 ]. Studies on cost-effectiveness suggest both excessive expenses and inconclusive results when compared with methadone treatment, possibly owing to methodological issues and poor consideration of the mechanisms involved [ 20 , 27 , 28 ].

Additionally, most of the research on the effectiveness of HAT originates from randomized clinical trials, which may have limitations concerning the understanding of long-term outcomes and the mechanisms behind them [ 23 ]. Thus, the main contribution of HAT may lie in the engagement of a high-risk population in the utilization of health and social services over time, as with the more conventional options of OAT [ 23 , 29 ]. A more comprehensive view of outcomes, beyond the mere quantity and frequency of drug use and criminal behaviour, can provide crucial information about the mechanisms responsible for treatment effectiveness and its possible impact on other clinically and socially relevant parameters [ 30 ].

The current Norwegian HAT study is presented in this context. The study is part of a clinical project by the Norwegian Directorate of Health, with the aim to evaluate the implementation of HAT into the national OAT services. It is based on a model from Denmark where the use of medical heroin was introduced in 2010, following the British “RIOTT” line of test trials from 2005 [ 31 ]. Denmark currently has five clinics as permanent parts of the national healthcare system, although a limited amount of research has been published from this model [ 32 ].

The Norwegian HAT-project

OAT programs based on the prescription of methadone and buprenorphine have, in various forms, been integrated into the Norwegian health and social services system since 1997 [ 33 ]. In the spring of 2020, the Norwegian Directorate of Health introduced a time-limited, clinically based project on the use of pharmaceutical heroin in the specialist healthcare services. Based on a day-center model, treatment is offered at two designated clinics in the largest Norwegian cities of Oslo and Bergen. The clinics consist of injection sites and medical personnel for the administration of pharmaceutical heroin twice a day, in combination with a take-home oral overnight dose of a slow-release opioid agonist such as methadone or morphine. Take-home doses of heroin are not granted, and patients must attend daily, all year round. Psychosocial services and support are also offered [ 34 ]. Patients are referred from other services for substance use disorder treatment, specialist healthcare services or general practitioners. Criteria for admission are ongoing OUD with at least one previous attempt at conventional OAT, age over 18 years, and general competency to consent. Exclusion criteria are severe mental disorders with reduced competency to consent, pregnancy, or repeated violent behavior.

The Norwegian Centre for Addiction Research (SERAF) at the University of Oslo was granted the research-based evaluation of the HAT project in 2021. The study will be conducted together with Section for Clinical Addiction Research (RusForsk) at Oslo University Hospital, Bergen Addiction Research Group (BAR) at Haukeland University Hospital in Bergen, Centre for Alcohol and Drug Research (CRF) at Aarhus University in Denmark, and the Norwegian user organization proLARNett.

The primary aim of the research project is to examine the effects of implementing HAT in Norway for individual patients and for the organization of the health services. A secondary aim is to compare these findings with the Danish HAT program.

Based on the Norwegian Directorate of Health’s specifications in the project proposal, the study will cover the following thematic areas:

1. Explore the attitudes, experiences and challenges of HAT as perceived by patients, their relatives, and clinical staff.

2. Describe changes in mental and physical health among patients receiving HAT, and how these changes are associated with outcomes such as quality of life, utilization of health and social services, social reintegration, criminal behavior and use of illicit drugs.

3. Report any serious adverse events and incidents at treatment initiation, during treatment, and after discharge from HAT.

4. Perform an economic evaluation of the program with associated clinical benefits and societal costs.

5. Evaluate the organizational processes involved in the implementation of HAT in Norwegian specialist healthcare services, and the possible impact of HAT on OUD patients’ utilization of conventional OAT.

6. Conduct additional research relevant to HAT that is not explicitly outlined in the proposal (this may require additional approvals from the Norwegian Regional Committee for Medical and Health Research Ethics).

The themes were operationalized into six work packages, with corresponding research questions and data sources (shown in Table  1 ).

Methods and design

The project is a multi-dimensional study, involving an array of methodological approaches and data sources. The main part is a prospective cohort study of all Norwegian HAT patients, compared with the cohorts of all Danish HAT patients and Norwegian patients in conventional OAT.

Study populations and size

The primary target group is all patients enrolled in the two HAT clinics in Oslo and Bergen during the period 2022–2026, with an expected total sample size of N = 250. Based on earlier findings, the ratio of men to women is expected to be 4:1, with an age range of 27–60 years and multiple substance use disorders. As the study is based on the total clinical population, representation will be determined by its demographics, with no exclusion of genders or ethnic minorities. Patients who have applied for but have not been accepted into HAT will be used for comparison, with an expected sample size of 100.

Comparative data from the Danish cohort will be drawn from the comprehensive dataset at Aarhus University from 2010 and onwards, with a sample size of approximately 500 [ 35 ]. Likewise, the comprehensive dataset at SERAF on the cohort of Norwegian patients in conventional OAT from 2003 has an approximate sample size of 8300.

Data sources

Data on the primary cohort of Norwegian HAT patients will be based on a prospective collection of both qualitative and quantitative variables from treatment inclusion and throughout the project period. For the cohorts of Danish patients, of Norwegian patients that have been referred to but not granted HAT, and of Norwegian patients in conventional OAT, data are mainly based on national registries.

In-depth and semi-structured interviews and observation

The qualitative part of the project includes individual in-depth and semi-structured interviews with patients and relatives on their views and experiences with HAT, and focus group interviews with staff concerning implementation, clinical and legal aspects of the project. Semi-structured interview protocols have been developed by the project group and user representatives. Interviews will include 25–35 patients and 10–20 family members, conducted by a team of researchers and user representatives at 1, 6 and 18 months after patients enter treatment, and with relatives after 4 and 12 months. Focus group interviews with staff will be conducted at 3, 9 and 18 months. Further, the clinic managers are interviewed at several time points, from the planning of the clinics and throughout the duration of the project.

For insights into clinic aspects not identified through interviews, researchers will conduct participant observation in the clinics over several periods of 1–2 weeks throughout the study.

Questionnaires

The quantitative part of the project will use questionnaires similar to those of preceding projects involving patients in conventional OAT. These will evaluate changes in physical and mental health, personal economy, utilization of social services, criminal behavior and illegal drug use through repeated measures administered at inclusion; at 3, 6 and 12 months of treatment; and thereafter yearly (24, 36 and 48 months). Staff are asked to complete a separate questionnaire if a patient leaves treatment.

Clinical records

Information will also be obtained from the individual patient’s routine clinical records on variables such as main vital signs, nutritional status, cognitive function and mental health, medication, and comorbidities, as well as more HAT-specific variables such as adverse events, dosage, and administration routes of the pharmaceutical heroin.

Central register databases

Nordic national registers are an important and useful source for epidemiological and healthcare services research, including the study of substance use disorders [ 36 , 37 , 38 , 39 , 40 , 41 , 42 , 43 ]. The project will utilize databases from national registries in both Norway and Denmark to describe the cohorts and to monitor the changes and outcomes in a wider context. Currently, one study has explored the use of the Short Form (SF-36) Health Survey in patients enrolled in the Danish HAT database, finding support for the structural and external validity for its use in HAT [ 44 ].

Table  2 gives an overview of the relevant Norwegian and Danish register databases along with their relevant variables.

Additional studies

Currently, the only planned sub-study concerns the pharmacokinetics of heroin and its metabolites, and its subjective effects on patients. Despite its widespread use, the pharmacology of heroin remains poorly understood [ 45 , 46 , 47 ]. A subsample of patients will therefore be invited to participate in this observational study, with post-administration blood samples collected at different time points and analyzed for the concentration of heroin and its metabolites, together with scales of subjective experience. The study has been granted separate approvals from the Norwegian Regional Committee for Medical and Health Research Ethics.
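
To make the sub-study’s typical outcome measures concrete, the sketch below computes standard noncompartmental summaries (Cmax, Tmax, and area under the concentration–time curve by the trapezoidal rule) for a single hypothetical post-administration series. All sampling times and concentration values are invented for illustration and are not project data.

```python
# Minimal noncompartmental sketch: summarize a post-administration
# concentration-time series as Cmax, Tmax and AUC (trapezoidal rule).
# All times and concentrations below are invented, not project data.
import numpy as np

t = np.array([0.0, 2, 5, 10, 20, 40, 60])      # minutes after administration
c = np.array([0.0, 310, 220, 130, 60, 18, 5])  # ng/mL, e.g. a heroin metabolite

cmax = c.max()                                   # peak concentration
tmax = t[c.argmax()]                             # time of the peak
auc = ((c[1:] + c[:-1]) / 2 * np.diff(t)).sum()  # trapezoidal area under curve

print(f"Cmax = {cmax:.0f} ng/mL at Tmax = {tmax:.0f} min; "
      f"AUC(0-60) = {auc:.0f} ng*min/mL")
```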

Analysis strategy

Exploration and analysis of data will be both by qualitative and quantitative strategies, for individual patients and at the organizational level.

Qualitative

Treatment satisfaction of patients is particularly significant to the project and often depends on contextual factors such as staff, management, and the clinical environment [ 48 , 49 ]. Qualitative analyses are widely considered valuable for the description of phenomena and for hypothesis generation, taking into consideration the natural context in which people and organizations function [ 50 ]. Transcribed interviews will be coded following the principles of a flexible inductive thematic analysis and a multidimensional approach [ 51 ].

Quantitative

Given the large amount and comprehensive nature of the data, variables of interest will vary in levels of measurement and distribution, so parametric and non-parametric tests will be used accordingly.

Presentation of cohorts will include descriptive statistics using basic parameters such as means or medians, standard deviations and ratios, and bivariate analyses using ANOVA and chi-square tests. Various advanced methods such as survival analysis and logistic and linear regression modeling will be applied based on the type and distribution of dependent variables and covariates. To avoid ecological fallacy and nested dimensions, multi-level methods will be applied for analyses of patients in relation to the services’ organization [ 52 ]. Given the longitudinal design, and to address the repeated measurements and correlated data, linear mixed models (LMM) (random-intercept or random-slope models) will be used for person-specific effects, and marginal models such as Generalized Estimating Equations (GEE) for population effects.
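
As a minimal sketch of the two longitudinal estimators named above, the Python/statsmodels snippet below fits a random-intercept LMM (person-specific, conditional effects) and a GEE with an exchangeable working correlation (population-average, marginal effects). The variable names (`qol`, `patient_id`, `months`) and the simulated long-format follow-up data are invented placeholders, not the project’s actual measures.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
waves = [0, 3, 6, 12]                      # months, mirroring the follow-up design
n = 250                                    # expected size of the HAT cohort

# Long format: one row per patient per measurement wave.
df = pd.DataFrame({
    "patient_id": np.repeat(np.arange(n), len(waves)),
    "months": np.tile(waves, n),
})
baseline = rng.normal(0.0, 1.0, n)         # person-specific baseline level
df["qol"] = (50 + baseline[df["patient_id"].to_numpy()]
             + 0.4 * df["months"] + rng.normal(0.0, 2.0, len(df)))

# Person-specific (conditional) effects: random-intercept linear mixed model.
lmm = smf.mixedlm("qol ~ months", df, groups="patient_id").fit()
print(lmm.params)

# Population-average (marginal) effects: GEE, exchangeable working correlation.
gee = smf.gee("qol ~ months", "patient_id", df,
              cov_struct=sm.cov_struct.Exchangeable(),
              family=sm.families.Gaussian()).fit()
print(gee.params)
```

With a random-intercept LMM and an identity link, the two slope estimates should nearly coincide; the distinction matters more for non-linear models, which is why the protocol sensibly plans both.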

A theoretical sample size for statistical power will not be calculated because the study is based on the total clinical population available. For analyses of discrete and possibly repeated events, such as the number of criminal acts or medical prescriptions, statistical power will most likely be sufficient even with a restricted number of individuals. For analyses where the ratio of patients to variables may imply low statistical power, stratification of the study population and restrictions on the number of covariates in the multivariate models will be applied.

Economic evaluations

Health economics and methods of cost-effectiveness analysis can guide decision makers, but at the same time they intrinsically rely on sets of politically and administratively determined rules and contexts [ 53 ]. In general, the cost-effectiveness of a treatment is intended to reflect the difference between the resource’s opportunity costs (medical heroin) and those of the foregone or conventional alternative, to capture a broader set of values beyond the scope of mere financial costs [ 54 ].

Initially, for operating costs a three-step, top-down methodology used and refined by a former healthcare services project will be applied, where total costs are distributed on service units and units of treatment for individual patients [ 55 ].
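
As a rough illustration of that three-step, top-down logic (every name and figure below is an invented assumption, not the project’s actual cost model): total costs are first distributed across service units, then converted into a cost per unit of treatment, and finally attributed to individual patients according to their recorded service use.

```python
# Hypothetical three-step top-down costing sketch; all figures are invented.
total_costs = 50_000_000  # assumed annual clinic budget, NOK

# Step 1: distribute the total across service units (here by cost shares).
unit_shares = {"injection_site": 0.60, "psychosocial": 0.25, "admin": 0.15}
unit_costs = {unit: total_costs * share for unit, share in unit_shares.items()}

# Step 2: derive a cost per treatment contact from each unit's annual activity.
annual_contacts = {"injection_site": 120_000, "psychosocial": 8_000}
contact_cost = {u: unit_costs[u] / annual_contacts[u] for u in annual_contacts}

# Step 3: attribute costs to one patient from their recorded service use
# (administrative overhead spread equally over an assumed 125 patients).
patient_use = {"injection_site": 700, "psychosocial": 24}  # contacts per year
overhead_per_patient = unit_costs["admin"] / 125
patient_cost = (sum(contact_cost[u] * n for u, n in patient_use.items())
                + overhead_per_patient)
print(f"Estimated annual treatment cost per patient: {patient_cost:,.0f} NOK")
```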

For cost-effectiveness analyses of healthcare interventions, outcome is often measured in quality-adjusted life years (QALYs) for individual patients, in number of accidents or fatal incidents, or as societal costs associated with patients’ level of functioning and societal (criminal) behavior [ 56 ]. This will readily apply to the project and is in line with the national Norwegian recommendations for evaluation of new health interventions [ 57 , 58 ]. The relationships between HAT and various forms of criminal behavior (both property crime and illegal drug offences), labor market attachment, income and drug expenditures are also unclear and possible subjects for investigation during the project [ 20 ].
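
Concretely, such an analysis is often summarized as an incremental cost-effectiveness ratio (ICER): the extra cost of the intervention per QALY gained relative to the conventional alternative. A minimal sketch with purely illustrative numbers (none of these figures come from the project):

```python
# Incremental cost-effectiveness ratio; all numbers below are invented.
def icer(cost_new: float, cost_ref: float,
         qaly_new: float, qaly_ref: float) -> float:
    """ICER = (C_new - C_ref) / (E_new - E_ref), here cost per QALY gained."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

# e.g. HAT at an assumed 300,000 NOK/patient-year vs conventional OAT at
# 120,000, with assumed QALY weights of 0.75 vs 0.70 over one year:
ratio = icer(300_000, 120_000, 0.75, 0.70)
print(f"ICER: {ratio:,.0f} NOK per QALY gained")  # -> 3,600,000
```

The ratio is then compared against a willingness-to-pay threshold, which in Norway is set through the national priority-setting framework cited above.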

The data for all analyses will come from key account figures and relevant variables already obtained in the project.

The main strength of the study comes from its clinical and longitudinal approaches. The use of patient-interviews combined with clinical records, self-report data and register-based information will enhance the analyses and may uncover important associations between the individual patients, treatment, and the organizational level of healthcare services. The results are therefore expected to address aspects of HAT that may contribute to the development of clinical services and individually tailored treatments for OUD.

Study limitations are mainly related to the design’s limited ability to isolate the effects of HAT on the outcome variables. Although valuable associations have often been suggested by longitudinal ecological studies, this limited possibility of unbiased causal inference remains a major weakness of both epidemiological and cohort designs [ 59 ]. Consequently, analyses will be cautiously interpreted within the context of previous findings, as well as patient and staff experiences. The triangulation of different types of data sources and cohorts, with the use of multivariate analysis and modeling, might nevertheless provide more nuanced insights than currently exist.

Also, social desirability bias may be inherent in all self-reported outcomes [ 60 ]. This will apply to the study, as patients in the Norwegian cohort are possibly aware that the prospects of HAT may depend on the results of the study.

The sample of patients in the main cohort might also not be representative of individuals with OUD who do not seek the HAT option for reasons related to the study outcomes, such as social deprivation and isolation, behavioral misconduct, and incarceration [ 61 , 62 ]. Comparison with patients not granted access to the HAT-treatment may partly address this, although not to a full extent.

Lastly, the results will emerge in the context of a Nordic cultural and political system with healthcare reimbursements, insurance models and legal aspects that may limit their generalizability to other countries and societies. Given a cautious interpretation, the project may nonetheless be considered relevant to populations where OAT is used, and a wide range of medications are potentially provided.

Results from this project have the potential to identify new insights of value to patients, healthcare personnel, service administrators and policy makers as to whether an option for pharmaceutical heroin could be implemented as a conventional part of OAT services. We believe that the results will suggest future themes for research within the field of HAT with a potential for individually tailored treatment and care for individuals with OUD. This could affect considerations about drug treatment even beyond HAT-specific topics, where an expanded understanding of why some patients do not succeed with conventional OAT or specific OAT medications will strengthen the knowledge base for drug treatment in general.

Data availability

Data sharing is not applicable to this article as no datasets are currently completed or analyzed. The data that will support the findings of this study are available from national registries, individual health records and the project-specific database, but restrictions apply to their availability, as they are used under license for the current study. Data may be available from the authors upon reasonable request, dependent on permissions from the Norwegian Regional Committee for Medical and Health Research Ethics. All information on subjects will be stored in the University of Oslo's secure services for sensitive data (TSD). Files for analysis will not contain directly identifying information about patients. Data will be stored in a non-identifiable way for 15 years after the end of the project.

Abbreviations

OAT: Opioid agonist treatment

OUD: Opioid use disorder

HAT: Heroin-assisted treatment

GEE: Generalized Estimating Equations

LMM: Linear mixed models

QALY: Quality-adjusted life years

REK: Norwegian Regional Committee for Medical and Health Research Ethics

SERAF: Norwegian Centre for Addiction Research, Oslo

RusForsk: Section for Clinical Addiction Research, Oslo

BAR: Bergen Addiction Research Group, Bergen

CRF: Centre for Alcohol and Drug Research, Aarhus

proLARNett: Norwegian user organization

United Nations Office on Drugs and Crime (UNODC). World Drug Report 2023. 2023.

Degenhardt L, Grebely J, Stone J, Hickman M, Vickerman P, Marshall BDL, et al. Global patterns of opioid use and dependence: harms to populations, interventions, and future action. Lancet. 2019;394(10208):1560–79.

Babor TF, Caulkins J, Fischer B, Foxcroft D, Medina-Mora ME, Obot I, et al. Drug Policy and the Public Good: a summary of the second edition. Addiction. 2019;114(11):1941–50.

Dole VP, Nyswander M. A Medical treatment for Diacetylmorphine (Heroin) addiction. A clinical trial with Methadone Hydrochloride. JAMA. 1965;193:646–50.

Isbell H, Vogel VH. The addiction liability of methadon (amidone, dolophine, 10820) and its use in the treatment of the morphine abstinence syndrome. Am J Psychiatry. 1949;105(12):909–14.

Strang J, Volkow ND, Degenhardt L, Hickman M, Johnson K, Koob GF et al. Opioid use disorder. Nat Rev Dis Primers. 2020;6(1).

Chaillon A, Bharat C, Stone J, Jones N, Degenhardt L, Larney S, et al. Modeling the population-level impact of opioid agonist treatment on mortality among people accessing treatment between 2001 and 2020 in New South Wales, Australia. Addiction. 2022;117(5):1338–52.

Havnes I, Bukten A, Gossop M, Waal H, Stangeland P, Clausen T. Reductions in convictions for violent crime during opioid maintenance treatment: a longitudinal national cohort study. Drug Alcohol Depend. 2012;124(3):307–10.

Clausen T, Anchersen K, Waal H. Mortality prior to, during and after opioid maintenance treatment (OMT): a national prospective cross-registry study. Drug Alcohol Depend. 2008;94(1–3):151–7.

Sordo L, Barrio G, Bravo MJ, Indave BI, Degenhardt L, Wiessing L, et al. Mortality risk during and after opioid substitution treatment: systematic review and meta-analysis of cohort studies. BMJ. 2017;357:j1550.

Bukten A, Skurtveit S, Waal H, Clausen T. Factors associated with dropout among patients in opioid maintenance treatment (OMT) and predictors of re-entry. A national registry-based study. Addict Behav. 2014;39(10):1504–9.

Bukten A, Roislien J, Skurtveit S, Waal H, Gossop M, Clausen T. A day-by-day investigation of changes in criminal convictions before and after entering and leaving opioid maintenance treatment: a national cohort study. BMC Psychiatry. 2013;13:262.

Klimas J, Hamilton MA, Gorfinkel L, Adam A, Cullen W, Wood E. Retention in opioid agonist treatment: a rapid review and meta-analysis comparing observational studies and randomized controlled trials. Syst Rev. 2021;10(1):216.

Timko C, Schultz NR, Cucciare MA, Vittorio L, Garrison-Diehn C. Retention in medication-assisted treatment for opiate dependence: a systematic review. J Addict Dis. 2016;35(1):22–35.

Strang J, Sheridan J. Heroin prescribing in the British system of the mid 1990s: data from the 1995 national survey of community pharmacies in England and Wales. Drug Alcohol Rev. 1997;16(1):7–16.

Berridge V. Heroin prescription and history. N Engl J Med. 2009;361(8):820–1.

van den Brink W, Hendriks VM, Blanken P, Koeter MW, van Zwieten BJ, van Ree JM. Medical prescription of heroin to treatment resistant heroin addicts: two randomised controlled trials. BMJ. 2003;327(7410):310.

Rehm J, Gschwend P, Steffen T, Gutzwiller F, Dobler-Mikola A, Uchtenhagen A. Feasibility, safety, and efficacy of injectable heroin prescription for refractory opioid addicts: a follow-up study. Lancet. 2001;358(9291):1417–23.

Strang J, Groshkova T, Uchtenhagen A, van den Brink W, Haasen C, Schechter MT, et al. Heroin on trial: systematic review and meta-analysis of randomised trials of diamorphine-prescribing as treatment for refractory heroin addictiondagger. Br J Psychiatry. 2015;207(1):5–14.

Smart R, Reuter P. Does heroin-assisted treatment reduce crime? A review of randomized-controlled trials. Addiction. 2022;117(3):518–31.

Oviedo-Joekes E, Brissette S, Marsh DC, Lauzon P, Guh D, Anis A, et al. Diacetylmorphine versus methadone for the treatment of opioid addiction. N Engl J Med. 2009;361(8):777–86.

Farrell M, Hall W. Heroin-assisted treatment: has a controversial treatment come of age? Br J Psychiatry. 2015;207(1):3–4.

Bell J, van der Waal R, Strang J. Supervised Injectable Heroin: a clinical perspective. Can J Psychiat. 2017;62(7):451–6.

Meyer M, Strasser J, Kock P, Walter M, Vogel M, Dursteler KM. Experiences with take-home dosing in heroin-assisted treatment in Switzerland during the COVID-19 pandemic-Is an update of legal restrictions warranted? Int J Drug Policy. 2022;101:103548.

Oviedo-Joekes E, Dobischok S, Carvajal J, MacDonald S, McDermid C, Klakowicz P, et al. Clients’ experiences on North America’s first take-home injectable opioid agonist treatment (iOAT) program: a qualitative study. BMC Health Serv Res. 2023;23(1):553.

Ferri M, Davoli M, Perucci CA. Heroin maintenance for chronic heroin-dependent individuals. Cochrane Database Syst Rev. 2011;2011(12):CD003410.

Dijkgraaf MG, van der Zanden BP, de Borgie CA, Blanken P, van Ree JM, van den Brink W. Cost utility analysis of co-prescribed heroin compared with methadone maintenance treatment in heroin addicts in two randomised trials. BMJ. 2005;330(7503):1297.

Byford S, Barrett B, Metrebian N, Groshkova T, Cary M, Charles V, et al. Cost-effectiveness of injectable opioid treatment v. oral methadone for chronic heroin addiction. Br J Psychiatry. 2013;203(5):341–9.

Poulter HL, Walker T, Ahmed D, Moore HJ, Riley F, Towl G, et al. More than just ‘free heroin’: caring whilst navigating constraint in the delivery of diamorphine assisted treatment. Int J Drug Policy. 2023;116:104025.

Tiffany ST, Friedman L, Greenfield SF, Hasin DS, Jackson R. Beyond drug use: a systematic consideration of other outcomes in evaluations of treatments for substance use disorders. Addiction. 2012;107(4):709–18.

Groshkova T, Metrebian N, Hallam C, Charles V, Martin A, Forzisi L, et al. Treatment expectations and satisfaction of treatment-refractory opioid-dependent patients in RIOTT, the Randomised Injectable Opiate Treatment Trial, the UK’s first supervised injectable maintenance clinics. Drug Alcohol Rev. 2013;32(6):566–73.

Sundhedsstyrelsen. Evaluering af ordningen med lægeordineret heroin til opioidafhængige patienter. Opgørelse over årene 2013–2020 [Evaluation of the scheme of physician-prescribed heroin for opioid-dependent patients: report for the years 2013–2020]. København; 2021.

Waal H. Merits and problems in high-threshold methadone maintenance treatment. Evaluation of medication-assisted rehabilitation in Norway 1998–2004. Eur Addict Res. 2007;13(2):66–73.

Ellefsen R, Wusthoff LEC, Arnevik EA. Patients’ satisfaction with heroin-assisted treatment: a qualitative study. Harm Reduct J. 2023;20(1):73.

Gabrhelik R, Handal M, Mravcik V, Nechanska B, Tjagvad C, Thylstrup B, et al. Opioid maintenance treatment in the Czech Republic, Norway and Denmark: a study protocol of a comparative registry linkage study. BMJ Open. 2021;11(5).

Arendt M, Munk-Jorgensen P, Sher L, Jensen SO. Mortality among individuals with cannabis, cocaine, amphetamine, MDMA, and opioid use disorders: a nationwide follow-up study of Danish substance users in treatment. Drug Alcohol Depend. 2011;114(2–3):134–9.

Bukten A, Lokdam NT, Skjaervo I, Ugelvik T, Skurtveit S, Gabrhelik R, et al. PriSUD-Nordic-diagnosing and treating substance use disorders in the prison population: protocol for a mixed methods study. JMIR Res Protoc. 2022;11(3).

Bukten A, Stavseth MR, Clausen T. From restrictive to more liberal: variations in mortality among patients in opioid maintenance treatment over a 12-year period. BMC Health Serv Res. 2019;19(1):553.

Mortensen LH, Rehnberg J, Dahl E, Diderichsen F, Elstad JI, Martikainen P, et al. Shape of the association between income and mortality: a cohort study of Denmark, Finland, Norway and Sweden in 1995 and 2003. BMJ Open. 2016;6(12):e010974.

Munk-Jorgensen P, Ostergaard SD. Register-based studies of mental disorders. Scand J Public Health. 2011;39(7 Suppl):170–4.

Lassemo E, Myklebust LH. Changes in patterns of coercion during a nine-year period in a Norwegian psychiatric service area. Int J Methods Psychiatr Res. 2021;30(4):e1889.

Myklebust LH, Sorgaard KW, Bjorbekkmo S, Eisemann MR, Olstad R. Time-trends in the utilization of decentralized mental health services in Norway - A natural experiment: the VELO-project. Int J Ment Health Syst. 2010;4:5.

Myklebust LH, Lassemo E. The role of local inpatient psychiatric units and general practitioner on continuity of care in Northern Norway: a case-register study. Int J Methods Psychiatr Res. 2021;30(2):e1866.

Kvamme T, Thylstrup B, Hesse M. Quality of life assessment in Danish Heroin Assisted Treatment Patients: Validity of the SF-36 Survey. submitted.

De Pirro S, Galati G, Pizzamiglio L, Badiani A. The affective and neural correlates of Heroin versus Cocaine Use in Addiction are influenced by environmental setting but in Opposite directions. J Neurosci. 2018;38(22):5182–95.

De Pirro S, Lush P, Parkinson J, Duka T, Critchley HD, Badiani A. Effect of alcohol on the sense of agency in healthy humans. Addict Biol. 2020;25(4):e12796.

Milella MS, D’Ottavio G, De Pirro S, Barra M, Caprioli D, Badiani A. Heroin and its metabolites: relevance to heroin use disorder. Translational Psychiatry. 2023;13(1):120.

Kelly SM, O’Grady KE, Brown BS, Mitchell SG, Schwartz RP. The role of patient satisfaction in methadone treatment. Am J Drug Alcohol Abuse. 2010;36(3):150–4.

Barbosa CD, Balp MM, Kulich K, Germain N, Rofail D. A literature review to explore the link between treatment satisfaction and adherence, compliance, and persistence. Patient Prefer Adherence. 2012;6:39–48.

Korstjens I, Moser A, Series. Practical guidance to qualitative research. Part 2: context, research questions and designs. Eur J Gen Pract. 2017;23(1):274–9.

Braun V, Clarke V. One size fits all? What counts as quality practice in (reflexive) thematic analysis? Qual Res Psychol. 2021;18(3):328–52.

Dyjack D. Ecological fallacies. J Environ Health. 2019;81(9):50–.

Gold MR, Siegel JE, Russell LB, Weinstein MC. Cost-effectiveness in health and medicine. New York: Oxford University Press; 1996.

Turner HC, Sandmann FG, Downey LE, Orangi S, Teerawattananon Y, Vassall A, et al. What are economic costs and when should they be used in health economic studies? Cost Eff Resour Alloc. 2023;21(1):31.

Lassemo E, Myklebust L, Sorgaard K. Patient cost and treatment unit cost comparison in the VELO-Study. Psychiat Prax. 2011;38.

Yates BT. Cost-inclusive evaluation: a banquet of approaches for including costs, benefits, and cost-effectiveness and cost-benefit analyses in your next evaluation. Eval Program Plann. 2009;32(1):52–4.

The Directorate of Health. Recommendations for the economic evaluation of new interventions in the Norwegian health sector. Oslo; 2012.

Meld. St. 34. Principles for priority setting in health care. Summary of a white paper on priority setting in the Norwegian health care sector. In: Norwegian Ministry of Health and Care Services, editor. Oslo; 2017.

Jekel JF, Katz DL, Elmore JG. Epidemiology, biostatistics and preventive medicine. Pennsylvania, USA: W.B. Saunders; 2001.

Latkin CA, Edwards C, Davey-Rothwell MA, Tobin KE. The relationship between social desirability bias and self-reports of health, substance use, and social network factors among urban substance users in Baltimore, Maryland. Addict Behav. 2017;73:133–6.

Hosseinbor M, Yassini Ardekani SM, Bakhshani S, Bakhshani S. Emotional and social loneliness in individuals with and without substance dependence disorder. Int J High Risk Behav Addict. 2014;3(3):e22688.

Chandler RK, Fletcher BW, Volkow ND. Treating drug abuse and addiction in the criminal justice system: improving public health and safety. JAMA. 2009;301(2):183–90.

Cho HL, Danis M, Grady C. Post-trial responsibilities beyond post-trial access. Lancet. 2018;391(10129):1478–9.

Cook K, Snyder J, Calvert J. Attitudes toward Post-trial Access to Medical interventions: a review of academic literature, legislation, and International guidelines. Dev World Bioeth. 2016;16(2):70–9.

Acknowledgements

We would like to thank the representatives from proLARNett for input on the design and aims of the study. Thanks also to Associate Professor Eva Lassemo at SINTEF-Helse, Norway, for input on the economic analysis.

Funding

The project is funded by the Norwegian Directorate of Health for a duration of 4.5 years, with an annual limit of 5 million NOK (assignment No. 20/00546). No remuneration is planned for the subjects’ participation in the project.

Open access funding provided by University of Oslo (incl. Oslo University Hospital).

Author information

Authors and Affiliations

Norwegian Centre for Addiction Research, Faculty of Medicine, University of Oslo, Kirkevegen 166, Building 45, NO-0407, Oslo, Norway

Lars Henrik Myklebust, Desiree Eide, Silvana De Pirro, Francesca Melis, Ann Oldervoll, Linda E.C. Wusthoff & Thomas Clausen

Bergen Addiction Research Group, Department of Addiction Medicine, Haukeland University Hospital, P.b 1400, NO-5021, Bergen, Norway

Omid Dadras & Lars T. Fadnes

Department of Global Public Health and Primary Care, University of Bergen, P.b 7804, NO-5020, Bergen, Norway

Lars T. Fadnes

Section for Clinical Addiction Research, Oslo University Hospital, P.b 4950 Nydalen, NO-0424, Oslo, Norway

Espen A. Arnevik, Rune Ellefsen & Linda E.C. Wusthoff

Centre for Alcohol and Drug Research, Aarhus University, Bartholins Allé 10, DK-8000, Aarhus, Denmark

Morten Hesse, Timo L. Kvamme & Birgitte Thylstrup

Department of Physiology and Pharmacology “V. Erspamer,” La Sapienza, University of Rome, P. Ie Aldo Moro 5, 00185, Rome, Italy

Silvana De Pirro

Contributions

LHM wrote and drafted the manuscript with critical input from all the authors. The study was planned and designed by TC, DE, LTF and LECW. The statistical section had essential input from FM and LHM; the section on economic evaluation had substantial input from OD, FM and LHM. The literature search was conducted by LHM, with input from TC and LECW. Authors OD, SDP, RE, MH, BT, TLK, EAA and AO read the manuscript and made substantial contributions to data acquisition and the corresponding background material.

Corresponding author

Correspondence to Lars Henrik Myklebust .

Ethics declarations

Ethics approval and consent to participate

The study was approved by the Norwegian Regional Committee for Medical and Health Research Ethics (REK 195733). Informed consent was obtained from all participants. The revision of the Helsinki Declaration concerning possible continued post-trial provision of clinical care and treatment [ 63 , 64 ] does not apply, as the project solely observes outcomes from treatment that is already provided and does not initiate any research interventions. No specific insurance for subjects has been taken out for the study. In case of injury or complications despite all precautions, patients have the right to apply for compensation through the Norwegian System of Patient Injury Compensation (NPE).

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article

Myklebust, L.H., Eide, D., Arnevik, E.A. et al. Evaluation of heroin-assisted treatment in Norway: protocol for a mixed methods study. BMC Health Serv Res 24 , 398 (2024). https://doi.org/10.1186/s12913-024-10767-w

Download citation

Received : 24 November 2023

Accepted : 21 February 2024

Published : 29 March 2024

DOI : https://doi.org/10.1186/s12913-024-10767-w


Keywords

  • Opioid maintenance treatment
  • Health services
  • Mixed methods
  • Longitudinal

