
What Is a Focus Group? | Step-by-Step Guide & Examples

Published on December 10, 2021 by Tegan George. Revised on June 22, 2023.

A focus group is a research method that brings together a small group of people to answer questions in a moderated setting. The group is chosen based on predefined demographic traits, and the questions are designed to shed light on a topic of interest.


Table of contents

  • What is a focus group?
  • Step 1: Choose your topic of interest
  • Step 2: Define your research scope and hypotheses
  • Step 3: Determine your focus group questions
  • Step 4: Select a moderator or co-moderator
  • Step 5: Recruit your participants
  • Step 6: Set up your focus group
  • Step 7: Host your focus group
  • Step 8: Analyze your data and report your results
  • Advantages and disadvantages of focus groups
  • Frequently asked questions about focus groups

Focus groups are a type of qualitative research . Observations of the group’s dynamic, their answers to focus group questions, and even their body language can guide future research on consumer decisions, products and services, or controversial topics.

Focus groups are often used in marketing, library science, social science, and user research disciplines. They can provide more nuanced and natural feedback than individual interviews and are easier to organize than experiments or large-scale surveys .

Prevent plagiarism. Run a free check.

Focus groups are primarily considered a confirmatory research technique. In other words, their discussion-heavy setting is most useful for confirming or refuting preexisting beliefs. For this reason, they are great for conducting explanatory research, where you explore why something occurs when limited information is available.

A focus group may be a good choice for you if:

  • You’re interested in real-time, unfiltered responses on a given topic or in the dynamics of a discussion between participants
  • Your questions are rooted in feelings or perceptions , and cannot easily be answered with “yes” or “no”
  • You’re confident that a relatively small number of responses will answer your question
  • You’re seeking directional information that will help you uncover new questions or future research ideas

A focus group is one of four types of interviews. The other three are:

  • Structured interviews: The questions are predetermined in both topic and order.
  • Semi-structured interviews: A few questions are predetermined, but other questions aren’t planned.
  • Unstructured interviews: None of the questions are predetermined.

Make sure to choose the type of interview that best suits your research.

Topics favorable to focus groups

As a rule of thumb, research topics related to thoughts, beliefs, and feelings work well in focus groups. If you are seeking direction, explanation, or in-depth dialogue, a focus group could be a good fit.

However, if your questions are dichotomous or if you need to reach a large audience quickly, a survey may be a better option. If your question hinges upon behavior but you are worried about influencing responses, consider an observational study .

  • If you want to determine whether the student body would regularly consume vegan food, a survey would be a great way to gauge student preferences.

However, food is much more than consumption and nourishment; it can also have emotional, cultural, and other implications for individuals.

  • If you’re interested in something less concrete, such as students’ perceptions of vegan food or the interplay between their choices at the dining hall and their feelings of homesickness or loneliness, perhaps a focus group would be best.

Once you have determined that a focus group is the right choice for your topic, you can start thinking about what you expect the group discussion to yield.

Perhaps literature already exists on your subject or a sufficiently similar topic that you can use as a starting point. If the topic isn’t well studied, use your instincts to determine what you think is most worthy of study.

Setting your scope will help you formulate intriguing hypotheses , set clear questions, and recruit the right participants.

  • Are you interested in a particular sector of the population, such as vegans or non-vegans?
  • Are you interested in including vegetarians in your analysis?
  • Perhaps not all students eat at the dining hall. Will your study exclude those who don’t?
  • Are you only interested in students who have strong opinions on the subject?

A benefit of focus groups is that your hypotheses can be open-ended. You can be open to a wide variety of opinions, which can lead to unexpected conclusions.

The questions that you ask your focus group are crucially important to your analysis. Take your time formulating them, paying special attention to phrasing. Be careful to avoid leading questions , which can affect your responses.

Overall, your focus group questions should be:

  • Open-ended and flexible
  • Impossible to answer with “yes” or “no” (questions that start with “why” or “how” are often best)
  • Unambiguous, getting straight to the point while still stimulating discussion
  • Unbiased and neutral

If you are discussing a controversial topic, be careful that your questions do not cause social desirability bias. Here, your respondents may lie about their true beliefs to mask any socially unacceptable or unpopular opinions. This and other demand characteristics can hurt your analysis and lead to several types of research bias in your results, particularly if your participants react differently once they know they’re being observed. These include self-selection bias, the Hawthorne effect, the Pygmalion effect, and recall bias.

  • Engagement questions make your participants feel comfortable and at ease: “What is your favorite food at the dining hall?”
  • Exploration questions drill down to the focus of your analysis: “What pros and cons of offering vegan options do you see?”
  • Exit questions pick up on anything you may have previously missed in your discussion: “Is there anything you’d like to mention about vegan options in the dining hall that we haven’t discussed?”

It is important to have more than one moderator in the room. If you would like to take the lead asking questions, select a co-moderator who can coordinate the technology, take notes, and observe the behavior of the participants.

If your hypotheses have behavioral aspects, consider asking someone else to be lead moderator so that you are free to take a more observational role.

Depending on your topic, there are a few types of moderator roles that you can choose from.

  • The most common is the dual-moderator style, introduced above.
  • Another common option is the dueling-moderator style. Here, you and your co-moderator take opposing sides on an issue to allow participants to see different perspectives and respond accordingly.

Depending on your research topic, there are a few sampling methods you can choose from to help you recruit and select participants.

  • Voluntary response sampling , such as posting a flyer on campus and finding participants based on responses
  • Convenience sampling of those who are most readily accessible to you, such as fellow students at your university
  • Stratified sampling of a particular age, race, ethnicity, gender identity, or other characteristic of interest to you
  • Judgment sampling of a specific set of participants that you already know you want to include

Beware of sampling bias and selection bias , which can occur when some members of the population are more likely to be included than others.

Number of participants

In most cases, one focus group will not be sufficient to answer your research question. It is likely that you will need to schedule three to four groups. A good rule of thumb is to stop when you’ve reached a saturation point (i.e., when you aren’t receiving new responses to your questions).

Most focus groups have 6–10 participants. It’s a good idea to over-recruit just in case someone doesn’t show up. As a rule of thumb, you shouldn’t have fewer than 6 or more than 12 participants, in order to get the most reliable results.

Lastly, it’s preferable for your participants not to know you or each other, as this can bias your results.

A focus group is not just a group of people coming together to discuss their opinions. While well-run focus groups have an enjoyable and relaxed atmosphere, they are backed up by rigorous methods to provide robust observations.

Confirm a time and date

Be sure to confirm a time and date with your participants well in advance. Focus groups usually meet for 45–90 minutes, but some can last longer. However, beware of the possibility of wandering attention spans. If you really think your session needs to last longer than 90 minutes, schedule a few breaks.

Confirm whether it will take place in person or online

You will also need to decide whether the group will meet in person or online. If you are hosting it in person, be sure to pick an appropriate location.

  • An uncomfortable or awkward location may affect the mood or level of participation of your group members.
  • Online sessions are convenient, as participants can join from home, but they can also lessen the connection between participants.

As a general rule, make sure you are in a noise-free environment that minimizes distractions and interruptions to your participants.

Consent and ethical considerations

It’s important to take into account ethical considerations and informed consent when conducting your research. Informed consent means that participants possess all the information they need to decide whether they want to participate in the research before it starts. This includes information about benefits, risks, funding, and institutional approval.

Participants should also sign a release form that states that they are comfortable with being audio- or video-recorded. While verbal consent may be sufficient, it is best to ask participants to sign a form.

A disadvantage of focus groups is that they are too small to provide true anonymity to participants. Make sure that your participants know this prior to participating.

There are a few things you can do to keep information private. You can secure confidentiality by removing all identifying information from your report, or by pseudonymizing the data. Data pseudonymization entails replacing any identifying information about participants with pseudonymous or false identifiers.
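
As a rough illustration, here is a minimal sketch of what pseudonymization might look like in code. The participant records, field names, and pseudonym format are hypothetical; the point is simply that identifying fields are stripped from the analysis data and the re-identification key is stored separately and securely.

```python
# Minimal pseudonymization sketch (hypothetical data and field names)

raw_records = [
    {"name": "Jordan Smith", "email": "jordan@example.edu",
     "response": "I would try vegan options once a week."},
    {"name": "Priya Patel", "email": "priya@example.edu",
     "response": "The dining hall reminds me of home cooking."},
]

def pseudonymize(records):
    """Replace identifying fields with stable pseudonyms (P01, P02, ...).

    Returns the cleaned records plus a name-to-pseudonym key, which should be
    stored securely and separately from the analysis data.
    """
    key = {}
    cleaned = []
    for i, record in enumerate(records, start=1):
        pseudonym = f"P{i:02d}"
        key[record["name"]] = pseudonym
        cleaned.append({"participant": pseudonym, "response": record["response"]})
    return cleaned, key

cleaned_records, reidentification_key = pseudonymize(raw_records)
print(cleaned_records)  # names and emails removed, responses retained
```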

Preparation prior to participation

If there is something you would like participants to read, study, or prepare beforehand, be sure to let them know well in advance. It’s also a good idea to call them the day before to ensure they will still be participating.

Consider conducting a tech check prior to the arrival of your participants, and note any environmental or external factors that could affect the mood of the group that day. Be sure that you are organized and ready, as a stressful atmosphere can be distracting and counterproductive.

Starting the focus group

Welcome individuals to the focus group by introducing the topic, yourself, and your co-moderator, and go over any ground rules or suggestions for a successful discussion. It’s important to make your participants feel at ease and forthcoming with their responses.

Consider starting out with an icebreaker, which will allow participants to relax and settle into the space a bit. Your icebreaker can be related to your study topic or not; it’s just an exercise to get participants talking.

Leading the discussion

Once you start asking your questions, try to keep response times equal between participants. Take note of the most and least talkative members of the group, as well as any participants with particularly strong or dominant personalities.

You can ask less talkative members questions directly to encourage them to participate, or ask participants questions by name to level the playing field. Feel free to ask participants to elaborate on their answers or to give an example.

As a moderator, strive to remain neutral. Refrain from reacting to responses, and be aware of your body language (e.g., nodding, raising eyebrows) and the possibility of observer bias. Active listening skills, such as parroting back answers or asking for clarification, are good methods to encourage participation and signal that you’re listening.

Many focus groups offer a monetary incentive for participants. Depending on your research budget, this is a nice way to show appreciation for their time and commitment. To keep everyone feeling fresh, consider offering snacks or drinks as well.

After concluding your focus group, you and your co-moderator should debrief, recording initial impressions of the discussion as well as any highlights, issues, or immediate conclusions you’ve drawn.

The next step is to transcribe and clean your data. Assign each participant a number or pseudonym for organizational purposes, transcribe the recordings, and conduct a content analysis to look for themes or categories of responses. The categories you choose can then form the basis for reporting your results.
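
To make the content analysis step more concrete, here is a minimal sketch of one way to start coding pseudonymized responses into themes. The responses, categories, and keywords are hypothetical, and a keyword lookup is only a crude first pass: in practice you would develop and refine your coding scheme from the data itself.

```python
from collections import Counter

# Hypothetical transcribed, pseudonymized responses
responses = {
    "P01": "I would eat vegan food if it were cheaper and tasted better.",
    "P02": "Vegan options make me feel healthier, but I miss home cooking.",
    "P03": "Cost matters most to me; health comes second.",
}

# A simple keyword-based coding scheme (assumed categories, for illustration only)
coding_scheme = {
    "cost": ["cheap", "cheaper", "cost", "price"],
    "health": ["health", "healthier", "healthy"],
    "emotion": ["miss", "home", "feel"],
}

def code_response(text, scheme):
    """Return the set of categories whose keywords appear in the response."""
    text = text.lower()
    return {category for category, keywords in scheme.items()
            if any(keyword in text for keyword in keywords)}

# Assign codes to each participant, then tally how often each theme occurs
codes = {pid: code_response(text, coding_scheme) for pid, text in responses.items()}
theme_counts = Counter(theme for assigned in codes.values() for theme in assigned)

print(codes)         # e.g., {'P01': {'cost'}, 'P02': {'health', 'emotion'}, ...}
print(theme_counts)  # frequency of each theme across the group
```

The theme counts can then feed directly into the categories you report, alongside illustrative quotes from the transcripts.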

Just like other research methods, focus groups come with advantages and disadvantages.

Advantages

  • They are fairly straightforward to organize, and results have strong face validity.
  • They are usually inexpensive, even if you compensate participants.
  • A focus group is much less time-consuming than a survey or experiment , and you get immediate results.
  • Focus group results are often more comprehensible and intuitive than raw data.

Disadvantages

  • It can be difficult to assemble a truly representative sample. Focus groups are generally not considered externally valid due to their small sample sizes.
  • Due to the small sample size, you cannot ensure the anonymity of respondents, which may influence their desire to speak freely.
  • Depth of analysis can be a concern, as it can be challenging to get honest opinions on controversial topics.
  • There is a lot of room for error in the data analysis and high potential for observer dependency in drawing conclusions. You have to be careful not to cherry-pick responses to fit a prior conclusion.


Frequently asked questions about focus groups

A focus group is a research method that brings together a small group of people to answer questions in a moderated setting. The group is chosen based on predefined demographic traits, and the questions are designed to shed light on a topic of interest. It is one of four types of interviews.

As a rule of thumb, questions related to thoughts, beliefs, and feelings work well in focus groups. Take your time formulating strong questions, paying special attention to phrasing. Be careful to avoid leading questions , which can bias your responses.

There are various approaches to qualitative data analysis , but they all share five steps in common:

  • Prepare and organize your data.
  • Review and explore your data.
  • Develop a data coding system.
  • Assign codes to the data.
  • Identify recurring themes.

The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis, thematic analysis, and discourse analysis.

Every dataset requires different techniques to clean dirty data, but you need to address these issues in a systematic way. You focus on finding and resolving data points that don’t agree or fit with the rest of your dataset.

These data might be missing values, outliers, duplicate values, incorrectly formatted, or irrelevant. You’ll start with screening and diagnosing your data. Then, you’ll often standardize and accept or remove data to make your dataset consistent and valid.
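
As a small illustration of that screening-and-standardizing workflow, here is a hedged sketch using pandas on tabular data. The column names, values, and cleaning rules are hypothetical; your own dataset will need its own rules.

```python
import pandas as pd

# Hypothetical raw data with typical problems: missing values, a duplicate row,
# inconsistent formatting, and an implausible outlier
raw = pd.DataFrame({
    "participant": ["P01", "P02", "P02", "P03", "P04"],
    "age": [21, None, None, 204, 19],
    "diet": ["Vegan", "vegan ", "vegan ", "OMNIVORE", "omnivore"],
})

# 1. Screen and diagnose: how many duplicates and missing values are there?
print(raw.duplicated().sum(), raw.isna().sum().to_dict())

# 2. Standardize and resolve the issues
cleaned = (
    raw.drop_duplicates()  # remove exact duplicate rows
       .assign(diet=lambda df: df["diet"].str.strip().str.lower())  # consistent formatting
)
cleaned = cleaned[cleaned["age"].between(16, 100)]  # drop implausible or missing ages
print(cleaned)
```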

The four most common types of interviews are:

  • Structured interviews: The questions are predetermined in both topic and order.
  • Semi-structured interviews: A few questions are predetermined, but other questions aren’t planned.
  • Unstructured interviews: None of the questions are predetermined.
  • Focus group interviews: The questions are presented to a group instead of one individual.

It’s impossible to completely avoid observer bias in studies where data collection is done or recorded manually, but you can take steps to reduce this type of bias in your research .

Scope of research is determined at the beginning of your research process , prior to the data collection stage. Sometimes called “scope of study,” your scope delineates what will and will not be covered in your project. It helps you focus your work and your time, ensuring that you’ll be able to achieve your goals and outcomes.

Defining a scope can be very useful in any research project, from a research proposal to a thesis or dissertation. A scope is needed for all types of research: quantitative, qualitative, and mixed methods.

To define your scope of research, consider the following:

  • Budget constraints or any specifics of grant funding
  • Your proposed timeline and duration
  • Specifics about your population of study, your proposed sample size , and the research methodology you’ll pursue
  • Any inclusion and exclusion criteria
  • Any anticipated control, extraneous, or confounding variables that could bias your research if not accounted for properly


George, T. (2023, June 22). What is a Focus Group | Step-by-Step Guide & Examples. Scribbr. Retrieved April 9, 2024, from https://www.scribbr.com/methodology/focus-group/



Data collection in qualitative research

Volume 21, Issue 3

  • David Barrett, Faculty of Health Sciences, University of Hull, Hull, UK
  • Alison Twycross (http://orcid.org/0000-0003-1130-5603), School of Health and Social Care, London South Bank University, London, UK
  • Correspondence to Dr David Barrett, Faculty of Health Sciences, University of Hull, Hull HU6 7RX, UK; D.I.Barrett{at}hull.ac.uk

https://doi.org/10.1136/eb-2018-102939


Qualitative research methods allow us to better understand the experiences of patients and carers; they allow us to explore how decisions are made and provide us with a detailed insight into how interventions may alter care. To develop such insights, qualitative research requires data which are holistic, rich and nuanced, allowing themes and findings to emerge through careful analysis. This article provides an overview of the core approaches to data collection in qualitative research, exploring their strengths, weaknesses and challenges.

Collecting data through interviews with participants is a characteristic of many qualitative studies. Interviews give the most direct and straightforward approach to gathering detailed and rich data regarding a particular phenomenon. The type of interview used to collect data can be tailored to the research question, the characteristics of participants and the preferred approach of the researcher. Interviews are most often carried out face-to-face, though the use of telephone interviews to overcome geographical barriers to participant recruitment is becoming more prevalent. 1

A common approach in qualitative research is the semistructured interview, where core elements of the phenomenon being studied are explicitly asked about by the interviewer. A well-designed semistructured interview should ensure data are captured in key areas while still allowing flexibility for participants to bring their own personality and perspective to the discussion. Finally, interviews can be much more rigidly structured to provide greater control for the researcher, essentially becoming questionnaires where responses are verbal rather than written.

Deciding where to place an interview design on this ‘structural spectrum’ will depend on the question to be answered and the skills of the researcher. A very structured approach is easy to administer and analyse but may not allow the participant to express themselves fully. At the other end of the spectrum, an open approach allows for freedom and flexibility, but requires the researcher to walk an investigative tightrope that maintains the focus of an interview without forcing participants into particular areas of discussion.

Example of an interview schedule 3

  • What do you think is the most effective way of assessing a child’s pain?
  • Have you come across any issues that make it difficult to assess a child’s pain?
  • What pain-relieving interventions do you find most useful and why?
  • When managing pain in children, what is your overall aim?
  • Whose responsibility is pain management?
  • What involvement do you think parents should have in their child’s pain management?
  • What involvement do children have in their pain management?
  • Is there anything that currently stops you managing pain as well as you would like?
  • What would help you manage pain better?

Interviews present several challenges to researchers. Most interviews are recorded and will need transcribing before analysing. This can be extremely time-consuming, with 1 hour of interview requiring 5–6 hours to transcribe. 4 The analysis itself is also time-consuming, requiring transcriptions to be pored over word-for-word and line-by-line. Interviews also present the problem of bias: the researcher needs to take care to avoid leading questions or providing non-verbal signals that might influence the responses of participants.

Focus groups

The focus group is a method of data collection in which a moderator/facilitator (usually a coresearcher) speaks with a group of 6–12 participants about issues related to the research question. As an approach, the focus group offers qualitative researchers an efficient method of gathering the views of many participants at one time. Also, the fact that many people are discussing the same issue together can result in an enhanced level of debate, with the moderator often able to step back and let the focus group enter into a free-flowing discussion. 5 This provides an opportunity to gather rich data from a specific population about a particular area of interest, such as barriers perceived by student nurses when trying to communicate with patients with cancer. 6

From a participant perspective, the focus group may provide a more relaxing environment than a one-to-one interview; they will not need to be involved with every part of the discussion and may feel more comfortable expressing views when they are shared by others in the group. Focus groups also allow participants to ‘bounce’ ideas off each other which sometimes results in different perspectives emerging from the discussion. However, focus groups are not without their difficulties. As with interviews, focus groups provide a vast amount of data to be transcribed and analysed, with discussions often lasting 1–2 hours. Moderators also need to be highly skilled to ensure that the discussion can flow while remaining focused and that all participants are encouraged to speak, while ensuring that no individuals dominate the discussion. 7

Observation

Participant and non-participant observation are powerful tools for collecting qualitative data, as they give nurse researchers an opportunity to capture a wide array of information—such as verbal and non-verbal communication, actions (eg, techniques of providing care) and environmental factors—within a care setting. Another advantage of observation is that the researcher gains a first-hand picture of what actually happens in clinical practice. 8 If the researcher is adopting a qualitative approach to observation they will normally record field notes . Field notes can take many forms, such as a chronological log of what is happening in the setting, a description of what has been observed, a record of conversations with participants or an expanded account of impressions from the fieldwork. 9 10

As with other qualitative data collection techniques, observation provides an enormous amount of data to be captured and analysed—one approach to helping with collection and analysis is to digitally record observations to allow for repeated viewing. 11 Observation also provides the researcher with some unique methodological and ethical challenges. Methodologically, the act of being observed may change the behaviour of the participant (often referred to as the ‘Hawthorne effect’), impacting on the value of findings. However, most researchers report a process of habituation taking place where, after a relatively short period of time, those being observed revert to their normal behaviour. Ethically, the researcher will need to consider when and how they should intervene if they view poor practice that could put patients at risk.

The three core approaches to data collection in qualitative research—interviews, focus groups and observation—provide researchers with rich and deep insights. All methods require skill on the part of the researcher, and all produce a large amount of raw data. However, with careful and systematic analysis 12 the data yielded with these methods will allow researchers to develop a detailed understanding of patient experiences and the work of nurses.

  • Twycross AM ,
  • Williams AM ,
  • Huang MC , et al
  • Onwuegbuzie AJ ,
  • Dickinson WB ,
  • Leech NL , et al
  • Twycross A ,
  • Emerson RM ,
  • Meriläinen M ,
  • Ala-Kokko T

Competing interests None declared.

Patient consent Not required.

Provenance and peer review Commissioned; internally peer reviewed.


Chapter 10. Introduction to Data Collection Techniques

Introduction.

Now that we have discussed various aspects of qualitative research, we can begin to collect data. This chapter serves as a bridge between the first half and second half of this textbook (and perhaps your course) by introducing techniques of data collection. You’ve already been introduced to some of this because qualitative research is often characterized by the form of data collection; for example, an ethnographic study is one that employs primarily observational data collection for the purpose of documenting and presenting a particular culture or ethnos. Thus, some of this chapter will operate as a review of material already covered, but we will be approaching it from the data-collection side rather than the tradition-of-inquiry side we explored in chapters 2 and 4.

Revisiting Approaches

There are four primary techniques of data collection used in qualitative research: interviews, focus groups, observations, and document review. [1] There are other available techniques, such as visual analysis (e.g., photo elicitation) and biography (e.g., autoethnography) that are sometimes used independently or supplementarily to one of the main forms. Not to confuse you unduly, but these various data collection techniques are employed differently by different qualitative research traditions so that sometimes the technique and the tradition become inextricably entwined. This is largely the case with observations and ethnography. The ethnographic tradition is fundamentally based on observational techniques. At the same time, traditions other than ethnography also employ observational techniques, so it is worthwhile thinking of “tradition” and “technique” separately (see figure 10.1).

Figure 10.1. Data Collection Techniques

Each of these data collection techniques will be the subject of its own chapter in the second half of this textbook. This chapter serves as an orienting overview and as the bridge between the conceptual/design portion of qualitative research and the actual practice of conducting qualitative research.

Overview of the Four Primary Approaches

Interviews are at the heart of qualitative research. Returning to epistemological foundations, it is during the interview that the researcher truly opens herself to hearing what others have to say, encouraging her interview subjects to reflect deeply on the meanings and values they hold. Interviews are used in almost every qualitative tradition but are particularly salient in phenomenological studies, studies seeking to understand the meaning of people’s lived experiences.

Focus groups can be seen as a type of interview, one in which a group of persons (ideally between five and twelve) is asked a series of questions focused on a particular topic or subject. They are sometimes used as the primary form of data collection, especially outside academic research. For example, businesses often employ focus groups to determine if a particular product is likely to sell. Among qualitative researchers, it is often used in conjunction with any other primary data collection technique as a form of “triangulation,” or a way of increasing the reliability of the study by getting at the object of study from multiple directions. [2] Some traditions, such as feminist approaches, also see the focus group as an important “consciousness-raising” tool.

If interviews are at the heart of qualitative research, observations are its lifeblood. Researchers who are more interested in the practices and behaviors of people than what they think or who are trying to understand the parameters of an organizational culture rely on observations as their primary form of data collection. The notes they make “in the field” (either during observations or afterward) form the “data” that will be analyzed. Ethnographers, those seeking to describe a particular ethnos, or culture, believe that observations are more reliable guides to that culture than what people have to say about it. Observations are thus the primary form of data collection for ethnographers, albeit often supplemented with in-depth interviews.

Some would say that these three—interviews, focus groups, and observations—are really the foundational techniques of data collection. They are far and away the three techniques most frequently used separately, in conjunction with one another, and even sometimes in mixed methods qualitative/quantitative studies. Document review, either as a form of content analysis or separately, however, is an important addition to the qualitative researcher’s toolkit and should not be overlooked (figure 10.1). Although it is rare for a qualitative researcher to make document review their primary or sole form of data collection, including documents in the research design can help expand the reach and the reliability of a study. Document review can take many forms, from historical and archival research, in which the researcher pieces together a narrative of the past by finding and analyzing a variety of “documents” and records (including photographs and physical artifacts), to analyses of contemporary media content, as in the case of compiling and coding blog posts or other online commentaries, and content analysis that identifies and describes communicative aspects of media or documents.


In addition to these four major techniques, there are a host of emerging and incidental data collection techniques, from photo elicitation or photo voice, in which respondents are asked to comment upon a photograph or image (particularly useful as a supplement to interviews when the respondents are hesitant or unable to answer direct questions), to autoethnographies, in which the researcher uses his own position and life to increase our understanding about a phenomenon and its historical and social context.

Taken together, these techniques provide a wide range of practices and tools with which to discover the world. They are particularly suited to addressing the questions that qualitative researchers ask—questions about how things happen and why people act the way they do, given particular social contexts and shared meanings about the world (chapter 4).

Triangulation and Mixed Methods

Because the researcher plays such a large and nonneutral role in qualitative research, one that requires constant reflectivity and awareness (chapter 6), there is a constant need to reassure her audience that the results she finds are reliable. Quantitative researchers can point to any number of measures of statistical significance to reassure their audiences, but qualitative researchers do not have math to hide behind. And she will also want to reassure herself that what she is hearing in her interviews or observing in the field is a true reflection of what is going on (or as “true” as possible, given the problem that the world is as large and varied as the elephant; see chapter 3). For those reasons, it is common for researchers to employ more than one data collection technique or to include multiple and comparative populations, settings, and samples in the research design (chapter 2). A single set of interviews or initial comparison of focus groups might be conceived as a “pilot study” from which to launch the actual study. Undergraduate students working on a research project might be advised to think about their projects in this way as well. You are simply not going to have enough time or resources as an undergraduate to construct and complete a successful qualitative research project, but you may be able to tackle a pilot study. Graduate students also need to think about the amount of time and resources they have for completing a full study. Masters-level students, or students who have one year or less in which to complete a program, should probably consider their study as an initial exploratory pilot. PhD candidates might have the time and resources to devote to the type of triangulated, multifaceted research design called for by the research question.

We call the use of multiple qualitative methods of data collection and the inclusion of multiple and comparative populations and settings “triangulation.” Using different data collection methods allows us to check the consistency of our findings. For example, a study of the vaccine hesitant might include a set of interviews with vaccine-hesitant people and a focus group of the same and a content analysis of online comments about a vaccine mandate. By employing all three methods, we can be more confident of our interpretations from the interviews alone (especially if we are hearing the same thing throughout; if we are not, then this is a good sign that we need to push a little further to find out what is really going on). [3] Methodological triangulation is an important tool for increasing the reliability of our findings and the overall success of our research.

Methodological triangulation should not be confused with mixed methods techniques, which refer instead to the combining of qualitative and quantitative research methods. Mixed methods studies can increase reliability, but that is not their primary purpose. Mixed methods address multiple research questions, both the “how many” and “why” kind, or the causal and explanatory kind. Mixed methods will be discussed in more detail in chapter 15.

Let us return to the three examples of qualitative research described in chapter 1: Cory Abramson’s study of aging ( The End Game) , Jennifer Pierce’s study of lawyers and discrimination ( Racing for Innocence ), and my own study of liberal arts college students ( Amplified Advantage ). Each of these studies uses triangulation.

Abramson’s book is primarily based on three years of observations in four distinct neighborhoods. He chose the neighborhoods in such a way to maximize his ability to make comparisons: two were primarily middle class and two were primarily poor; further, within each set, one was predominantly White, while the other was either racially diverse or primarily African American. In each neighborhood, he was present in senior centers, doctors’ offices, public transportation, and other public spots where the elderly congregated. [4] The observations are the core of the book, and they are richly written and described in very moving passages. But it wasn’t enough for him to watch the seniors. He also engaged with them in casual conversation. That, too, is part of fieldwork. He sometimes even helped them make it to the doctor’s office or get around town. Going beyond these interactions, he also interviewed sixty seniors, an equal amount from each of the four neighborhoods. It was in the interviews that he could ask more detailed questions about their lives, what they thought about aging, what it meant to them to be considered old, and what their hopes and frustrations were. He could see that those living in the poor neighborhoods had a more difficult time accessing care and resources than those living in the more affluent neighborhoods, but he couldn’t know how the seniors understood these difficulties without interviewing them. Both forms of data collection supported each other and helped make the study richer and more insightful. Interviews alone would have failed to demonstrate the very real differences he observed (and that some seniors would not even have known about). This is the value of methodological triangulation.

Pierce’s book relies on two separate forms of data collection—interviews with lawyers at a firm that has experienced a history of racial discrimination and content analyses of news stories and popular films that screened during the same years of the alleged racial discrimination. I’ve used this book when teaching methods and have often found students struggle with understanding why these two forms of data collection were used. I think this is because we don’t teach students to appreciate or recognize “popular films” as a legitimate form of data. But what Pierce does is interesting and insightful in the best tradition of qualitative research. Here is a description of the content analyses from a review of her book:

In the chapter on the news media, Professor Pierce uses content analysis to argue that the media not only helped shape the meaning of affirmative action, but also helped create white males as a class of victims. The overall narrative that emerged from these media accounts was one of white male innocence and victimization. She also maintains that this narrative was used to support “neoconservative and neoliberal political agendas” (p. 21). The focus of these articles tended to be that affirmative action hurt white working-class and middle-class men particularly during the recession in the 1980s (despite statistical evidence that people of color were hurt far more than white males by the recession). In these stories fairness and innocence were seen in purely individual terms. Although there were stories that supported affirmative action and developed a broader understanding of fairness, the total number of stories slanted against affirmative action from 1990 to 1999. During that time period negative stories always outnumbered those supporting the policy, usually by a ratio of 3:1 or 3:2. Headlines, the presentation of polling data, and an emphasis in stories on racial division, Pierce argues, reinforced the story of white male victimization. Interestingly, the news media did very few stories on gender and affirmative action. The chapter on the film industry from 1989 to 1999 reinforces Pierce’s argument and adds another layer to her interpretation of affirmative action during this time period. She sampled almost 60 Hollywood films with receipts ranging from four million to 184 million dollars. In this chapter she argues that the dominant theme of these films was racial progress and the redemption of white Americans from past racism. These movies usually portrayed white, elite, and male experiences. People of color were background figures who supported the protagonist and “anointed” him as a savior (p. 45). Over the course of the film the protagonists move from “innocence to consciousness” concerning racism. The antagonists in these films most often were racist working-class white men. A Time to Kill , Mississippi Burning , Amistad , Ghosts of Mississippi , The Long Walk Home , To Kill a Mockingbird , and Dances with Wolves receive particular analysis in this chapter, and her examination of them leads Pierce to conclude that they infused a myth of racial progress into America’s cultural memory. White experiences of race are the focus and contemporary forms of racism are underplayed or omitted. Further, these films stereotype both working-class and elite white males, and underscore the neoliberal emphasis on individualism. ( Hrezo 2012 )

With that context in place, Pierce then turned to interviews with attorneys. She finds that White male attorneys often misremembered facts about the period in which the law firm was accused of racial discrimination and that they often portrayed their firms as having made substantial racial progress. This was in contrast to many of the lawyers of color and female lawyers who remembered the history differently and who saw continuing examples of racial (and gender) discrimination at the law firm. In most of the interviews, people talked about individuals, not structure (and these are attorneys, who really should know better!). By including both content analyses and interviews in her study, Pierce is better able to situate the attorney narratives and explain the larger context for the shared meanings of individual innocence and racial progress. Had this been a study only of films during this period, we would not know how actual people who lived during this period understood the decisions they made; had we had only the interviews, we would have missed the historical context and seen a lot of these interviewees as, well, not very nice people at all. Together, we have a study that is original, inventive, and insightful.

My own study of how class background affects the experiences and outcomes of students at small liberal arts colleges relies on mixed methods and triangulation. At the core of the book is an original survey of college students across the US. From analyses of this survey, I can present findings on “how many” questions and descriptive statistics comparing students of different social class backgrounds. For example, I know and can demonstrate that working-class college students are less likely to go to graduate school after college than upper-class college students are. I can even give you some estimates of the class gap. But what I can’t tell you from the survey is exactly why this is so or how it came to be so . For that, I employ interviews, focus groups, document reviews, and observations. Basically, I threw the kitchen sink at the “problem” of class reproduction and higher education (i.e., Does college reduce class inequalities or make them worse?). A review of historical documents provides a picture of the place of the small liberal arts college in the broader social and historical context. Who had access to these colleges and for what purpose have always been in contest, with some groups attempting to exclude others from opportunities for advancement. What it means to choose a small liberal arts college in the early twenty-first century is thus different for those whose parents are college professors, for those whose parents have a great deal of money, and for those who are the first in their family to attend college. I was able to get at these different understandings through interviews and focus groups and to further delineate the culture of these colleges by careful observation (and my own participation in them, as both former student and current professor). Putting together individual meanings, student dispositions, organizational culture, and historical context allowed me to present a story of how exactly colleges can both help advance first-generation, low-income, working-class college students and simultaneously amplify the preexisting advantages of their peers. Mixed methods addressed multiple research questions, while triangulation allowed for this deeper, more complex story to emerge.

In the next few chapters, we will explore each of the primary data collection techniques in much more detail. As we do so, think about how these techniques may be productively joined for more reliable and deeper studies of the social world.

Advanced Reading: Triangulation

Denzin ( 1978 ) identified four basic types of triangulation: data, investigator, theory, and methodological. Properly speaking, if we use the Denzin typology, the use of multiple methods of data collection and analysis to strengthen one’s study is really a form of methodological triangulation. It may be helpful to understand how this differs from the other types.

Data triangulation occurs when the researcher uses a variety of sources in a single study. Perhaps they are interviewing multiple samples of college students. Obviously, this overlaps with sample selection (see chapter 5). It is helpful for the researcher to understand that these multiple data sources add strength and reliability to the study. After all, it is not just “these students here” but also “those students over there” that are experiencing this phenomenon in a particular way.

Investigator triangulation occurs when different researchers or evaluators are part of the research team. Intercoding reliability is a form of investigator triangulation (or at least a way of leveraging the power of multiple researchers to raise the reliability of the study).
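
Intercoder reliability is often quantified with simple percent agreement or with Cohen's kappa, which corrects for the agreement expected by chance. A minimal sketch, using hypothetical codes assigned independently by two researchers to the same ten transcript excerpts:

```python
# Hypothetical codes from two coders for the same ten transcript excerpts
coder_a = ["cost", "health", "cost", "emotion", "cost", "health", "emotion", "cost", "health", "cost"]
coder_b = ["cost", "health", "cost", "cost",    "cost", "health", "emotion", "cost", "emotion", "cost"]

def cohens_kappa(a, b):
    """Agreement between two coders, corrected for chance agreement."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    categories = set(a) | set(b)
    expected = sum((a.count(c) / n) * (b.count(c) / n) for c in categories)
    return (observed - expected) / (1 - expected)

agreement = sum(x == y for x, y in zip(coder_a, coder_b)) / len(coder_a)
print(f"Percent agreement: {agreement:.2f}")                    # 0.80 for this toy data
print(f"Cohen's kappa: {cohens_kappa(coder_a, coder_b):.2f}")   # 0.67 for this toy data
```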

Theory triangulation is the use of multiple perspectives to interpret a single set of data, as in the case of competing theoretical paradigms (e.g., a human capital approach vs. a Bourdieusian multiple capital approach).

Methodological triangulation , as explained in this chapter, is the use of multiple methods to study a single phenomenon, issue, or problem.

Further Readings

Carter, Nancy, Denise Bryant-Lukosius, Alba DiCenso, Jennifer Blythe, Alan J. Neville. 2014. “The Use of Triangulation in Qualitative Research.” Oncology Nursing Forum 41(5):545–547. Discusses the four types of triangulation identified by Denzin with an example of the use of focus groups and in-depth individual interviews.

Mathison, Sandra. 1988. “Why Triangulate?” Educational Researcher 17(2):13–17. Presents three particular ways of assessing validity through the use of triangulated data collection: convergence, inconsistency, and contradiction.

Tracy, Sarah J. 2010. “Qualitative Quality: Eight ‘Big-Tent’ Criteria for Excellent Qualitative Research.” Qualitative Inquiry 16(10):837–851. Focuses on triangulation as a criterion for conducting valid qualitative research.

  • Marshall and Rossman ( 2016 ) state this slightly differently. They list four primary methods for gathering information: (1) participating in the setting, (2) observing directly, (3) interviewing in depth, and (4) analyzing documents and material culture (141). An astute reader will note that I have collapsed participation into observation and that I have distinguished focus groups from interviews. I suspect that this distinction marks me as more of an interview-based researcher, while Marshall and Rossman prioritize ethnographic approaches. The main point of this footnote is to show you, the reader, that there is no single agreed-upon number of approaches to collecting qualitative data. ↵
  • See “ Advanced Reading: Triangulation ” at end of this chapter. ↵
  • We can also think about triangulating the sources, as when we include comparison groups in our sample (e.g., if we include those receiving vaccines, we might find out a bit more about where the real differences lie between them and the vaccine hesitant); triangulating the analysts (building a research team so that your interpretations can be checked against those of others on the team); and even triangulating the theoretical perspective (as when we “try on,” say, different conceptualizations of social capital in our analyses). ↵

Introduction to Qualitative Research Methods Copyright © 2023 by Allison Hurst is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License , except where otherwise noted.


Focus Groups – Steps, Examples and Guide


Focus Groups in Qualitative Research

Focus Group

Definition:

A focus group is a qualitative research method used to gather in-depth insights and opinions from a group of individuals about a particular product, service, concept, or idea.

The focus group typically consists of 6-10 participants who are selected based on shared characteristics such as demographics, interests, or experiences. The discussion is moderated by a trained facilitator who asks open-ended questions to encourage participants to share their thoughts, feelings, and attitudes towards the topic.

Focus groups are an effective way to gather detailed information about consumer behavior, attitudes, and perceptions, and can provide valuable insights to inform decision-making in a range of fields including marketing, product development, and public policy.

Types of Focus Group

The following are some types or methods of Focus Groups:

Traditional Focus Group

This is the most common type of focus group, where a small group of people is brought together to discuss a particular topic. The discussion is typically led by a skilled facilitator who asks open-ended questions to encourage participants to share their thoughts and opinions.

Mini Focus Group

A mini-focus group involves a smaller group of participants, typically 3 to 5 people. This type of focus group is useful when the topic being discussed is particularly sensitive or when the participants are difficult to recruit.

Dual Moderator Focus Group

In a dual-moderator focus group, two facilitators are used to manage the discussion. This can help to ensure that the discussion stays on track and that all participants have an opportunity to share their opinions.

Teleconference or Online Focus Group

Teleconferences or online focus groups are conducted using video conferencing technology or online discussion forums. This allows participants to join the discussion from anywhere in the world, making it easier to recruit participants and reducing the cost of conducting the focus group.

Client-led Focus Group

In a client-led focus group, the client who is commissioning the research takes an active role in the discussion. This type of focus group is useful when the client has specific questions they want to ask or when they want to gain a deeper understanding of their customers.


How To Conduct a Focus Group

To conduct a focus group, follow these general steps:

Define the Research Question

Identify the key research question or objective that you want to explore through the focus group. Develop a discussion guide that outlines the topics and questions you want to cover during the session.

Recruit Participants

Identify the target audience for the focus group and recruit participants who meet the eligibility criteria. You can use various recruitment methods such as social media, online panels, or referrals from existing customers.

Select a Venue

Choose a location that is convenient for the participants and has the necessary facilities such as audio-visual equipment, seating, and refreshments.

Conduct the Session

During the focus group session, introduce the topic, and review the objectives of the research. Encourage participants to share their thoughts and opinions by asking open-ended questions and probing deeper into their responses. Ensure that the discussion remains on topic and that all participants have an opportunity to contribute.

Record the Session

Use audio or video recording equipment to capture the discussion. Note-taking is also essential to ensure that you capture all key points and insights.

Analyze the Data

Once the focus group is complete, transcribe and analyze the data. Look for common themes, patterns, and insights that emerge from the discussion. Use this information to generate insights and recommendations that can be applied to the research question.

When to use Focus Group Method

The focus group method is typically used in the following situations:

Exploratory Research

When a researcher wants to explore a new or complex topic in-depth, focus groups can be used to generate ideas, opinions, and insights.

Product Development

Focus groups are often used to gather feedback from consumers about new products or product features to help identify potential areas for improvement.

Marketing Research

Focus groups can be used to test marketing concepts, messaging, or advertising campaigns to determine their effectiveness and appeal to different target audiences.

Customer Feedback

Focus groups can be used to gather feedback from customers about their experiences with a particular product or service, helping companies improve customer satisfaction and loyalty.

Public Policy Research

Focus groups can be used to gather public opinions and attitudes on social or political issues, helping policymakers make more informed decisions.

Examples of Focus Group

Here are some real-time examples of focus groups:

  • A tech company wants to improve the user experience of their mobile app. They conduct a focus group with a diverse group of users to gather feedback on the app’s design, functionality, and features. The focus group consists of 8 participants who are selected based on their age, gender, ethnicity, and level of experience with the app. During the session, a trained facilitator asks open-ended questions to encourage participants to share their thoughts and opinions on the app. The facilitator also observes the participants’ behavior and reactions to the app’s features. After the focus group, the data is analyzed to identify common themes and issues raised by the participants. The insights gathered from the focus group are used to inform improvements to the app’s design and functionality, with the goal of creating a more user-friendly and engaging experience for all users.
  • A car manufacturer wants to develop a new electric vehicle that appeals to a younger demographic. They conduct a focus group with millennials to gather their opinions on the design, features, and pricing of the vehicle.
  • A political campaign team wants to develop effective messaging for their candidate’s campaign. They conduct a focus group with voters to gather their opinions on key issues and identify the most persuasive arguments and messages.
  • A restaurant chain wants to develop a new menu that appeals to health-conscious customers. They conduct a focus group with fitness enthusiasts to gather their opinions on the types of food and drinks that they would like to see on the menu.
  • A healthcare organization wants to develop a new wellness program for their employees. They conduct a focus group with employees to gather their opinions on the types of programs, incentives, and support that would be most effective in promoting healthy behaviors.
  • A clothing retailer wants to develop a new line of sustainable and eco-friendly clothing. They conduct a focus group with environmentally conscious consumers to gather their opinions on the design, materials, and pricing of the clothing.

Purpose of Focus Groups

The key objectives of a focus group include:

Generating New Ideas and Insights

Focus groups are used to explore new or complex topics in-depth, generating new ideas and insights that may not have been previously considered.

Understanding Consumer Behavior

Focus groups can be used to gather information on consumer behavior, attitudes, and perceptions to inform marketing and product development strategies.

Testing Concepts and Ideas

Focus groups can be used to test marketing concepts, messaging, or product prototypes to determine their effectiveness and appeal to different target audiences.

Gathering Customer Feedback

Focus groups can be used to gather feedback from customers about their experiences with a product or service, helping organizations improve customer satisfaction and loyalty.

Informing Decision-Making

Focus groups can provide valuable insights to inform decision-making in a range of fields, including marketing, product development, and public policy.

Advantages of Focus Groups

The advantages of using focus groups are:

  • In-depth insights: Focus groups provide in-depth insights into the attitudes, opinions, and behaviors of a target audience on a specific topic, allowing researchers to gain a deeper understanding of the issues being explored.
  • Group dynamics: The group dynamics of focus groups can provide additional insights, as participants may build on each other’s ideas, share experiences, and debate different perspectives.
  • Efficient data collection: Focus groups are an efficient way to collect data from multiple individuals at the same time, making them a cost-effective method of research.
  • Flexibility: Focus groups can be adapted to suit a range of research objectives, from exploratory research to concept testing and customer feedback.
  • Real-time feedback: Focus groups provide real-time feedback on new products or concepts, allowing researchers to make immediate adjustments and improvements based on participant feedback.
  • Participant engagement: Focus groups can be a more engaging and interactive research method than surveys or other quantitative methods, as participants have the opportunity to express their opinions and interact with other participants.

Limitations of Focus Groups

While focus groups can provide valuable insights, they also have some limitations:

  • Small sample size: Focus groups typically involve a small number of participants, which may not be representative of the broader population being studied.
  • Group dynamics: While group dynamics can be an advantage of focus groups, they can also be a limitation, as dominant personalities may sway the discussion or participants may not feel comfortable expressing their true opinions.
  • Limited generalizability: Because focus groups involve a small sample size, the results may not be generalizable to the broader population.
  • Limited depth of responses: Because focus groups are time-limited, participants may not have the opportunity to fully explore or elaborate on their opinions or experiences.
  • Potential for bias: The facilitator of a focus group may inadvertently influence the discussion or the selection of participants may not be representative, leading to potential bias in the results.
  • Difficulty in analysis: The qualitative data collected in focus groups can be difficult to analyze, as it is often subjective and requires a skilled researcher to interpret and identify themes.

Characteristics of Focus Groups

  • Small group size: Focus groups typically involve a small number of participants, ranging from 6 to 12 people. This allows for a more in-depth and focused discussion.
  • Targeted participants: Participants in focus groups are selected based on specific criteria, such as age, gender, or experience with a particular product or service.
  • Facilitated discussion: A skilled facilitator leads the discussion, asking open-ended questions and encouraging participants to share their thoughts and experiences.
  • Interactive and conversational: Focus groups are interactive and conversational, with participants building on each other’s ideas and responding to one another’s opinions.
  • Qualitative data: The data collected in focus groups is qualitative, providing detailed insights into participants’ attitudes, opinions, and behaviors.
  • Non-threatening environment: Participants are encouraged to share their thoughts and experiences in a non-threatening and supportive environment.
  • Limited time frame: Focus groups are typically time-limited, lasting between 1 and 2 hours, to ensure that the discussion stays focused and productive.


Methods of data collection in qualitative research: interviews and focus groups

Faculty of Health, Sport and Science, University of Glamorgan, Pontypridd, CF37 1DL.

PMID: 18356873 | DOI: 10.1038/bdj.2008.192

This paper explores the most common methods of data collection used in qualitative research: interviews and focus groups. The paper examines each method in detail, focusing on how they work in practice, when their use is appropriate and what they can offer dentistry. Examples of empirical studies that have used interviews or focus groups are also provided.

Open access | Published: 02 April 2024

Facilitators and barriers of implementing end-of-life care volunteering in a hospital in five European countries: the iLIVE study

Berivan Yildiz 1, Agnes van der Heide 1, Misa Bakan 2, Grethe Skorpen Iversen 3, Dagny Faksvåg Haugen 3,4, Tamsin McGlinchey 5, Ruthmarijke Smeding 5, John Ellershaw 5, Claudia Fischer 6, Judit Simon 6, Eva Vibora-Martin 7, Inmaculada Ruiz-Torreras 7, Anne Goossensen 8 & the iLIVE consortium

BMC Palliative Care, volume 23, article number 88 (2024)


Abstract

End-of-life (EoL) care volunteers in hospitals are a novel approach to support patients and their close ones. The iLIVE Volunteer Study supported hospital volunteer coordinators from five European countries to design and implement an EoL care volunteer service on general wards in their hospitals. This study aimed to identify and explore barriers and facilitators to the implementation of EoL care volunteer services in the five hospitals.

Volunteer coordinators (VCs) from the Netherlands (NL), Norway (NO), Slovenia (SI), Spain (ES) and United Kingdom (UK) participated in a focus group interview and subsequent in-depth one-to-one interviews. A theory-inspired framework based on the five domains of the Consolidated Framework for Implementation Research (CFIR) was used for data collection and analysis. Results from the focus group were depicted in radar charts per hospital.

Barriers across all hospitals were the COVID-19 pandemic delaying the implementation process, and the lack of recognition of the added value of EoL care volunteers by hospital staff. Site-specific barriers were struggles with promoting the service in a highly structured setting with many stakeholders (NL), negative views among nurses on hospital volunteering (NL, NO), a lack of support from healthcare professionals and the management (SI, ES), and uncertainty about their role in implementation among VCs (ES). Site-specific facilitators were training of volunteers (NO, SI, NL), involving volunteers in promoting the service (NO), and education and awareness for healthcare professionals about the role and boundaries of volunteers (UK).

Establishing a comprehensive EoL care volunteer service for patients in non-specialist palliative care wards involves multiple considerations including training, creating awareness and ensuring management support. Implementation requires involvement of stakeholders in a way that enables medical EoL care and volunteering to co-exist. Further research is needed to explore how trust and equal partnerships between volunteers and professional staff can be built and sustained.

Trial registration

NCT04678310. Registered 21/12/2020.


Introduction

In recent years, end-of-life (EoL) care volunteering has become an important contribution to high-quality care for patients in their last phase of life [1]. EoL care volunteers have been shown to offer practical, emotional, social, and existential support in a way that improves the well-being of patients and their families [2, 3]. In almost every country in Europe, EoL care volunteers are actively engaged in hospices, which would struggle to exist without their contributions to providing high-quality care for dying patients [4]. Countries vary in the numbers and roles of volunteers, the tasks they perform and the developments in their organisations [5]. However, all embrace “being there” as the core concept of this unique source of community care [6, 7]. In some countries, volunteers have had a long involvement in EoL care volunteering, while in other countries these services have only recently started.

A relatively uncommon setting for EoL care volunteering, even in countries with a long history of volunteering, is the hospital, and specifically wards not specialised in palliative care. It has been suggested that EoL care volunteers enable patients to maintain their “social capital” during their stay in the hospital and keep the process of dying from being narrowed to a medical or solely professional context [8]. Studies have suggested that EoL care volunteer services have the potential to improve the experience of dying patients in the hospital and prevent loneliness, particularly for those without social networks [6, 9, 10]. Moreover, a recent systematic review showed that hospital palliative care volunteers were appreciated for providing psychosocial support, considered as complementary to, rather than replacing, the work of healthcare professionals [11].

Hospital settings have different characteristics compared to other settings where volunteers may support patients in their last phase of life such as home, community or stand-alone hospices or palliative care volunteering services. This may imply a different nature of volunteering or factors leading to successful integration in hospital care [ 11 ]. Implementing a comprehensive EoL care volunteer service for patients in hospital wards, where the context and care culture are not per se focused on palliative care, entails the integration of various aspects including training, staffing and directing. So far, studies have predominantly focused on examining the experiences of volunteers, providing insight on the volunteer training requirements, and the difficulties and benefits of fulfilling the role of a volunteer in the hospital setting [ 12 , 13 , 14 ]. No studies are available about the experience of implementing an EoL care volunteer service in the hospital setting, from the perspective of those coordinating the implementation.

As part of the European Union Horizon 2020 funded iLIVE project [ 15 ], the iLIVE Volunteer study developed a European Core Curriculum (ECC) for EoL care volunteers in the hospital setting [ 16 ]. The curriculum includes specific attention to ensuring end-of-life-care volunteers are embedded within the organisation, including understanding the specific needs of wards within the hospital where the volunteers will be supporting dying patients [ 16 ]. The curriculum was used to train volunteers and establish an EoL care volunteer service for hospitalized patients in five hospitals (one hospital in each of the following countries: The Netherlands, Norway, Slovenia, Spain and United Kingdom). To better understand how an EoL care volunteer service can be implemented in the hospital setting, this study aims to identify barriers and facilitators as experienced during the implementation of EoL care volunteering in hospitals in the five countries.

Study design and setting

The five hospitals were part of the iLIVE Volunteer Study (Trial registration number NCT04678310) in which research staff had online international meetings on a regular basis. A timetable of the project with timeslots of data collection and data analysis is included in the supplementary file. In each of the five participating hospitals, a site-specific EoL care volunteer service was developed and implemented. The number of wards in which the volunteer services were offered ranged from 1 to 8. The volunteers were active during different time slots on working days and during weekends (Table 1).

Participants and procedures

Volunteer coordinators (VCs) were assigned to lead the development and implementation of the EoL care volunteer service in each hospital (Table 2). In the Netherlands, Norway and Spain, two VCs were appointed to coordinate the service in their hospitals; in the UK and Slovenia, one VC was appointed. All VCs followed a three-day “Train-the-Trainer” course in the UK prior to developing and implementing their volunteer services. The aim of this course was to introduce the ECC and provide information and skills for the development and implementation of the EoL care services, including a focus on the development of volunteer training.

VCs were invited to participate in a focus group interview and an in-depth one-to-one interview about their experiences with implementation of the volunteer services. All VCs were female. VCs were contacted by e-mail and informed about the aims and procedure of both the focus group interview and the one-to-one in-depth interview. Since the focus group interview took place as part of a project meeting, the researchers had also informed the VCs about practical and content-related details regarding the focus group interview. Most VCs had personally met the researchers before as part of the international project. All VCs explicitly provided verbal and written informed consent prior to participating in both the focus group interview and the one-to-one in-depth interviews. The Consolidated criteria for reporting qualitative research (COREQ) checklist was used to report the necessary elements of the methods, analysis and results sections.

Data collection

The process of development of the EoL care volunteer services started in early 2020. To collect data, the case study methodology according to Yin was applied, using multiple methods and data sources [17]. This case study approach makes it possible to compare different phenomena in complex and context-dependent situations [18]. By using multiple data sources, the goal was to increase the validity of the research findings. This method also seemed useful given the international character of the study, partly compensating for the researchers' inability to be on-site. The first type of data collected was descriptive information about the volunteer services, using preformed documents that were shared with the researchers to inform them about the status of the service development in each country at regular intervals. Logbooks were used to report decisions that were taken regarding the structure of the service. The second type of data collection was the focus group interview [19] and the one-to-one in-depth interviews with VCs [20, 21]. An overview of data collection methods and analysis is provided in Fig. 1.

Figure 1. Overview of data collection methods and data analysis

After careful comparison of implementation theories, the Consolidated Framework for Implementation Research (CFIR) was used as a frame to inspire data collection and analysis [22, 23]. The CFIR identifies factors that influence an intervention’s implementation and includes five major domains, each consisting of a number of constructs: Intervention, Inner setting, Outer setting, Characteristics of individuals, and Implementation process (Table 3).

A semi-structured interview guide was developed using the interview guide tool available on the CFIR website [24]. The CFIR interview guide was reviewed to identify and select questions relevant to the implementation of the volunteer services. A pilot interview was then conducted with one VC using the CFIR interview guide with selected questions. Following this pilot interview, the interview guide was adapted to include self-developed questions to facilitate the interviews. The interview guide is included as a supplementary file.

Focus group interview

The face-to-face focus group interview was scheduled in May 2022 as part of a project meeting in a research and education center in Malaga, Spain. One VC from each site and two VCs from the Dutch site participated. At the beginning of the focus group interview, the moderator explained the aim and procedure of the focus group interview and in-depth interviews. She also explained the role of the moderator and the researcher who was present to take field notes during the focus group interview.

During the focus group interview, VCs were asked to list the top three factors that helped or hindered implementation. Each VC then shared their list of factors and the group elaborated on topics that were deemed important. Discussion was facilitated by asking questions from the CFIR interview guide and whether other VCs had similar or different experiences. At the end of the focus group, all VCs were asked to fill in a paper sheet that included a radar chart covering the five domains of the CFIR extended with an additional domain regarding the COVID-19 pandemic. The meaning of the domains and related constructs as included in the interview guide was explained. The VCs were asked to rate to what extent each domain of the CFIR influenced the implementation. The VCs could choose a score between -5 and 5 to rate each domain on the radar chart. A positive score (1 to 5) indicated a positive influence of a specific domain on the implementation of the EoL care volunteer service, while a negative score (-1 to -5) indicated a negative influence of a specific domain on the implementation. For example, a score of 5 indicated an extremely facilitating influence of that domain on the implementation process. The VCs from the Dutch site completed one radar chart together. The duration of the interview was 1 h. The interview was audio recorded and transcribed verbatim.

Both the moderator (AG, PhD) and the researcher (BY, MSc) who took field notes were female. AG is a professor of care ethics and BY is a PhD candidate. Both researchers are trained in qualitative research.

In-depth interviews

To gain deeper insight into the facilitators and barriers for the implementation processes, one semi-structured in-depth interview per site was conducted with either one (the Netherlands, Spain, Slovenia, United Kingdom) or two (Norway) VCs. The in-depth interviews took place two to four months after the focus group interview. During the in-depth interviews, VCs were asked to give a detailed description of the site-specific experiences visualized in the rating of each CFIR domain on the radar chart. Experiences related to their scores on the chart were explored verbally. Questions from the CFIR interview guide were asked to gain an in-depth understanding of how relevant constructs under the domains influenced the implementation. The mean duration of the interviews was 55 minutes (range: 35 to 67 minutes). All in-depth interviews took place via Zoom and were conducted by a female researcher (AG). The interviews were audio recorded and transcribed verbatim.

Data analysis

Data from the preformed documents and logbooks were analysed within and between sites to get insight into the characteristics of each EoL care volunteer service. Analyses of the focus group and in-depth interview data started by studying the radar charts. Then, the transcripts of the in-depth interviews and the focus group were read to get familiar with the data, focusing on facilitators and barriers for implementation. Data saturation was discussed after the focus group interview and the five in-depth interviews. The domains of the CFIR were used as the theoretically inspired framework to conduct framework analysis [ 25 , 26 ]. Framework analysis was used because of its structured approach to summarize, compare and contrast data from the different contexts of the sites in relation to the five dimensions of the CFIR model [ 25 ].

Then, data of both the focus group interview and in-depth interviews were summarized under each domain for each EoL care volunteer service. Within this framework, facilitators and barriers were specified, and reflection on similarities and differences between sites took place. This resulted in a table per site illustrating the identified facilitators and barriers under each CFIR domain. This table includes facilitators and barriers identified during both the focus group interview and the five in-depth interviews.

Data for which it was not directly obvious to which domain of the theoretical framework they belonged were placed in a separate column. After discussion among the authors about which domain of the theoretical framework, if any, would be most appropriate, these data were integrated into the findings as well, by adding them to the table of facilitators and barriers per site.

The first author (BY) performed the framework analysis and the last author (AG) evaluated twice whether the summaries were adequately answering the main research question under each CFIR domain [ 25 ]. The last author provided feedback in the analysis document, and the content and focus of the first author’s summaries were discussed in meetings. When AG identified summaries that did not directly answer the research question, such as descriptions of processes and contexts, suggestions were provided to identify and formulate facilitators and barriers as well. In this way, facilitators and barriers were identified together with descriptions of the different contexts and processes in which the facilitators and barriers were experienced by the VCs. This ensured the quality of the summaries during the analysis process. A member check about the written results of the analysis was performed with the VCs for comments or corrections.
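As an illustration of the per-site table of facilitators and barriers described above, the following minimal Python/pandas sketch builds such a table. The entries are shortened, hypothetical paraphrases of findings reported in the Results below, included only to show the structure; the actual analysis was qualitative framework analysis, not a scripted procedure.

```python
# Minimal sketch of a per-site table of facilitators and barriers per CFIR domain.
# The rows are shortened, illustrative paraphrases, not the study's actual coding.
import pandas as pd

rows = [
    {"site": "A (NL)", "cfir_domain": "Inner setting",
     "type": "barrier", "summary": "Hierarchical structure; hard to reach ward staff"},
    {"site": "A (NL)", "cfir_domain": "Implementation process",
     "type": "facilitator", "summary": "Learning, reflecting and adapting the service"},
    {"site": "B (NO)", "cfir_domain": "Implementation process",
     "type": "facilitator", "summary": "Volunteers involved in decision-making"},
]

table = pd.DataFrame(rows)

# Print one sub-table per site, mirroring the structure used in the analysis.
for site, site_table in table.groupby("site"):
    print(f"\nSite {site}")
    print(site_table[["cfir_domain", "type", "summary"]].to_string(index=False))
```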

Results

Generic as well as site-specific barriers and facilitators were identified regarding the implementation in the five hospitals. In the following account of the results, pandemic-related aspects, which posed a similarly significant barrier across all sites, are distinguished from site-specific barriers and facilitators, which provide insight into the different contexts in each site. In addition, an overview of all site-specific facilitators and barriers is provided in the supplementary file. Figure 2 shows the scores for each CFIR domain per site.

Figure 2. Consolidated Framework for Implementation Research (CFIR) domain scores depicted in radar charts per site

Pandemic-related aspects

In all hospitals, the COVID-19 pandemic and the associated measures were unpredictable and experienced as an enormous barrier to implementing the EoL care volunteer service (Fig. 1, Supplementary file). One main hindering aspect of the pandemic, across all hospitals, was that volunteers were not allowed access to the hospital for a long period of time because of governmental regulations. It was important to keep the volunteers motivated, as the restrictions led to decreased motivation among volunteers who had completed the training and were ready to support patients. Consequently, some volunteers from the site in the UK decided to leave the service. In the site in Spain, possibilities for e-volunteering (e.g. telephone contact) were explored during this period.

Another barrier was that VCs had difficulties reaching healthcare professionals who were under high pressure. As a result, attempts to spread the word about the EoL care volunteer service were delayed or cancelled. Even in between waves of the pandemic, it was hard to promote the service due to the staff feeling exhausted and searching for balance in their departments. Consequently, the services had to be carefully introduced.

“Also during COVID, healthcare providers from one ward were displaced to another and there were a lot of mixing and stress about this. Then after each wave of COVID back to their original wards, it was stressful for them. I think this was also when they saw it [the volunteer service], they were like “oh ok, one thing more”. Then we slowly started to come one time, then another time. We started to make these promotions with postcards and with meetings. Then we repeated this meeting with managers and repeated this meeting on the ward. So it was a process.” (Coordinator 1, Site C, Slovenia, in-depth interview).

Site-specific facilitators and barriers

Site A (The Netherlands)

The implementation in the Dutch site was mainly facilitated by positive experiences in the CFIR dimension of process (score 4) and hindered by barriers in the CFIR dimensions of the inner setting (score 2) and characteristics of individuals (score -3). The implementation process was described as one of learning, reflecting and adapting. The VCs strived to acquire knowledge through their conversations and meetings with others. This facilitated the structuring of the service and provided them with new strategies or led to adapting existing strategies. In addition, a facilitator was that healthcare staff, patients and families acknowledged the added value of the service. However, the opinion of some nurses who were skeptical about the service was perceived as a barrier, particularly among nurses with more years of work experience compared to nurses who graduated more recently:

“The service was easy to tell and then we were waiting, we hear that patients around us are dying and could easily have had volunteers supporting them. Until we learned that we also had to deal with the opinion of the nurse. That was new to us. So if you [the nurses] think: “yes, but it's my patient and I'm here anyway”, they will not call us if they think that way. While they did say what a nice service, how great that this exists, we also immediately resolved by saying that the service is not only for a patient who is alone, but also for a patient with family. So keep it open, just ask us. And if they had an opinion, then I just noticed: “you're not going to call me”.” (Coordinator A1, The Netherlands, in-depth interview)

Although the existence of a general volunteer service in the hospital was viewed as a facilitator in the CFIR dimension of the inner setting for implementation and recruitment of volunteers, the hierarchical structure within the hospital was experienced as a huge barrier. It was a challenge for the VCs to reach healthcare professionals on the wards to disseminate information about the service. Moreover, the working environment in the hospital, characterized by shortages of personnel and a large number of flex workers (i.e. nurses) deployed on different wards depending on the needs of the ward, presented an additional barrier. The VCs therefore had to deal with nurses who were not fully engaged with a patient due to their brief presence on each ward.

“A coordinator cannot get that deep into care. You need an intermediary, in this case, the chaplains. Or a contact person who has been at family meetings, for example. But that [the route] takes a huge amount of time because the route is user-unfriendly: you have to go through a lot of layers, on every part of the day you have to deal with different people, this also differs per ward. Different people all the time, means that you have to explain things all over again. Also, if they have not passed it on to each other properly in the patient file, you get time pressure on the patient. Ideally, you should get rid of some of the layers.” (Coordinator A1, the Netherlands, in-depth interview)

A barrier regarding characteristics of individuals was adapting to the diverse opinions held by various stakeholders within the hospital setting. Specifically, the VCs encountered difficulties during conversations about the volunteer service, which tended to move in different directions depending on whether they were held in a group or on a one-to-one basis. Additionally, the VCs had to deal with the views of individuals who joined the project team at a later stage, which impeded the implementation process. With regard to the characteristics of the intervention (score 5), the volunteer training and supervision sessions for volunteers were believed to foster a sense of friendship among the volunteers, which facilitated the continuity of the service.

Site B (Norway)

The implementation in the Norwegian site was mainly facilitated by the climate in the inner setting (score 3), positive beliefs about the volunteer service (score 5), and characteristics of individuals (score 4). One main facilitator was the perceived added value of the volunteer service for all stakeholders: the patients, relatives, nurses and physicians, and the hospital management (CFIR domain outer setting, score 2). The VCs considered the volunteer service to be beneficial for all those involved. Although some skeptical nurses worried about volunteers taking away their role, nurses who had established a closer relationship with their patients mostly asked for a volunteer:

“The third part I think is the nurses and the health care personnel working in the clinic and on the ward. Because it is a relief for them too. And I, what I see in this part is that when they call me to ask if a volunteer can be there, it is usually for a patient that they know very well, they have a relation to the patient and that goes to the feelings of the nurse. He or she they feel that “I know this patient, she is dying; I have a relation with her, I don’t want her to die alone…but I don’t have the time to sit there” and then they call us. I think that is an added value for this.” (Coordinator B1, Norway, in-depth interview)

The involvement of volunteers during the implementation process was one of the most important facilitators. The implementation was described as a democratic process in which volunteers were valued for their work and had a say in decision-making during the implementation of the service.

“[…] it is a democratic process, they have been into every decision, and they have discussed every topic around how to fill the role of a volunteer in our hospital so they own the project on the whole. They think about it, they read the information, they discuss how to do this, we change, if they want something changed and it seems like a good thing to do, we change. It is not about anyone’s prestige, it is about doing a good job and they decide how to do a good job in this. (Coordinator B1, Norway, in-depth interview)

A facilitator related to the nature of the volunteer service was that, in addition to the VCs, healthcare staff also spread the word about the existence of a hospital volunteer service they found valuable.

“That said, I would say that people who know about this, the department or clinics knowing…they are very positive. Yes, so we have not met anyone saying, “Oh no what’s this, we don’t want this”. They are very positive. People are calling me from unexpected clinics and ask me “Is it true that you have some volunteers who can do this and can do that.” For me that is very positive and kind of self-advertisement. They hear about it from other people.” (Coordinator B2, Norway, in-depth interview)

Site C (Slovenia)

In the Slovenian site, most barriers were identified in the inner setting (score -4). Typical for the Slovenian setting was that patients were not familiar with volunteering. The entire concept of EoL care volunteering had to be introduced in the hospital, requiring considerable effort from the VC and staff. The VC experienced difficulties in spreading information to the healthcare professionals. Another barrier was the closure of the palliative care ward due to COVID-19 during implementation. Because of this, the volunteer service had to be adapted in order to be offered at other wards. Nevertheless, regular meetings with different groups of healthcare professionals were identified as a facilitator in this process. In addition, the VC experienced a feeling of trust among healthcare professionals during the hectic periods of the pandemic:

“And I think during these first years we did a lot of work during the COVID and the healthcare workers got to know us and I think there was an increase in trust also to me and [colleague] and that’s why the volunteer intervention was also more successful because they listened to what we want to say, what we think, what we suggest, and we kind of try new things. It is not easy here to implement something new because patients get many busy schedules. There are many things, which are going on here is a bit hectic, or it is just normal to be hectic with many things happening. “ (Coordinator C1, Slovenia, in-depth interview)

A facilitator in the outer setting (score 0) was that the volunteer service was considered to be of added value for patients, as the patient population consisted of patients with pulmonary diseases. The VC believed that volunteers could offer support by calming patients, but was also aware that patients were vulnerable to infections brought in by volunteers. A facilitator regarding characteristics of the intervention (score 3) was the quality of the training, which led to increased knowledge and enthusiasm among the group of volunteers. In addition, they felt connected with each other and were motivated, despite the volunteer training sessions taking place online:

“ And I think they have clear motivation why they want to help people at the end of life and this was I don’t know connected together. And they were here, they were available even if they cannot come here or they do not want to come here because of Covid for example, different reasons, but they were mentally here with us. I think this was also important. As a coordinator, it was a good feeling knowing you had someone you could call. They are not just like dropping out but they are here.” (Coordinator C1, Slovenia, in-depth interview )

With regard to characteristics of individuals (score 5), the VC experienced a lack of support and appreciation towards the volunteer service from the management level. However, the support received from the head of nurses and physicians turned out to be a facilitator. In addition, having a retired nurse among the volunteers was a facilitator, as she emphasized the importance of such a service to other volunteers and healthcare professionals. Moreover, because of her experience with patients with pulmonary problems she could advise on volunteer tasks that could benefit this specific patient population, such as assistance to get fresh air.

Site D (Spain)

The implementation of EoL care volunteers in the hospital in Spain was coordinated by two coordinators from the volunteer department of a local hospice. The VCs described the implementation process as a learning process (score 4). Introducing themselves and the volunteer service to the staff in the hospital appeared to be challenging. Different strategies were needed to facilitate communication between volunteers, the volunteer department of the local hospice and the hospital. In addition, barriers in the inner setting (score 1) were the lack of support from the management level of the hospital, and uncertainty about their role in the development and implementation of the service, which delayed the implementation.

“It was like, you know as [name of hospice organisation] we are a very well-known organisation and it was.. Sometimes it was really difficult to get to a hospital when you are a well-known organisation and.. It was like sometimes the hospital was feeling like we were going to teach them. It was like if we were the best and they were the worst, something strange of the head of the organisation. That part was really difficult.” (Coordinator D1, Spain, in-depth interview)

A barrier in the outer setting was unfamiliarity with the concept of EoL care volunteering among healthcare staff and patients (score -1). However, the VC undertook activities to engage the wider public and to introduce the volunteer service, for example through the use of social media. This also facilitated recruitment of volunteers.

Although the staff in the hospital had a positive view on EoL care volunteering, a barrier was that staff were not available all the time, and volunteers found it challenging to communicate with them about patients:

“Sometimes we felt that they [hospital staff] wanted it [the volunteers] not all the time. Only in the time it was useful for them. Let me see if I can explain myself. Our volunteers go there in the afternoon from 5pm to 7pm. So the volunteers started maybe 20 minutes early to get into the hospital [..] But sometimes they felt that was not the best moment for the staff because it was a busy afternoon because they did not have enough time. It was one of the difficult ones because some volunteers were more open or who have more tools they could manage to get more information or let them go and get more information later.” (Coordinator D1, Spain, in-depth interview)

Site E (United Kingdom)

The inner setting (score 3) in the hospital in the UK was characterized by a climate in which volunteering is highly valued and welcomed. EoL care volunteers had already been active in the hospital for a longer period. The support from the staff at the palliative care ward towards volunteers turned out to be an important facilitator. That staff valued the volunteers as part of the team was important in the implementation: volunteers kept coming back, which helped patients to see the value of the service. At wards other than the palliative care ward, raising awareness about the service was a challenge, as staff there were not familiar with this type of volunteering. It therefore took time for healthcare staff to become aware of the volunteer service’s availability to support their patients.

With regard to the characteristics of the service (score 4), there were strict views on the role and boundaries of what a volunteer can or cannot do in the context of end of life. Education and awareness about this among volunteers and staff were identified as facilitators. A good working relationship between the VC, volunteers and staff at the ward was an important facilitator in the CFIR domain of characteristics of individuals:

“I think something that’s really important that has been positive is the relation between the volunteer service staff and the staff within the palliative care department and the staff in the ward where the volunteers work. Having good communication and good working relationship is really important. And I would say we definitely have that and without that I think that would hinder the implementation and the service as a whole, but we work really well together and can go to each other if we have questions or concerns. […] I think that’s really important, and everyone is aware of what those volunteer role boundaries are and the purpose of the volunteer and what they need support wise in order to succeed in that role.” (Coordinator E1, United Kingdom, in-depth interview)

Discussion

This study investigated the facilitators and barriers to the implementation of a novel form of EoL care volunteering in inpatient hospital settings. Pilots in five hospitals in five European countries were involved. Using the CFIR model, both generic and site-specific barriers and facilitators regarding implementation were identified. Similar influences across sites were the COVID-19 pandemic delaying the implementation process, and the necessity to raise awareness about the new volunteer service due to a lack of recognition among hospital staff of the added value of EoL care volunteers. Site-specific facilitators influencing the implementation were the presence of a general volunteer service in the hospital, the quality of the volunteer training, and involving volunteers themselves in promoting the service. Education and awareness for healthcare professionals about the role, conceptualization and added value of, and boundaries in interacting with, volunteers were also identified as facilitators. Site-specific barriers were struggles with promoting the service in a highly structured setting with many stakeholders, unfamiliarity with the concept of EoL care volunteering in the hospital, and negative views among nurses about this source of care in the hospital. Moreover, a lack of support from healthcare professionals and the management, and uncertainty among VCs regarding their role during implementation, were also perceived as barriers.

Complexity of implementing community care

Within the literature, volunteering is considered a unique source of community care, in addition to professional and family care at the end of life [6, 9, 27]. However, by incorporating community care (EoL care volunteering) into the highly specialised environment of a hospital, particular challenges and considerations can be expected due to clashes between cultures of care. In all hospitals, implementation of such a service appeared to be a complex and time-consuming process. This was partly caused by the COVID-19 pandemic and the measures restricting access of volunteers to hospitals [28], but also by other barriers related to the inner and outer organisational context, the intervention itself, and the characteristics of the individuals involved.

In this sense, the findings of the present study fit into the theoretical model of dissemination and implementation of healthcare innovations developed by Greenhalgh and colleagues [ 29 ], which served as the foundation for the development of the CFIR model. According to Greenhalgh et al., implementation is viewed as a complex process organised under certain components such as communication and system readiness, while interactions of these components occur within the social, political and organisational context. Using the CFIR model, it was possible to identify why VCs experienced certain barriers and facilitators during the implementation process, such as nurses expressing positivity about the service while being convinced that caring for patients was their own job.

Inner setting

Although it is challenging to evaluate the impact of socio-cultural site-specific aspects on the implementation of the five services, unfamiliarity with EoL care volunteering in the hospital and negative views among nurses about volunteers required serious time and communicative efforts from all VCs at all sites. In site A (the Netherlands), this was even more complicated since the VCs had no direct links to clinical staff and thus encountered challenges in reaching staff at various levels of the clinical wards. In addition, due to the working culture among healthcare professionals (i.e. flex workers) in this hospital, the VCs had to deal with nurses who had little information about patients and with changing staff, as also demonstrated in another study about experiences of volunteers [12]. In contrast, it was found that nurses in the Norwegian site who had established a good relationship over time with their patients mostly asked for a volunteer. These findings indicate the importance of analysing the interaction of implementation components with social, political and organisational contexts, including the working conditions of available staff, for understanding the differences in utilization of the volunteer service in different hospital settings. A clear conceptualization of “being there” may prevent a medical, nursing or task-oriented understanding of the contribution of volunteers to care in hospital contexts [6] and may increase constructive collaboration with professionals.

It should also be noted that a good patient-nurse relationship may introduce a risk of selection bias and unequal access to the volunteer service. For instance, patients who experience difficulties establishing a relationship with nurses, for example due to language barriers, may be less likely to be supported by a volunteer than those who have little or no difficulty establishing relationships. Themed sessions about equity or unconscious bias for volunteer coordinators and hospital staff are recommended to ensure equal access to volunteer support. In addition, recruiting a diverse group of volunteers may help minimize the risk of selection bias and unequal access [30].

One finding was that among all sites, only the Dutch VCs experienced challenges during conversations with individuals who had negative opinions about the volunteer service. This may be due to the prevailing idea in the Netherlands that death should not take place in the hospital, but at home or in community places such as a hospice. This meant that patients had a short length of stay in the hospital and therefore there was a narrow window of time to offer the volunteer service. However, it has been suggested that bringing community care to the hospital not only helps to fill the social gaps when a patient lacks visiting family, but might also lead to better transfers of patients back to a hospice or to their home [ 31 ]. In addition, even for short periods of admission to the hospital, volunteers in healthcare may have the capacity to improve patients’ experiences of care [ 32 ]. This may be even more important in the light of a growing population with palliative care needs [ 33 ].

Involvement of important individuals

The findings of this study indicate that implementation of EoL care volunteering in the hospital setting requires involvement of stakeholders in a way that enables medical EoL care and volunteering to co-exist. On the one hand, it is important to address the views of nurses about the role and boundaries of volunteers while emphasizing that volunteers do not replace the role of paid staff [11]. On the other hand, volunteers should not only be informed, guided and enabled to perform their role [34], but also involved in decision-making during implementation. In our study, an organisational facilitator was that volunteers in site B (Norway) were involved from the beginning in decision-making about approaches and how to run the service. This approach may imply that EoL care volunteers can be viewed as equal members of the healthcare team [11]. A previous study has suggested that, despite volunteers being regularly informed on how patient care was organised, they still had no decision-making power and were not regularly invited to contribute to how patient care was organised [35]. It is recommended to further explore how trust and equal partnerships between volunteers and paid staff can be built and sustained [1]. These factors are modifiable and should therefore be considered in order to improve EoL care volunteering in hospital settings [36].

Strengths and limitations

This study has several strengths. To our knowledge, this is one of the first studies to present findings on implementation aspects of EoL care volunteering in the hospital setting. Previous studies have mainly focused on experiences of volunteers, providing insight on the training needs of volunteers, and the difficulties and benefits of fulfilling the role of a volunteer in the hospital setting [ 12 , 13 , 37 ]. Another strength of our study is that the study group was able to collect qualitative data in an international context. In addition, collection of data was done by combining different methods such as a focus group, in-depth one-to-one interviews and data from preformed documents and logbooks.

A limitation of this study is that, due to the international nature of the project, it was not feasible to conduct ethnographic research at the sites. Observation of (non)verbal interactions may have provided more in-depth knowledge about the implementation process. However, the CFIR is based on relevant implementation theories from a variety of disciplines [22] and offered a clear structure for data collection and analysis. Further research is needed to investigate how innovations involving EoL volunteering should be adapted to the context of the hospital, especially in light of the trend that many people die in hospitals in middle- and high-income countries [38]. Another limitation is that the interviews were conducted with VCs who were in varying stages of implementation due to the impact of the pandemic. Consequently, the complete range of experiences of those who had started the implementation of the volunteer service shortly before the interviews took place may not have been fully captured. Therefore, there may be additional facilitators and barriers that are not presented in the findings. Nevertheless, this study provides insights into the factors that contribute to or hinder the implementation of EoL care volunteer services in hospitals, highlighting areas for further investigation.

One limitation of the study may be related to the potential bias resulting from the VCs being part of an international project. On the one hand, it is plausible that the VCs may have been particularly motivated, leading to positive attributions of feelings and experiences regarding the process. On the other hand, they may have felt pressure to aim for a successful implementation of the service, potentially leading to negative attributions of meaning to the implementation process. It is possible that VCs have overlooked certain barriers or facilitators. However, it is important to note that a substantial number of both facilitators and barriers across all dimensions of the CFIR framework was identified. Therefore, it is not expected that this has affected the findings.

Recommendations

Based on the findings of this study, it is recommended to increase awareness and provide education among healthcare professionals regarding the role and benefits of EoL care volunteers in the hospital setting. This can be achieved through training programs addressing the conceptual core of EoL care volunteering, organised in collaboration between VCs and healthcare professionals in hospitals. Regular communication about the value of EoL care volunteering in the hospital, as well as research into its cost-effectiveness, is also needed [11]. In addition, further research should explore effective strategies for promoting EoL care volunteering in hospitals and understanding the cultural and contextual factors that influence the implementation of such services. Such research could involve a multi-stakeholder approach to gain insights from healthcare professionals, from the management level to frontline staff, as well as volunteers, patients, and their families.

Availability of data and materials

Possibilities for sharing data can be discussed upon request, by contacting the corresponding author (BY).

Abbreviations

CFIR: The Consolidated Framework for Implementation Research

VC: Volunteer coordinator

EoL: End of life

NL: The Netherlands

UK: United Kingdom

Vanderstichelen S. Palliative care volunteering: Pressing challenges in research. London: SAGE Publications Sage UK; 2022. p. 564–6.


Andersson B, Öhlén J. Being a hospice volunteer. Palliat Med. 2005;19(8):602–9.


Wilson DM, Justice C, Thomas R, Sheps S, MacAdam M, Brown M. End-of-life care volunteers: a systematic review of the literature. Health Serv Manage Res. 2005;18(4):244–57.

Scott R. “We cannot do it without you”-the impact of volunteers in UK hospices. Eur J Palliat Care. 2015;22(2):80–3.

Goossensen A. Hospice and palliative care volunteering in the Netherlands. Practices of being there. Palliat Med Pract. 2018;12(4):193–7.


Goossensen A, Somsen J, Scott R, Pelttari L. Defining volunteering in hospice and palliative care in Europe: an EAPC white paper. Eur J Palliat Care. 2016;23(4):184–91.

Swanson KM. Empirical development of a middle range theory of caring. Nurs Res. 1991;40(3):161–5.


McKinnon MM. The participation of volunteers in contemporary palliative care. Australian J Adv Nurs. 2002;19(4):38–44.

Scott R, Goossensen A, Payne S, Pelttari L. What it means to be a palliative care volunteer in eight European countries: a qualitative analysis of accounts of volunteering. Scand J Caring Sci. 2021;35(1):170–7.

Candy B, France R, Low J, Sampson L. Does involving volunteers in the provision of palliative care make a difference to patient and family wellbeing? A systematic review of quantitative and qualitative evidence. Int J Nurs Stud. 2015;52(3):756–68.

Bloomer MJ, Walshe C. ‘It’s not what they were expecting’: A systematic review and narrative synthesis of the role and experience of the hospital palliative care volunteer. Palliat Med. 2020;34(5):589–604.


Delaloye S, Escher M, Luthy C, Piguet V, Dayer P, Cedraschi C. Volunteers trained in palliative care at the hospital: an original and dynamic resource. Palliat Support Care. 2015;13(3):601–7.

Brighton LJ, Koffman J, Robinson V, Khan SA, George R, Burman R, et al. ‘End of life could be on any ward really’: A qualitative study of hospital volunteers’ end-of-life care training needs and learning preferences. Palliat Med. 2017;31(9):842–52.

Germain A, Nolan K, Doyle R, Mason S, Gambles M, Chen H, et al. The use of reflective diaries in end of life training programmes: a study exploring the impact of self-reflection on the participants in a volunteer training programme. BMC Palliat Care. 2016;15(1):28.

Berivan Y, Simon A, Misa B, Pilar B-F, Michael B, Mark B, et al. Live well, die well – an international cohort study on experiences, concerns and preferences of patients in the last phase of life: the research protocol of the iLIVE study. BMJ Open. 2022;12(8):e057229.

McGlinchey T, Mason SR, Smeding R, Goosensen A, Ruiz-Torreras I, Haugen DF, et al. ILIVE Project Volunteer study. Developing international consensus for a European Core Curriculum for hospital end-of-life-care volunteer services, to train volunteers to support patients in the last weeks of life: a Delphi study. Palliat Med. 2022;36(4):652–70.

Yin RK. Case study research: design and methods. London: Sage Publications; 1994.

Yin RK. Case study research: design and methods. 3rd ed. Thousand Oaks: Sage Publications; 2003.

Rabiee F. Focus-group interview and data analysis. Proc Nutr Soc. 2004;63(4):655–60.

Barriball KL, While A. Collecting data using a semi-structured interview: a discussion paper. J Adv Nurs. 1994;19(2):328–35.


Brinkmann S. Unstructured and semi-structured interviewing. The Oxford handbook of qualitative research. 2014;2:277-99

Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4(1):1–15.

Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10(1):53.

Consolidated Framework for Implementation Research website. Available from: https://cfirguide.org/guide/app/#/ (accessed 10 May 2021).

Furber C. Framework analysis: a method for analysing qualitative data. Afr J Midwifery Womens Health. 2010;4(2):97–100.

Ritchie J, Lewis J, Nicholls CM, Ormston R. Qualitative research practice: A guide for social science students and researchers. Sage; 2013.

Burbeck R, Candy B, Low J, Rees R. Understanding the role of the volunteer in specialist palliative care: a systematic review and thematic synthesis of qualitative studies. BMC Palliat Care. 2014;13(1):3.

Walshe C, Pawłowski L, Shedel S, Vanderstichelen S, Bloomer MJ, Goossensen A, et al. Understanding the role and deployment of volunteers within specialist palliative care services and organisations as they have adjusted to the COVID-19 pandemic: a multi-national EAPC volunteer taskforce survey. Palliat Med. 2023;37(2):203–14.

Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. 2004;82(4):581–629.

Reese DJ, Melton E, Ciaravino K. Programmatic barriers to providing culturally competent end-of-life care. Am J Hospice Palliat Med®. 2004;21(5):357–64.

Vanderstichelen S, Cohen J, Van Wesemael Y, Deliens L, Chambaere K. The liminal space palliative care volunteers occupy and their roles within it: a qualitative study. BMJ Support Palliat Care. 2020;10(3):e28.

Naylor C, Mundle C, Weaks L, Buck D. Volunteering in health and care: securing a sustainable future. 2013.

Etkind SN, Bone AE, Gomes B, Lovell N, Evans CJ, Higginson IJ, et al. How many people will need palliative care in 2040? Past trends, future projections and implications for services. BMC Med. 2017;15(1):1–10.

Bird S, Bruen G, Mayland C, Maden M, Gent M, Dilnot B, et al. Using volunteers to support end-of-life care. Nurs Times. 2016;112(14):12–4.

PubMed   Google Scholar  

Vanderstichelen S, Cohen J, Van Wesemael Y, Deliens L, Chambaere K. Volunteer involvement in the organisation of palliative care: a survey study of the healthcare system in Flanders and Dutch-speaking Brussels, Belgium. Health Soc Care Community. 2019;27(2):459–71.

Nesbit R, Christensen RK, Brudney JL. The limits and possibilities of volunteering: A framework for explaining the scope of volunteer involvement in public and nonprofit organizations. Public Adm Rev. 2018;78(4):502–13.

Guirguis-Younger M, Grafanaki S. Narrative accounts of volunteers in palliative care settings. Am J Hospice Palliat Med®. 2008;25(1):16–23.

Goldsbury DE, O’Connell DL, Girgis A, Wilkinson A, Phillips JL, Davidson PM, et al. Acute hospital-based services used by adults during the last year of life in New South Wales, Australia: a population-based retrospective cohort study. BMC Health Serv Res. 2015;15:1–14.

Download references

Acknowledgements

The authors wish to thank the volunteer coordinators for their cooperation in this study.

iLIVE consortium

Simon Allan 9 , Pilar Barnestein-Fonseca 7,10 , Mark Boughey 11 , Andri Christen 12 , Nora Lüthi 12 , Martina Egloff 12 , Steffen Eychmüller 12 , Sofia C Zambrano 12,29 , Gustavo G De Simone 13 , Eline E C M Elsten 1,14 , Eric C T Geijteman 1,14 , Iris Pot 14 , Carin C D van der Rijt 14 , Carl Johan Fürst 15,16 , Birgit H Rasmussen 15 , Maria E C Schelin 15,16 , Christel Hedman 15,16,19 , Gabriel Goldraij 17 , Svandis Iris Halfdanardottir 18 , Valgerdur Sigurdardottir 18 , Tanja Hoppe 20 , Melanie Joshi 20 , Julia Strupp 20 , Raymond Voltz 20, 26–28 , Maria Luisa Martín-Roselló 7,21 , Silvi Montilla 22 , Verónica I Veloso 22 , Vilma Tripodoro 13,22 , Katrin Ruth Sigurdardottir 3,23 , Hugo M van der Kuy 24 , Lia van Zuylen 25 , Berivan Yildiz 1 , Agnes van der Heide 1 , Misa Bakan 2 , Michael Berger 6 , John Ellershaw 5 , Claudia Fischer 6 , Anne Goossensen 8 , Dagny Faksvåg Haugen 3,4 , Rosemary Hughes 5 , Grethe Skorpen Iversen 3 , Hana Kodba-Ceh 2 , Ida J Korfage 1 ,Urska Lunder 2 , Stephen Mason 5 , Tamsin McGlinchey 5 , Beth Morris 5 , Inmaculada Ruiz Torreras 7 , Judit Simon 6 , Ruthmarijke Smeding 5 , Kjersti Solvåg 3 , Eva Vibora Martin 7 .

1 Department of Public Health, Erasmus MC, University Medical Center Rotterdam, Rotterdam, The Netherlands.

2 Research Department, University Clinic of Respiratory and Allergic Diseases Golnik, Golnik, Slovenia.

3 Regional Centre of Excellence for Palliative Care, Western Norway, Haukeland University Hospital, Bergen, Norway.

4 Department of Clinical Medicine K1, University of Bergen, Bergen, Norway.

5 Palliative Care Unit, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK.

6 Department of Health Economics, Center for Public Health, Medical University of Vienna, Wien, Austria.

7 CUDECA Institute for Training and Research in Palliative Care, CUDECA Hospice Foundation, Malaga, Spain.

8 Informal Care and Care Ethics, University of Humanistic Studies, Utrecht, The Netherlands.

9 Arohanui Hospice, Palmerston North, New Zealand.

10 Group C08: Pharma Economy: Clinical and Economic Evaluation of Medication and Palliative Care, Ibima Institute, Malaga, Spain.

11 Department of Palliative Care, St Vincent's Hospital Melbourne, Fitzroy, Victoria, Australia.

12 University Center for Palliative Care, Inselspital University Hospital Bern, University of Bern, Bern, Switzerland.

13 Research Network RED-InPal, Institute Pallium Latinoamérica, Buenos Aires, Argentina.

14 Department of Medical Oncology, Erasmus MC Cancer Institute, Erasmus MC University Medical Center Rotterdam, Rotterdam, The Netherlands.

15 Institute for Palliative Care at Lund University and Region Skåne, Lund University, Lund, Sweden.

16 Division of Oncology and Pathology, Department of Clinical Sciences, Lund University, Lund, Sweden.

17 Internal Medicine/Palliative Care Program, Hospital Privado Universitario de Córdoba, Cordoba, Argentina.

18 Palliative Care Unit, Landspitali—National University Hospital, Reykjavik, Iceland.

19 Research Department, Stiftelsen Stockholms Sjukhem, Stockholm, Sweden.

20 Department of Palliative Medicine, Faculty of Medicine and University Hospital, University of Cologne, Cologne, Germany.

21 Group CA15: Palliative Care, IBIMA Institute, Malaga, Spain.

22 Institute of Medical Research A. Lanari, University of Buenos Aires, Buenos Aires, Argentina.

23 Specialist Palliative Care Team, Department of Anaesthesia and Surgical Services, Haukeland University Hospital, Bergen, Norway.

24 Department of Clinical Pharmacy, Erasmus MC, University Medical Center, Rotterdam, The Netherlands.

25 Department of Medical Oncology, Amsterdam University Medical Center, Amsterdam, The Netherlands.

26 Center for Integrated Oncology Aachen Bonn Cologne Dusseldorf (CIO ABCD), Faculty of Medicine and University Hospital, University of Cologne, Cologne, Germany.

27 Clinical Trials Center (ZKS), Faculty of Medicine and University Hospital, University of Cologne, Cologne, Germany.

28 Center for Health Services Research (ZVFK), Faculty of Medicine and University Hospital, Cologne, Germany.

29 Institute for Social and Preventive Medicine (ISPM), University of Bern, Bern, Switzerland.

This work is supported by the European Union's Horizon 2020 Research and Innovation Programme under Grant agreement no. 825731.

Author information

Authors and affiliations.

Department of Public Health, Erasmus MC, University Medical Center Rotterdam, Rotterdam, The Netherlands

Berivan Yildiz & Agnes van der Heide

Research Department, University Clinic of Respiratory and Allergic Diseases Golnik, Golnik, Slovenia

Regional Centre of Excellence for Palliative Care, Western Norway, Haukeland University Hospital, Bergen, Norway

Grethe Skorpen Iversen & Dagny Faksvåg Haugen

Department of Clinical Medicine K1, University of Bergen, Bergen, Norway

Dagny Faksvåg Haugen

Palliative Care Unit, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK

Tamsin McGlinchey, Ruthmarijke Smeding & John Ellershaw

Department of Health Economics, Center for Public Health, Medical University of Vienna, Vienna, Austria

Claudia Fischer & Judit Simon

CUDECA Institute for Training and Research in Palliative Care, CUDECA Hospice Foundation, Malaga, Spain

Eva Vibora-Martin & Inmaculada Ruiz-Torreras

Informal Care and Care Ethics, University of Humanistic Studies, Utrecht, The Netherlands

Anne Goossensen


Contributions

AG conducted the interviews and the focus group interview; BY took field notes during the focus group interview. BY performed the qualitative analyses and drafted and revised the article. AVDH, AG, GSI, DFH, RS, CF, JS, MB, TMG, JE, EVM, and IRT critically reviewed the manuscript for important intellectual content and provided feedback on versions of the manuscript. iLC contributed to the design of the study. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Berivan Yildiz.

Ethics declarations

Ethics approval and consent to participate.

The study has been conducted in accordance with national and international regulations and guidelines, including the Declaration of Helsinki, and the International Conference on Harmonisation (ICH) guidance on Good Clinical Practice (GCP). The iLIVE study has been approved by ethics committees and the institutional review boards (IRBs) of participating institutes in all countries:

Medical Research Ethics Committees United (MEC-U) (R20.004), The Netherlands.

Regional Committee for Medical and Health Research Ethics South East D (35035), Norway.

Komisija Republike Slovenije za Medicinsko etiko (0120–129/2020/3), Slovenia.

Comité de Ética de la Investigación Provincial de Málaga, Hospital Regional Universitario de Malaga, Spain.

Health Research Authority (HRA) and Health and Care Research Wales (HCRW) (272927), UK.

Written informed consent was obtained from all individual participants in this study.

Consent for publication

Not applicable.

Competing interests

The author(s) declare no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Material 1.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Yildiz, B., van der Heide, A., Bakan, M. et al. Facilitators and barriers of implementing end-of-life care volunteering in a hospital in five European countries: the iLIVE study. BMC Palliat Care 23 , 88 (2024). https://doi.org/10.1186/s12904-024-01423-5


Received : 11 July 2023

Accepted : 26 March 2024

Published : 02 April 2024

DOI : https://doi.org/10.1186/s12904-024-01423-5


Keywords

  • End of life care
  • Volunteering
  • Implementation
  • Hospital volunteer



ORIGINAL RESEARCH article

Baseline socio-economic characterization and resource use of the community in the Mefakiya watershed (provisionally accepted)

  • 1 Integrated Watershed Management, Ethiopian Institute of Agricultural Research (EIAR), Ethiopia
  • 2 Ethiopian Institute of Agricultural Research (EIAR), Ethiopia


Baseline characterization is used during a project to show progress towards its goal and objectives, and after the project to measure the amount of change. The main objective of this study was to investigate the socio-economic characteristics and natural resource use of the community in the Mefakiya learning watershed. Both qualitative and quantitative data were collected. Quantitative data were collected using a structured questionnaire administered in face-to-face interviews with households at the intervention site; sixty representative households were selected randomly and interviewed. Constraints and potentials were identified via focus group discussions. Descriptive statistics were used to analyze the quantitative data. The majority of the sample households (90%) were male-headed. Agriculture (crop and livestock production) is the principal occupation (98.3%) of the sample households in the Mefakiya watershed. Maize, finger millet, and tef are the major crops cultivated in the watershed, produced by 98%, 92%, and 68% of the households, respectively. The study area is characterized by high and interconnected natural resource degradation. An integrated approach is therefore important for the sustainable use of watershed resources and for further development in all aspects of the watershed.

Keywords: Baseline survey, characterization, constraints, Social aspect, Mefakiya watershed

Received: 01 Dec 2023; Accepted: 09 Apr 2024.

Copyright: © 2024 Yimam, Gelagil and Bazie. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Mr. Mekin M. Yimam, Ethiopian Institute of Agricultural Research (EIAR), Integrated Watershed Management, Dessie, Ethiopia



Comparing Interview and Focus Group Data Collected in Person and Online

Gregory Guest, PhD, MA, Emily Namey, MA, Amy O'Regan, MS, Chrissy Godwin, MSPH, and Jamilah Taylor.


Structured Abstract

Background:

Online focus groups (FGs) and individual interviews (IDIs) are increasingly used to collect qualitative data. Online data collection offers benefits (eg, geographic reach), but the literature on whether and how data collection modality affects the data generated is mixed. The limited evidence base suggests that data collected via online modalities may be less rich in terms of word count, but more efficient in terms of surfacing thematic content. There is also limited evidence on the comparative costs of online vs in-person data collection.

Objectives:

The purpose of this study was to compare data generated from FGs and IDIs across 4 data collection modalities: (1) face-to-face; (2) online synchronous video-based; (3) online synchronous text-based; and (4) online asynchronous text–based. We also aimed to compare participant experience and data collection costs across modalities.

Methods:

We used a cross-sectional quasi-experimental design. We systematically assigned participants to 1 of the 4 modalities (according to a rolling sequence) on enrollment and randomly assigned them to either IDIs or FGs. We held constant the interviewer and question guide across 24 FGs (n = 123 participants) and 48 IDIs, conducted between September 2016 and October 2017. Participants also completed a brief survey on their experiences of data collection. A team of 3 analysts performed inductive thematic analysis of the qualitative data, generating and applying emergent theme-based codes. We also used a priori codes to tag sensitive information across modalities. Analysts were not masked to data type, but all transcripts were coded independently by 2 analysts and compared to reach final consensus coding. We operationalized data richness in terms of volume of participant data, measured by word count, and thematic content, measured by the number of thematic codes applied per modality. Using time and expense data from the study, we calculated average cost per data collection activity.

Results:

Visual (face-to-face and online video) modalities generated significantly greater volume of data than did online text-based modalities; however, there were no significant qualitative differences in the thematic content among modalities for either IDIs or FGs. Text-based online FGs were more likely to contain a dissenting opinion ( P = 0.04) than visually based FGs, although this level of significance should be interpreted cautiously due to multiple comparisons. Participant ratings of data collection events were generally in the moderate to high range, with statistically significant differences in participant experience measures by modality for FGs: participants rated online video FGs lower than others on several measures. Without travel, online video data collection had the highest average costs for both IDIs and FGs; however, if estimated travel costs are included, then in-person data collection was more expensive.

Conclusions:

Among our sample, online modalities for conducting qualitative research did not result in substantial or significantly different thematic findings than in-person data collection. We did not find that online modalities encouraged more sharing of personally sensitive information, although we observed more instances of dissenting opinions in online text-based modalities. The homogeneity of the sample—in terms of sex, race, educational level, and computer skills—limits the wider generalizability of the findings. We also did not have a geographically distributed sample, which prevented us from having actual travel expenses for the cost analysis; however, the findings from this study were largely consistent with previous comparative research.

Qualitative research refers to research that generates and/or uses non-numeric data, most typically text, 1 , 2 and is characterized by open-ended questions and inductive probing. Given its inherent ability to allow people to respond to questions in their own words, in an open-ended way, qualitative inquiry excels at capturing individual perspectives in rich detail. Qualitative research is therefore employed across a broad range of health topics and in various settings to answer “how” and “why” type questions that are difficult to assess using closed-ended survey responses.

Two of the most common qualitative research methods are in-depth individual interviews (IDIs) and focus groups (FGs). 3 , 4 IDIs, as the name implies, generate personal narratives from individuals. They are conducted one-on-one, last from 30 minutes to an hour, and are aimed at understanding processes and/or eliciting a participant's experiences, beliefs, or opinions on a specific topic.

Similar to in-depth IDIs, FGs use open-ended questions and an inductive probing pattern, but the group environment offers the opportunity to observe and draw on interpersonal dynamics. Ideally, FGs are constructed and conducted to take advantage of the group dynamic to stimulate discussion and a broad range of ideas. FGs usually range in size from 6 to 12 individuals (who are similar to one another in some way that is related to the research topic) and are conducted by a moderator and an assistant. FGs typically run from 1.5 to 2.5 hours and are well suited for evaluating products and programs and for gathering information about group norms or processes. 5 , 6 One key difference between IDIs and FGs is that response independence cannot be assumed in an FG setting; therefore, the group is typically the unit of analysis when data are collected via FGs.

FGs and IDIs are employed in all fields of research, including patient-centered studies. These qualitative data collection methods are co-evolving with technology 7 and are increasingly conducted online. 8 An often-cited advantage of online data collection is that a researcher can collect data from individuals across multiple locations without needing to travel, 9 , 10 reducing both time and costs. 11 , 12 More specifically, online approaches can extend to stakeholders whose input would be lost if only face-to-face techniques were used, including populations for whom travel might be difficult. Online qualitative data collection modalities have been shown, for example, to work well with patient populations facing unique health challenges such as traumatic brain injury, 13 autism, 14 multiple sclerosis, 15 and children with chronic conditions. 16

Online qualitative data collection modalities can be categorized along 2 dimensions. One dimension refers to the nature of the communication medium: text-based or video-based. In a text-based modality, questions and responses are typed via computer. Video-based modalities use online video (with audio) technology, and questions/responses are spoken. The other dimension pertains to temporality: synchronous and asynchronous. Synchronous methods are conducted in real time through text or video-conference platforms. 17 Conversely, asynchronous methods do not occur in real time; they are typically conducted through venues such as discussion and bulletin boards, email, and listservs, where a question can be posted and respondents answer at their convenience. 18 Synchronous methods tend to be relatively fast-paced with a conversational back-and-forth communication flow, whereas asynchronous methods allow participants more time to consider and respond to questions. The latter are purported to generate richer and deeper data. 17 , 19
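To make this 2 × 2 categorization concrete, the sketch below encodes the two dimensions as a small data model. It is illustrative only; the class and label names are invented here and are not taken from the study.

```python
# Illustrative only: a small data model for the two dimensions described above
# (communication medium and temporality). All names here are invented for the
# example, not taken from the study.
from dataclasses import dataclass
from enum import Enum

class Medium(Enum):
    TEXT = "text-based"    # questions and responses are typed
    VIDEO = "video-based"  # questions and responses are spoken over video/audio

class Temporality(Enum):
    SYNCHRONOUS = "real time"           # live chat or video conference
    ASYNCHRONOUS = "participant-paced"  # discussion boards, email, listservs

@dataclass(frozen=True)
class OnlineModality:
    label: str
    medium: Medium
    temporality: Temporality

# The three online arms used later in the study; face-to-face is the offline
# comparator and is treated analytically as an audiovisual modality.
ONLINE_MODALITIES = [
    OnlineModality("online video", Medium.VIDEO, Temporality.SYNCHRONOUS),
    OnlineModality("online chat", Medium.TEXT, Temporality.SYNCHRONOUS),
    OnlineModality("email / online posts", Medium.TEXT, Temporality.ASYNCHRONOUS),
]
```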

The sampling benefits of online data collection, in terms of geographic and logistical flexibility, have been at least anecdotally documented among several patient populations. The effects of data collection modality on the information generated, however, have not been rigorously investigated. The limited number of comparative studies that exist suggest that online techniques, in general, generate a smaller volume of information—measured in terms of time and quantity of textual data 20 , 21 —than their face-to-face (ie, in-person) incarnations, and that face-to-face techniques generate “richer” data than online techniques. 15 , 16 , 20 A study by Campbell and colleagues further observed that the face-to-face context “caused some participants to hold back from discussing information that they felt was too personal or potentially embarrassing” 10 ; Woodyatt and colleagues also found slightly more discussion of sensitive topics in online versus face-to-face focus groups. 22 This phenomenon—whereby participants are more likely to be open and discuss personal information online—is known as the online disinhibition effect . 23

These few studies are important but lack the necessary rigor to provide a foundational evidence base. None of the studies employed an experimental design. The majority did not control for instrument or interviewer variation, had very small sample sizes, and lacked systematic and transparent analytic procedures. The 2 studies that compared richness of data, for example, did so without operationalizing the term or describing how they identified and compared this construct. It is therefore difficult to assess the validity of the comparisons. Another limitation of previous research is that studies focused on comparing only 2 data collection modalities or used only asynchronous techniques as the online context. Additionally, earlier studies failed to compare relative modality costs, an important research planning component. Finally, few of the referenced studies addressed the larger question of participant satisfaction with data collection modalities, which is critical for developing rapport and participant comfort for valid and successful qualitative inquiry.

Based on the state of the evidence, our study had 3 primary objectives:

  • To systematically compare differences in data generated among 4 data collection modalities: face-to-face, online synchronous video-based, online synchronous text-based, and online asynchronous text-based.
  • To compare patient experiences of the data collection process across the 4 data collection modalities.
  • To compare the average per-event cost of data collection modalities.

This research was funded by PCORI, which requires reporting according to its methodology standards. 24 We consider PCORI Methodology Standards 1 and 3, covering standards for formulating a research question and for data integrity and analysis, respectively, as most directly applicable to this methods-related research. Accordingly, we identified gaps in the literature and then developed a study protocol. Regarding the selection of appropriate interventions and comparators, we describe the different arms of our study (though not health/patient related) in this section, and we also define and operationalize our outcome measures. Data analysis procedures are also outlined in this section.

Study Population

Given our methodological objectives, large sample requirements, and stratified participant assignment to data collection modality, we sought to define a study topic and population that was relatively ubiquitous (not too narrowly circumscribed) and could provide a wide pool of potential participants. We also wanted to generate data that could be useful in informing patient–provider interactions in an area of maternal health; after discussion with local obstetrical colleagues, we focused the topic of our data collection efforts on pregnant women's thoughts about medical risk and Zika. Our study population included women in the Research Triangle area of North Carolina over age 18 who had been pregnant between 2013 and 2016 and who were hoping to become pregnant again in the next 3 years, at the time of enrollment. Additionally, women enrolled had to have internet access and reasonable typing skills and had to agree to random assignment to either an IDI or FG and assignment to either online or in-person data collection. Analytically, we viewed sample homogeneity as a relative strength, in that it was one less potential confounder of our comparisons between modalities. We therefore did not set explicit demographic diversity criteria.

We recruited participants through a combination of local community events, magazine advertisements, radio announcements, flyers posted near establishments that serve pregnant and postpartum women, online classifieds, and online social networking groups. We also asked study participants to refer other women to the study website and recruiter. Once women contacted the study recruiter, they were screened for eligibility and provided informed consent if eligible.

Each study participant was provided an incentive of a webcam (worth about $15) and an Amazon gift card initially worth $40, and later raised to $80. (The monetary incentive was increased approximately halfway through the study, to increase enrollment rates. This increase was made after a round of data collection was completed, to keep any effect the same across arms.) An additional $30 reimbursement was offered to participants who indicated a need for childcare because they had to leave their homes for face-to-face data collection.

Study Design and Study Setting

This was a cross-sectional, quasi-experimental qualitative study with 8 arms, distributed as per Table 1 . Study research questions and data analysis methods are summarized by objective in Table 2 .

Table 1. Eight Study Arms by Data Collection Method and Modality.


Table 2. Study Objectives, Research Questions, and Data Analysis Methods.


Randomly assigning women to modality was logistically challenging given the limited availability of the study population; instead, to limit participant selection bias among arms, we systematically assigned women to a modality and then randomly assigned the data collection method ( Figure 1 ). As women consented to participate and were enrolled in the study, the first 15 women were assigned to the “scheduling pool” for the face-to-face modality. The next 15 women to enroll were assigned to the online video modality, and so on through each of the 4 modalities for the first round of data collection. From each distinct group of 15 women, 2 were randomly selected (according to a computer-generated sequence provided by a study statistician) to take part in an IDI. The other 13 were assigned to the focus group for that arm, an intentional over-recruitment to ensure that 6 to 8 participants from the group could attend on the same date and time. Any women who were assigned to an arm but not scheduled (due to availability conflicts) were rolled over into the next open scheduling list and rerandomized. The process was repeated for all subsequent rounds. Women were masked to their assignment until they were scheduled. We scheduled most events during usual business hours (Monday through Friday between 9 am and 5 pm ), while maintaining some flexibility for synchronous events to occur on evenings or weekends, if necessary, to accommodate participants' schedules.

Figure 1. Systematic Assignment of Participants to Modality and Method.
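As a rough sketch of the rolling assignment just described, the code below assigns one round of enrollees in consecutive blocks of 15 per modality and randomly draws 2 from each block for an IDI, with the remainder forming the focus group scheduling pool. The block size, modality rotation, and 2-of-15 draw come from the text; the function name, participant IDs, and use of Python's random module are assumptions (the study used a computer-generated sequence supplied by a statistician).

```python
import random

# A minimal sketch of the rolling assignment described above. Block size,
# modality order, and the 2-of-15 IDI draw follow the text; the seed, the
# participant IDs, and the function name are illustrative assumptions.
MODALITIES = ["face-to-face", "online video", "online chat", "email/online posts"]
BLOCK_SIZE = 15
IDIS_PER_BLOCK = 2

def assign_round(enrollee_ids, rng=None):
    """Assign one round: consecutive blocks of 15 enrollees per modality,
    then a random 2-of-15 split into IDIs; the rest form the FG scheduling pool."""
    rng = rng or random.Random(2016)
    assignments = []
    for block_index, modality in enumerate(MODALITIES):
        block = enrollee_ids[block_index * BLOCK_SIZE:(block_index + 1) * BLOCK_SIZE]
        idi_ids = set(rng.sample(block, k=min(IDIS_PER_BLOCK, len(block))))
        for pid in block:
            method = "IDI" if pid in idi_ids else "FG pool"
            assignments.append({"id": pid, "modality": modality, "method": method})
    return assignments

# Example: the first 60 enrollees of one round.
round_one = assign_round([f"P{i:03d}" for i in range(1, 61)])
```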

Data Collection Process and Modalities

The study was reviewed and approved by FHI 360's Protection of Human Subjects Committee, and verbal informed consent was obtained from all participants, individually, before initiation of data collection. Data were collected from September 2016 to October 2017.

The data collector (E.N., an experienced qualitative researcher) and the question guide were held constant across all modalities. Procedures, aside from technical connection requirements, were also kept consistent within each modality. Instrument questions were open-ended and asked in the same order, to enhance comparability. 3 , 25 As with any qualitative inquiry, the data collector inductively probed on participant responses to seek follow-up clarifications or expansions on initial answers. 3

Questions explored women's perceptions of what is “safe” and “unsafe” during pregnancy, considerations regarding preventive medical interventions along this continuum (eg, vaccines), and willingness to participate in clinical research while pregnant (see Appendix A ). Before implementation, the data collection instrument was pilot-tested among an in-person focus group of 5 study-eligible, consented participants to enhance clarity and validity of questions. (These participants were excluded from later data collection events.)

At the end of each data collection event, participants completed a brief anonymous questionnaire containing several structured questions with Likert-scale response options on their perceptions of the event. We included questions on rapport, the feeling of a safe/comfortable space to share, and convenience ( Appendix B ). In the FG contexts, the structured questions were completed individually and independently from the group.

With permission from participants, we digitally audio-recorded all face-to-face and video-conferencing data collection activities. These audio recordings of FGs and IDIs were transcribed verbatim using a standardized transcription protocol. 29 , 30 Transcripts for the text-based FGs and IDIs were automatically generated as part of the data collection process.

Modality Descriptions

As described above, we intentionally kept key elements of the data collection process the same to minimize possible confounders. The modalities required some differences in the conduct of data collection, in accordance with best practices for each modality, 8 as described below and summarized in Table 3 . The duration refers to the time we asked participants to set aside for the activity; actual time averages are presented in the Results section.

Table 3. Data Collection Methods by Modality.


Face-to-face and online synchronous modality procedures

The face-to-face modality for FGs and IDIs followed traditional qualitative data collection procedures. 3 , 26 All face-to-face FGs and IDIs were conducted in a conference room at the study office. In-person focus groups also included an assistant who helped with greeting participants and providing refreshments.

All synchronous online activity (video and text) participants used internet-connected computers, at their homes or other convenient locations, to sign in to a private “chat room” at a designated date and time to participate in the FG or IDI. The synchronous video events involved web-connected video through this platform, along with audio over a telephone conference call line. Participants could see the moderator, other participants (for FGs), and themselves. For synchronous text-based activities, the moderator typed questions and follow-ups while participants typed their responses, all in real time. Additional time was allowed for the synchronous text-based IDIs to accommodate the delays generated by typing back and forth. FG sessions were conducted in “group mode,” where participants could see each other's responses as they were entered and respondents could type responses simultaneously. These 3 synchronous modalities are subsequently referred to as face-to-face, online video, and online chat.

Online asynchronous modality procedures

Asynchronous, text-based data collection modalities used an online discussion board platform (FGs) or email (IDIs). For FGs, the moderator posted a series of 3 to 5 questions on the discussion board each day over several days. Participants could sign in at their convenience, complete the day's questions, and read and comment on each other's postings. The moderator reviewed all responses and posted follow-up questions, as appropriate, to which participants could again respond. The same procedure was followed for asynchronous IDIs, except that the medium for correspondence was email—the interviewer emailed the participant a series of 3 to 5 questions to which the participant responded in 24 to 48 hours. The interviewer's next email would contain follow-up questions on responses and the next series of new questions. Participants were prompted to complete unanswered questions before moving on to the next question. We allowed 5 to 10 days for each data collection event to be completed via this modality, depending on the pace of the participants. The asynchronous modality is subsequently referred to as email-based or online posts, for IDIs and FGs, respectively.

Analytical and Statistical Approaches

Coding process.

The qualitative data generated through IDIs and FGs were coded using an analytic strategy that integrated 2 distinct approaches to qualitative data: inductive and a priori thematic analyses. Inductive thematic analysis is a set of iterative techniques designed to identify categories and concepts that emerge from the text during analysis. 27 The analytic process entails reading through verbatim transcripts and identifying possible themes. An inductive approach is exploratory in nature; themes are not predetermined before analysis but rather emerge from the data as analysis progresses. 27 , 28 We developed an inductive codebook using a standardized iterative process. 29 Emergent themes were noted as 2 data analysts (E.N. and C.G.) read through the transcripts. As the data collector was also an analyst, this process commenced after all data had been collected, to avoid influencing data collection activities. All inductive codes were explicitly defined in the codebook using the template outlined by MacQueen et al. 30 Analysts (C.G., A.O., and E.N.) then used NVivo 11 (QSR International) to apply content codes to the text of each transcript. More than 1 content code could be applied to any one segment of text.

In an a priori thematic analysis, analytic themes are established before analysis and instances of those themes are sought out and coded for in the data. 31 , 32 For this study, we created a priori codes for “sensitive/personal” themes, so that we could assess if data collection modalities vary in terms of their capacity to generate these types of themes. We defined “sensitive” disclosures as containing information about one's own experience that is highly personal, taboo, illegal, or socially stigmatized in nature, which we would reasonably expect people to be reluctant to disclose to a stranger(s). We also created a “dissenting opinion” code for FG data, to capture instances when a participant expressed an opinion opposite to an opinion expressed by another participant earlier in the discussion. We included both explicit (eg, “I disagree”) and subtle (eg, stating a different opinion without framing it as a disagreement) statements of disagreement.

Two data analysts independently coded all transcripts. Analysts performed inter-coder agreement checks on each transcript, comparing all coding and discussing any coding discrepancies. A master transcript was created to reflect the agreed-upon coding. The inductive codebook was revised, as necessary, after each successive transcript had been coded, to reflect any changes to code definitions.
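The report describes double coding with discrepancies resolved by consensus rather than a formal reliability statistic. If one wanted to quantify agreement between the two analysts before consensus, a simple per-code check such as percent agreement or Cohen's kappa could be computed as sketched below; this is a common alternative shown for illustration, not the authors' procedure, and the example coding decisions are hypothetical.

```python
# Illustrative only: the report describes comparing coding and resolving
# discrepancies by consensus; it does not state that an agreement statistic
# was computed. The coding decisions below are hypothetical.

def percent_agreement(coder_a, coder_b):
    """coder_a, coder_b: lists of booleans, one per text segment,
    True if that analyst applied the code to that segment."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement for a single binary code."""
    n = len(coder_a)
    p_obs = percent_agreement(coder_a, coder_b)
    p_yes_a = sum(coder_a) / n
    p_yes_b = sum(coder_b) / n
    p_exp = p_yes_a * p_yes_b + (1 - p_yes_a) * (1 - p_yes_b)
    return 1.0 if p_exp == 1 else (p_obs - p_exp) / (1 - p_exp)

# Hypothetical decisions for one code across ten transcript segments:
analyst_1 = [True, True, False, True, False, False, True, True, False, True]
analyst_2 = [True, False, False, True, False, True, True, True, False, True]
print(percent_agreement(analyst_1, analyst_2), round(cohens_kappa(analyst_1, analyst_2), 2))
```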

Data “richness”

To assess our first hypothesis in objective 1, that face-to-face modalities would generate richer data than online modalities, we operationalized data richness in 2 ways. First, we considered data richness in terms of the volume of information offered by the participants, operationalized as the number of words contributed by the participant(s) within each transcript. Second, we considered data richness in terms of the meaning and content of the information contributed by participants, using thematic code frequency as an indicator of thematic content . Our methods for measuring data richness for each approach are summarized in Table 4 .

Table 4. Operationalization of Data Richness as It Relates to Objective 1.

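A minimal sketch of these two richness measures follows, assuming transcripts are available as (speaker, utterance) pairs and coding output as a mapping from transcript ID to the set of thematic codes applied; the function names and example utterances are invented for illustration.

```python
# A minimal sketch of the two richness measures, assuming transcripts are
# (speaker, utterance) pairs and coding output maps transcript IDs to the set
# of thematic codes applied. All names and example text are invented.
MODERATOR = "moderator"

def participant_word_count(transcript):
    """Total words contributed by participants, excluding the moderator."""
    return sum(len(utterance.split())
               for speaker, utterance in transcript
               if speaker.lower() != MODERATOR)

def codes_applied(coding_by_transcript, transcript_ids):
    """Unique thematic codes applied across a set of transcripts (e.g., one modality)."""
    applied = set()
    for tid in transcript_ids:
        applied |= coding_by_transcript.get(tid, set())
    return applied

# Hypothetical example:
transcript = [("moderator", "What did you consider safe during pregnancy?"),
              ("participant 1", "I mostly worried about food and medication."),
              ("participant 2", "For me it was travel and the Zika headlines.")]
print(participant_word_count(transcript))  # counts participant words only
```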

Sensitive/personal themes and dissenting opinions

To assess our second hypothesis in objective 1, that online modalities would generate more sensitive disclosures than would face-to-face modalities, we assessed the data coded as “sensitive” and recoded it to reflect the nature of the unsolicited sensitive/personal themes disclosed. We summed and compared the number of unique transcripts in which sensitive themes appeared and were coded.

To assess whether participants may be more willing to offer a dissenting opinion in an online vs face-to-face FG, we focused on FG data from a question that asked about the effect of Zika on women's personal views on abortion. The responses to this question were analyzed by (a) whether any member of the group dissented from the stated opinion of others in the group who had already answered the question (dissension); and (b) number/percentage of participants choosing to abstain from answering this question, either by remaining silent or explicitly deferring response (abstention). These data were then examined by modality of focus group.

Participant experiences

We assessed participants' experiences of data collection both quantitatively and qualitatively. Participants' responses to a series of questions with Likert-scale response options were tabulated by modality. Comments provided in an open text box associated with each question were reviewed and summarized to augment interpretation of the quantitative data.

Costs to conduct

We considered many time and cost inputs related to data collection across modalities, as summarized in Table 5 . For staff-related costs, we used illustrative hourly rates; for all other nontravel costs, we used averaged actual costs as documented during the project. Note that we performed recruitment, scheduling, data collection, and data formatting in-house; we contracted online hosting platforms and transcription services. Regarding data preparation for analysis, the live-generated transcripts from the 2 online text-based modalities required some post hoc formatting. Although our project did not include travel for in-person data collection, we included estimated travel expenses based on the literature. 33 For both IDIs and FGs, we assumed 4 hours of round-trip travel time. For the IDIs, we divided travel cost and time by 4 to assess per-event costs, assuming that 4 IDIs could be completed on a single trip.

Table 5. Time and Cost Inputs Relevant to Data Collection for Each Modality.

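To show how such inputs combine into an average per-event cost, the sketch below follows the logic described above (staff time at an hourly rate, averaged other costs, and estimated travel spread over 4 IDIs per trip). Every number in it is a placeholder rather than one of the study's actual rates or expenses.

```python
# A rough per-event cost sketch following the logic described above. Every
# number here is a placeholder, not one of the study's actual rates or costs.
HOURLY_RATE = 50.0  # illustrative staff rate, USD per hour

def per_event_cost(staff_hours, other_costs, travel_cost=0.0,
                   travel_hours=0.0, events_per_trip=1):
    """Average cost of one data collection event.

    staff_hours: recruiting, scheduling, moderating, formatting data, etc.
    other_costs: platform fees, transcription, incentives, refreshments.
    Travel cost and travel time are spread across events_per_trip
    (the report assumes 4 IDIs could be completed on a single trip)."""
    travel_share = (travel_cost + travel_hours * HOURLY_RATE) / events_per_trip
    return staff_hours * HOURLY_RATE + other_costs + travel_share

# Hypothetical comparison of one in-person IDI and one email-based IDI:
in_person_idi = per_event_cost(staff_hours=6, other_costs=120,
                               travel_cost=300, travel_hours=4, events_per_trip=4)
email_idi = per_event_cost(staff_hours=8, other_costs=60)
```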

Statistical analyses

We tested all outcome measures for differences by modality, separately for FGs and IDIs. For some analyses, we also considered audiovisual methods (face-to-face and online video) compared with text-based online methods, where the visual connection of the method might have been more important than whether it was online or offline. For the continuous outcome measures—word count and total thematic codes generated—we used 1-way analysis of variance (ANOVA) and Tukey honest significance tests. Further, in order to control for the number of participants per group in the word count analyses of focus group data, an analysis of covariance (ANCOVA) test was used. The ANCOVA test allowed us to examine a continuous outcome (word count) using both continuous (number of participants) and categorical (modality) covariates in our model. For the dichotomous outcomes—sensitive disclosure and dissenting minority opinion—we used a chi-square test; for the responses on a Likert scale we used a Kruskal-Wallis test.
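The statistical software used is not specified in this excerpt. As one possible implementation, the sketch below runs the same battery of tests (one-way ANOVA with Tukey HSD, ANCOVA adjusting for group size, chi-square, and Kruskal-Wallis) with scipy and statsmodels, assuming a tidy data frame with illustrative column names such as word_count, modality, and n_participants.

```python
# One possible implementation of the analysis plan described above, using
# scipy and statsmodels. Column names (word_count, modality, n_participants,
# Likert items, yes/no outcomes) are illustrative.
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def compare_word_counts(df):
    """One-way ANOVA plus Tukey HSD on participant word count by modality."""
    groups = [g["word_count"].values for _, g in df.groupby("modality")]
    anova = stats.f_oneway(*groups)
    tukey = pairwise_tukeyhsd(df["word_count"], df["modality"])
    return anova, tukey

def ancova_word_counts(fg_df):
    """ANCOVA: word count by modality, adjusting for group size (FG data only)."""
    model = smf.ols("word_count ~ C(modality) + n_participants", data=fg_df).fit()
    return sm.stats.anova_lm(model, typ=2)

def compare_dichotomous(df, outcome):
    """Chi-square test for a yes/no outcome such as a sensitive disclosure."""
    table = pd.crosstab(df["modality"], df[outcome])
    return stats.chi2_contingency(table)

def compare_likert(df, item):
    """Kruskal-Wallis test for ordinal Likert responses by modality."""
    groups = [g[item].values for _, g in df.groupby("modality")]
    return stats.kruskal(*groups)
```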

Participant Characteristics

We enrolled 171 women, who were randomly assigned to either the FG or IDI arm and systematically assigned to data collection modality ( Table 6 ).

Table 6. Number of Participants per Study Arm.


Our study sample was fairly similar across modalities: mostly white well-educated working women, in their early 30s, and within a relatively high-income bracket ( Table 7 ). Of note, we did not find evidence that systematic allocation to modality resulted in statistically different subsamples.

Table 7. Demographic Characteristics of Study Participants by Modality.


Objective 1 Results

Objective 1 sought to systematically compare differences in data generated across the 4 data collection modalities: face-to-face, online video, online chat, and email/online post. The primary hypothesis predicted that data generated from face-to-face modalities would be richer than data generated from online modalities, with a secondary hypothesis stating that online techniques would elicit more sensitive/personal information than would face-to-face techniques.

Data Richness: Participant Information Sharing

Table 8 presents word-based measures of data richness by modality. Comparing IDIs, the online video and face-to-face modalities produced the richest data in terms of how active participants were in contributing to the discussion. The mean number of words contributed by participants per IDI differed significantly between audiovisual and text-based interviews: Face-to-face and online video modalities produced significantly larger numbers of words spoken by participants than did online text-based modalities.

Table 8. ANOVA/ANCOVA Comparisons of Participant Word Count by Modality.


A similar but more nuanced trend held for the FGs. The face-to-face and online video modalities generated the greatest mean numbers of words spoken by participants per FG and were not significantly different from each other, after controlling for the number of participants per group. The mean number of words spoken by participants in the face-to-face modality was significantly larger than the means observed for either of the text-based modalities. There was also a significant difference in mean participant word counts between online video and online chat modalities, but not between the online video and the online message board groups.

Data Richness: Overall Thematic Content and Code Application

Across the aggregate data set, 85 thematic codes were developed to categorize women's opinions and experiences (see Appendix C ). The same 85 themes were present in both the IDI and FG data sets, with only small differences in thematic content, as measured by thematic code application across modalities ( Table 9 ). Within the IDI data set, 79 thematic codes were applied in the online video IDIs, compared with 77 in the face-to-face interviews and 73 in each of the online text-based (chat and email) interviews. Among the FGs, the face-to-face modality had the highest number (80) of unique codes applied, followed by online chat (79), online posts (77), and online video (75). For both FGs and in-depth IDIs, no significant differences emerged in the mean number of codes used for each modality ( P = 0.39 and P = 0.15, respectively).

Table 9. Frequency of Thematic Code Application by Modality.


Variations in frequency of thematic code application were present across modalities (as indicated by shading in Appendix C ), more so than which codes were applied. The clearest differences among theme/code presence across modality came from low-frequency codes (ie, codes that did not appear often in the data set). There were 10 codes in the IDI data set that were used in only 2 modalities, and 3 codes used in only 1. Where these codes were applied, they were used in only 1 to 2 interviews per subsample (representing approximately 8%-17% of the subsample). Similarly, for the FG data set, 2 codes were used in only 2 modalities and 6 codes were used in only 1 modality. Here again, the codes in question were primarily lower-frequency codes, appearing usually in only 1 focus group within the modality (representing approximately 17% of the subsample). However, 1 discernable pattern emerged in the FG data: None of the 8 codes that were used in only 1 or 2 modalities were present in the online posting FG data.

Sensitive Disclosures

We thematically recoded data coded at the a priori “sensitive” code to reflect the nature of the sensitive disclosures ( Appendix B ). Topics of sensitive disclosures included the following:

  • Drinking some amount of alcohol while pregnant
  • Taking medication for anxiety, depression, or other mental health condition
  • Smoking cigarettes while pregnant
  • Being exposed to secondhand marijuana smoke while pregnant
  • Having had a previous abortion

We did not directly solicit information on these topics. Frequencies of all disclosures are described at the individual level for IDIs and at the group level for FGs (because there is no response independence in a group, we count only the first disclosure).
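A minimal sketch of this counting rule follows, using invented data structures: disclosures are tallied per participant for IDIs, while each focus group contributes at most one count per theme.

```python
# A minimal sketch of the counting rule described above, with invented data
# structures: IDIs are tallied per participant, focus groups at most once per theme.
def count_idi_disclosures(idi_records, theme):
    """idi_records: list of dicts like {"participant": str, "themes": set}."""
    return sum(1 for r in idi_records if theme in r["themes"])

def count_fg_disclosures(fg_records, theme):
    """fg_records: list of dicts like {"group": str, "themes_disclosed": set}.
    Responses within a group are not independent, so a theme counts at most
    once per group regardless of how many members disclosed it."""
    return sum(1 for g in fg_records if theme in g["themes_disclosed"])

# Hypothetical example:
idis = [{"participant": "P001", "themes": {"alcohol use"}},
        {"participant": "P002", "themes": set()}]
fgs = [{"group": "FG-chat-1", "themes_disclosed": {"alcohol use", "previous abortion"}}]
print(count_idi_disclosures(idis, "alcohol use"), count_fg_disclosures(fgs, "alcohol use"))
```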

Across the IDI data, 4 of these types of sensitive disclosures were present ( Table 10 ). Personal experience with alcohol use during pregnancy was mentioned by interviewees in all modalities, with slightly more disclosures in the face-to-face and online chat modalities. Sensitive disclosures were made by the greatest number (42%) of participants in the face-to-face IDIs.

Table 10. Frequency of Disclosure of Sensitive Themes by Method and Modality.


The FG data showed less variation across modalities. Disclosures of alcohol use and medication for a mental health condition were present consistently across all modalities. Differences appeared in those disclosures that occurred rarely—exposure to secondhand marijuana smoke, personal tobacco use, and having had an abortion previously—and were spread across the 4 modalities. At least 1 sensitive disclosure was made in 5 of 6 FGs (83%) for each modality. There were no statistically significant differences in overall sensitive disclosures by modality for either FGs or IDIs.

Dissenting Opinions

This analysis included FGs only and looked at 1 question on abortion to see whether women might be more comfortable offering a dissenting opinion in text-based online modalities where others in the group could not see them. In both online text-based modalities (chat and discussion board posts), at least 1 participant expressed a dissenting opinion on abortion in nearly all (5 of 6) groups ( Table 11 ). In contrast, a dissenting opinion was raised in just half of the online video groups and one of the face-to-face groups. The nonvisual, online text-based focus groups were 2.8 (95% CI, 1.2-6.8) times more likely to contain a dissenting opinion ( P = 0.01) than the “visual” face-to-face and online video focus groups.

Table 11. Frequency of Dissenting Opinions Within Focus Groups and Abstentions Among Participants.

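One common way to obtain a "times more likely" estimate for this kind of outcome is a risk ratio from the 2 × 2 table of groups with and without a dissenting opinion, as sketched below using the group tallies given in the text (10 of 12 text-based versus 4 of 12 visual groups). The report does not state which estimator or confidence interval method it used, so this sketch is illustrative and will not necessarily reproduce the published 2.8 (95% CI, 1.2-6.8) figure.

```python
# Sketch only: counts follow the group tallies described in the text, but the
# choice of a risk ratio and its CI method are assumptions, so the output need
# not match the published 2.8 (95% CI, 1.2-6.8) estimate.
import numpy as np
from scipy import stats
from statsmodels.stats.contingency_tables import Table2x2

# Rows: text-based FGs, visual FGs; columns: dissent present, dissent absent.
counts = np.array([[10, 2],
                   [4, 8]])

chi2, p, dof, expected = stats.chi2_contingency(counts)
table = Table2x2(counts)
risk_ratio = table.riskratio                   # "times more likely" on the risk scale
rr_low, rr_high = table.riskratio_confint()    # 95% CI by default
print(round(risk_ratio, 2), round(rr_low, 2), round(rr_high, 2), round(p, 3))
```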

We also assessed how many participants in each FG modality abstained from this question. The percentage of abstaining participants was highest (21%) in the online discussion board posts, followed by the online video FGs (18%) and online chat FGs (12%). No one abstained from offering an opinion in the face-to-face groups. The differences in the rates of participants abstaining from the question on abortion were not statistically significant.

Objective 2 Results

Participant perceptions of modality.

Using a brief exit survey, we collected women's perceptions of the data collection modality in which they had participated. No significant differences were identified in participant perceptions of rapport, safe space, comfort, or convenience among the IDI sample, while varying levels of statistically significant differences were observed in participant perceptions of the same characteristics of FGs ( Table 12 ). Across nearly all domains, women who participated in the online video FGs reported relatively lower levels of satisfaction.

Table 12. Participant Perceptions of Data Collection Modality.


A majority (73%) of women who participated in a face-to-face interview felt that rapport during the interview was high, with perceptions of a high level of rapport decreasing across the modalities from online video to online chat and email (25%). This differed from the FG context, where both face-to-face and online discussion board post participants reported feeling high levels of rapport, and most of both online video and online chat participants reported moderate rapport. No participants reported feeling no rapport in any of the IDI modalities, although 3 women felt no rapport during an online video FG. One respondent said, “It was hard to build rapport online for me,” while another, who noted moderate rapport, stated that “there was a good bit of rapport, but I would say technical issues (like audio cutting in and out, video freezing) really disrupted it.”

Nearly all participants in IDIs across modalities agreed or strongly agreed that the interview environment felt like a safe space in which to talk and express their feelings. The exception was one woman in an email-based interview; she disagreed, stating, “I wasn't sure who or where these emails were going; I spoke my mind but was hesitant.” In the FGs, the reported perceptions were similar; nearly all women agreed or strongly agreed that the FG environment provided a safe space. Among the online video FG participants, 2 women disagreed. The reason provided by one pointed to the group composition and topic, rather than the modality, per se: “One respondent was strongly against abortion and made me feel really uncomfortable about discussing my own feelings.”

Comfort answering questions and willingness to share

Nearly all IDI participants felt at least moderately comfortable answering questions across modalities; only 1 woman in an email interview (same as above) felt not at all comfortable, citing uncomfortable questions. FG participants also reported high levels of comfort across modalities, with the exception of the online video FGs, where the majority of respondents reported moderate comfort and 2 felt “only slightly comfortable.” Few comments were provided, but one woman's response suggests the discomfort came from the nature of the questions as much as the modality: “It was still challenging to share opposing views, even knowing I would likely not see these women again, and even though everyone acted respectful [sic] during the video chat.”

Relatedly, women shared their perceptions on how the modality of data collection affected their willingness to share. Women who participated in online text-based IDIs were generally split between finding that the modality made them more willing or had no effect on their willingness to share their experiences. Most women in face-to-face IDIs felt the modality made them more willing to share, while most online video IDI participants felt the medium had no effect on their willingness to share.

Within FGs, most participants in the online chat (84%) and discussion board post (76%) text-based modalities reported that the mode of communication made them more willing to share. Women in face-to-face and online video FGs were more evenly split between reporting more willingness and no effect on sharing. However, 23% of online video FG participants (and 7% of discussion board post participants) also thought the modality made them less willing to share, as summarized by one woman: “I did the online [video] focus group. If it had been just a telephone focus group, I would have been more open to sharing, but seeing the other participants made me more nervous to be open.”

Convenience

Most women in all modalities reported that IDI participation was moderately or very convenient. One woman in the face-to-face sample and 2 in the online video sample felt the interview was less convenient; the only comment provided stated the participant, as a full-time working mother of a toddler, felt she had no extra time for anything. The online text-based FG participants generally reported those modalities as very or moderately convenient, while a greater proportion of face-to-face and online video FG participants found the modalities slightly to moderately convenient.

Objective 3 Results

Cost comparison.

We considered many time and cost inputs related to data collection across modalities, as summarized in Table 13 . For staff-related costs, we used illustrative hourly rates; for all others, we used actual costs. Regarding the data processing required to prepare data for analysis, the face-to-face and online video modalities required transcription of audio recordings, while the 2 online text-based modalities had live-generated transcripts that required post hoc formatting.

Table 13. Time and Group-size Inputs for Cost Calculations.


Based on a combination of average and actual times and expenses, we calculated the total cost of data collection for each modality by method ( Table 14 ). Among the IDIs, email-based interviews had the lowest average cost per interview, while the online video IDIs had the highest, with a difference of $182. For the FGs, the online video modality again had the highest average cost per data collection event and was $732 more costly per FG than the face-to-face focus groups (even when the number of participants in each modality is standardized).

Table 14. Data Collection Costs by Modality and Method.
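To make the cost-tallying logic concrete, the sketch below shows (in Python) how a per-event cost might be assembled from staff time, direct expenses, and optional travel, mirroring the structure of the inputs in Tables 13 and 14. All hours, hourly rates, and expense figures in the sketch are hypothetical placeholders rather than the study's actual values.

```python
# A minimal sketch of a per-event cost tally for comparing modalities.
# All hours, hourly rates, and expense figures are hypothetical placeholders,
# not the study's actual inputs (see Tables 13 and 14 for those).

def cost_per_event(staff_hours, hourly_rate, other_expenses, travel=0.0):
    """Staff time valued at an illustrative hourly rate, plus direct expenses."""
    return staff_hours * hourly_rate + other_expenses + travel

# Hypothetical inputs per data collection event (one focus group).
modalities = {
    "face-to-face FG": dict(staff_hours=12, hourly_rate=50, other_expenses=150),
    "online video FG": dict(staff_hours=12, hourly_rate=50, other_expenses=300),
    "online chat FG":  dict(staff_hours=10, hourly_rate=50, other_expenses=100),
}

for name, inputs in modalities.items():
    print(f"{name}: ${cost_per_event(**inputs):,.2f} per event")

# Adding an estimated researcher travel cost (hypothetical $500 per in-person
# event) illustrates how travel assumptions can change the ranking.
with_travel = dict(modalities["face-to-face FG"], travel=500)
print(f"face-to-face FG + travel: ${cost_per_event(**with_travel):,.2f} per event")
```

Swapping in a study's own time estimates and expense categories (eg, transcription, refreshments, assistant time, platform fees) yields the kind of modality-by-modality comparison reported in Table 14.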

Discussion

Advances in telecommunications technology and access provide more opportunities to conduct qualitative research. This study addressed questions about how changes in the modality of data collection might affect the data collected, in terms of thematic content and participants' willingness to discuss sensitive/personal experiences or information. We also assessed the comparative cost of different data collection modalities and participants' experience of them.


Effects of Modality on Data, Cost, and Participant Experience

The small amount of existing literature addressing differences in online vs face-to-face qualitative data collection suggested that online techniques, in general, would generate a smaller volume of information (textual data) 19 , 21 than their face-to-face incarnations. This was confirmed by our data for IDIs if we consider both in-person and online video modalities as “face-to-face,” as in both cases there is an audiovisual connection allowing for nonverbal as well as verbal communication and there is no need to type responses. Together, these modalities accounted for the greatest volume of text and the greatest proportion of participant text. The trends, though less clear, held for the FG sample as well, with significant differences in the total amount and proportion of participant text (after controlling for number of participants per group) between the audiovisual and text-based modalities. In both cases, the larger volume of data produced by the audiovisual modalities likely reflects the ease and speed of speech relative to typing. Consider that the average conversation rate for an English speaker is about 150 words per minute, while average typing speed is about 40 words per minute. At those rates, we might expect audiovisual (or simply audio) modalities to generate more than 3 times the amount of text as typing-based modalities. This was also why we extended the time allowed for online chat interviews—it simply took longer to get through the same questions, not because there was more discussion, but because the act of thinking and typing took longer than speaking. However, the significant difference within the FG sample between the synchronous and asynchronous text-based modalities suggests that not only typing ability but also real-time engagement with the moderator and other participants might have affected the volume of participant responses.
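As a rough check on that expectation, the following sketch works through the arithmetic. The 150 and 40 words-per-minute figures are the approximate rates cited above; the 60-minute session length is a hypothetical illustration, not a study parameter.

```python
# Back-of-the-envelope estimate of text volume by production rate.
SPEAKING_WPM = 150   # approximate conversational rate for English speakers
TYPING_WPM = 40      # approximate average typing speed

session_minutes = 60  # hypothetical session length for illustration
spoken_words = SPEAKING_WPM * session_minutes   # 9,000 words
typed_words = TYPING_WPM * session_minutes      # 2,400 words

print(f"Spoken: ~{spoken_words:,} words; typed: ~{typed_words:,} words")
print(f"Expected ratio: {SPEAKING_WPM / TYPING_WPM:.2f}x")  # 3.75x
```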

While our findings related to the volume of participant text are informative, the more salient measure of data “richness” relates to the content of the textual data generated across modalities, particularly since qualitative research aims to discover the meanings in participant responses rather than simply the number of words used to convey those meanings. According to our thematic code operationalization of richness, we found no significant differences in thematic content across modalities for either method (IDI or FG). Had we conducted only face-to-face or only online chat data collection, for example, the resultant thematic reports would have been nearly identical, based on the shared thematic content generated. It may be that the necessity of composing a written/typed response in online text-based modalities forces participants to condense and organize their thoughts before responding, or to be less “effusive,” resulting in reduced data volume but similar data content. Our operationalization of richness, however, does not consider the depth of discussion of particular themes. One might subjectively assess a narrative with fewer themes but greater depth as “richer” than one that superficially covers a larger number of themes. Although we did not perform subjective assessments of richness, earlier studies that assessed richness more qualitatively 10 , 15 , 16 , 20 , 34 , 35 indicate that in-person (or online video) FGs are “richer” than text-based ones, because of the perception of more context/illustration. However, nearly all those studies also find that in-person FGs include more off-topic commentary. 10 , 15 , 16 , 20
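For readers interested in how such a thematic comparison can be operationalized, the sketch below computes the overlap between two hypothetical sets of codes. The code names are invented for illustration (the study's abbreviated codebook appears in Appendix D); the sketch shows the comparison logic only, not the study's actual analysis.

```python
# Minimal sketch of comparing thematic content across two modalities.
# The code names below are hypothetical examples, not the study's codes
# (the abbreviated RAMP-UP codebook is in Appendix D).

face_to_face_codes = {"cost_of_care", "stigma", "partner_support",
                      "provider_trust", "transportation"}
online_chat_codes = {"cost_of_care", "stigma", "partner_support",
                     "provider_trust", "privacy"}

shared = face_to_face_codes & online_chat_codes   # themes found in both modalities
union = face_to_face_codes | online_chat_codes    # all themes observed

print(f"Shared themes: {sorted(shared)}")
print(f"Proportion of shared thematic content (Jaccard): {len(shared) / len(union):.2f}")
```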

We also found mixed evidence of an online disinhibition effect regarding participants' sharing of sensitive/personal information across modalities. In terms of unsolicited sensitive/personal disclosures, we found no statistically significant differences across modalities for either method. Yet when presented with a sensitive topic (abortion), online text-based FG participants, who could not see each other, offered more dissenting opinions than did participants in the visual modalities (face-to-face and online video). This aligns with data on participant perceptions of the data collection modalities, which showed that the online video FGs in particular were viewed as less comfortable and safe than the other modalities. Greater proportions of women in the online text-based FG modalities, particularly the discussion board option, also reported that the mode of data collection made them more willing to share.

These findings may reflect general use and familiarity with some modes of communication over others. The face-to-face IDIs and FGs both scored consistently high, with participants remarking on the conversational tone and ease of interpersonal exchange, even when discussing a sensitive topic. Given the similarity between in-person face-to-face data collection and online video-based forms, in terms of the spoken conversation, visual connection, and ability to read nonverbal cues, it is perhaps surprising that online videos consistently received the lowest proportion of women reporting strong feelings of rapport, comfort, and safety. As participant feedback indicated, some of this relative discomfort was associated with the topics of discussion, but the technology (freezing video, connection challenges) also interrupted the flow of conversation, and women seemed to “warm up” to each other less when connected via video vs in person. The pre–focus group small talk that happens in a conference room did not typically happen in online video contexts; such small talk may help participants “read” one another, identify similarities and connections, and set a more relaxed tone. Online video FGs offered a chance to virtually meet the other participants (and often to see into their homes), but the technical troubleshooting and connection at the start of each FG meant that the audio wasn't joined until the group was convened and the moderator ready to start. Seeing oneself live on a computer screen among 5 to 6 others was also likely uncomfortable or distracting for some women. Conversely, the relatively anonymous online discussion post modality scored high for the FGs in terms of rapport and comfort, perhaps reflecting women's use of and familiarity with social media and posting platforms in other areas of their lives. We did not consider telephone (audio-only) data collection as part of this study because telephone-based focus groups are uncommon, but the literature suggests that the benefits of geographic reach and privacy, paired with speech, may make them a suitable option in some cases. 36 , 37

In terms of cost to conduct, our findings generally mirror those of Rupert and colleagues, 33 who found that virtual FGs do not appear to cost less or recruit participants faster than in-person groups. Without travel, the online video modality had the highest average cost per data collection event for both IDIs and FGs. For IDIs, the online asynchronous (email) modality cost least on average, while for FGs, the least costly average data collection event was the in-person modality, despite the extra cost categories of refreshments and assistant time. Online methods are often touted as less expensive because they save on the cost of researcher travel to reach a geographically distributed sample, 11 , 12 and our data are limited in that we did not have actual travel expenses. Rather, we used estimated travel expenses to provide illustrative cost implications for data collection. Including travel raised the average per-event cost of in-person data collection considerably, making it the most expensive for both IDIs and FGs. As inputs for each study will differ, we suggest reference to Tables 5 and 14 for cost comparison calculation considerations.

Limitations

As with any research, our findings come with qualifications. First, our sample was relatively homogeneous, limiting the generalizability of our findings. Given our methodological objectives (objectives 1 and 2 in particular) and stratified participant assignment to data collection modality, we viewed sample homogeneity as a relative strength, in that it was 1 less potential confounder of our comparisons between modalities. Also, the homogeneity of the sample population should be positively associated with the thematic saturation rate; groups that are alike on multiple dimensions are more likely to think in similar ways and have similar experiences, allowing for earlier thematic saturation. 38 The more homogeneous sample therefore allows us to be more confident that our sample size was sufficient to reach thematic saturation for our comparison of thematic content. Nonetheless, one of the potential benefits of online research is the ability to include geographically scattered populations, and our sample was geographically circumscribed. Future research with more diverse populations (in terms of socioeconomic status, race, computer literacy, and mobility) would help to broaden the findings.

Relatedly, all study participants were required to have a home computer with internet access and basic typing skills, because of the systematic assignment into face-to-face or online modalities. We recognize that these eligibility criteria may also introduce bias into the study with respect to generalizability, in that women who have home computers and can type are not representative of the general population of women who could benefit from remote data collection. However, given the nature of the study objectives and experimental design, this limitation was unavoidable; interpretations should be made with awareness of it. Regarding the relative costs of data collection, it should be noted that we used estimated travel costs. Additionally, some researchers may rely on free or open platforms for online video- or text-based data collection to defray those costs, but such platforms potentially raise data privacy and confidentiality concerns. As the inputs for each study will differ, we suggest reference to Tables 5 , 13 , and 14 for cost comparison calculation considerations.

Conclusions

Selection of a modality for conducting qualitative research will continue to hinge on several factors, including the type and location of the participant population, the research topic, and the project budget. A major caveat related to the research population is that technology-based online approaches usually require computer and internet access, which may be barriers for populations already experiencing other access or inequity issues. Our findings on the elicitation of sensitive disclosures and expressions of dissenting opinions suggest that there may be an online disinhibition effect for nonvisual online data collection modalities, but that for some topics, the social atmosphere created by an in-person group of similar participants could work equally well. Regarding cost, online and face-to-face modalities incur different types of expenses, and priorities within the time/cost/quality resource allocation triangle will dictate which costs are the most efficient use of resources. Most importantly, however, our methodological findings, based on a rigorous comparative research design, confirm earlier reports 16 , 22 that conducting qualitative research via online modalities does not yield thematic findings substantively or significantly different from those obtained by conducting IDIs and FGs in person. Despite differences in interpersonal dynamics between individual interviews and focus groups, our data suggest that the effect of modality on the data generated is similar across both methods. This opens opportunities for broadening the reach and inclusion of sampling to a wider geographic scope, as researcher and participant(s) need not be in the same place; it also provides opportunities to include populations with mobility or transport issues, who may not be able to travel to a study location. For researchers examining the relative strengths and trade-offs of traditional vs online qualitative data collection, we have contributed many empirical data points for consideration in research design and decision-making.

Acknowledgment

Research reported in this report was [partially] funded through a Patient-Centered Outcomes Research Institute® (PCORI®) Award (#ME-1403-11706). Further information is available at: https://www.pcori.org/research-results/2014/comparing-interview-and-focus-group-data-collected-person-and-online

Appendix A.

Summary of research objectives/questions and analysis methods (PDF, 539K)

Appendix B.

Data collection guide for the RAMP-UP mixed-modality qualitative study (PDF, 341K)

Appendix C.

Post-data collection event participant experience survey (PDF, 668K)

Appendix D.

Abbreviated codebook for the RAMP-UP study (with number of uses per code) (PDF, 689K)

Appendix E.

Sensitive codes used in the RAMP-UP study (PDF, 651K)

Suggested citation:

Guest G, Namey E, O'Regan A, Godwin C, Taylor J. (2020). Comparing Interview and Focus Group Data Collected in Person and Online. Patient-Centered Outcomes Research Institute (PCORI). https://doi.org/10.25302/05.2020.ME.1403117064

The [views, statements, opinions] presented in this report are solely the responsibility of the author(s) and do not necessarily represent the views of the Patient-Centered Outcomes Research Institute® (PCORI®), its Board of Governors or Methodology Committee.

This report is distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs License, which permits noncommercial use and distribution provided the original author(s) and source are credited (see https://creativecommons.org/licenses/by-nc-nd/4.0/).
