Journal of Business Research
Literature review as a research methodology: an overview and guidelines.
Knowledge production within the field of business research is accelerating at a tremendous speed while at the same time remaining fragmented and interdisciplinary. This makes it hard to keep up with the state of the art and to be at the forefront of research, as well as to assess the collective evidence in a particular area of business research. This is why the literature review as a research method is more relevant than ever. Traditional literature reviews often lack thoroughness and rigor and are conducted ad hoc, rather than following a specific methodology. Therefore, questions can be raised about the quality and trustworthiness of these types of reviews. This paper discusses the literature review as a methodology for conducting research and offers an overview of different types of reviews, as well as guidelines for how to conduct and evaluate a literature review paper. It also discusses common pitfalls and how to get literature reviews published.
Hannah Snyder is an assistant professor at the department of marketing, BI - Norwegian School of Business, Oslo, Norway. Her research interest relates to service innovation, customer creativity, deviant customer behavior, and value co-creation, as well as a special interest in literature review methodology. She has published in the Journal of Business Research, European Journal of Marketing, Journal of Service Management, and International Journal of Nursing Studies.
Qualitative or Quantitative?
Researchers using qualitative methods tend to:
- think that the social sciences cannot be well studied with the same methods as the natural or physical sciences
- feel that human behavior is context-specific; therefore, behavior must be studied holistically, in situ, rather than being manipulated
- employ an 'insider's' perspective; research tends to be personal and thereby more subjective.
- do interviews, focus groups, field research, case studies, and conversational or content analysis.
Qualitative Research: an operational description
Purpose : explain; gain insight and understanding of phenomena through intensive collection and study of narrative data
Approach: inductive; value-laden/subjective; holistic, process-oriented
Hypotheses: tentative, evolving; based on the particular study
Lit. Review: limited; may not be exhaustive
Setting: naturalistic, when and as much as possible
Sampling : for the purpose; not necessarily representative; for in-depth understanding
Measurement: narrative; ongoing
Design and Method: flexible, specified only generally; based on non-intervention, minimal disturbance, such as historical, ethnographic, or case studies
Data Collection: document collection, participant observation, informal interviews, field notes
Data Analysis: raw data is words; ongoing; involves synthesis
Data Interpretation: tentative, reviewed on ongoing basis, speculative
- Qualitative research with more structure and less subjectivity
- Increased application of both strategies to the same study ("mixed methods")
- Evidence-based practice emphasized in more fields (nursing, social work, education, and others).
Some Other Guidelines
- How to Design Graphs and Tables (Univ. of Oregon's guide)
- Critical Appraisal Checklist for an Article On Qualitative Research
Researchers using quantitative methods tend to:
- think that both natural and social sciences strive to explain phenomena with confirmable theories derived from testable assumptions
- attempt to reduce social reality to variables, in the same way as with physical reality
- try to tightly control the variable(s) in question to see how the others are influenced.
- do experiments, have control groups, use blind or double-blind studies; use measures or instruments.
Quantitative Research: an operational description
Purpose: explain, predict, or control phenomena through focused collection and analysis of numerical data
Approach: deductive; tries to be value-free/objective; outcome-oriented
Hypotheses : Specific, testable, and stated prior to study
Lit. Review: extensive; may significantly influence a particular study
Setting: controlled to the degree possible
Sampling: uses largest manageable random/randomized sample, to allow generalization of results to larger populations
Measurement: standardized, numerical; "at the end"
Design and Method: Strongly structured, specified in detail in advance; involves intervention, manipulation and control groups; descriptive, correlational, experimental
Data Collection: via instruments, surveys, experiments, semi-structured formal interviews, tests or questionnaires
Data Analysis: raw data is numbers; at end of study, usually statistical
Data Interpretation: formulated at end of study; stated as a degree of certainty
This page on qualitative and quantitative research has been adapted and expanded from a handout by Suzy Westenkirchner. Used with permission. NPG
- Last Updated: Feb 24, 2023 10:26 AM
- URL: https://uark.libguides.com/litreview
NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.
Lau F, Kuziemsky C, editors. Handbook of eHealth Evaluation: An Evidence-based Approach [Internet]. Victoria (BC): University of Victoria; 2017 Feb 27.
Chapter 9. Methods for Literature Reviews
Guy Paré and Spyros Kitsiou.
Literature reviews play a critical role in scholarship because science remains, first and foremost, a cumulative endeavour ( vom Brocke et al., 2009 ). As in any academic discipline, rigorous knowledge syntheses are becoming indispensable in keeping up with an exponentially growing eHealth literature, assisting practitioners, academics, and graduate students in finding, evaluating, and synthesizing the contents of many empirical and conceptual papers. Among other methods, literature reviews are essential for: (a) identifying what has been written on a subject or topic; (b) determining the extent to which a specific research area reveals any interpretable trends or patterns; (c) aggregating empirical findings related to a narrow research question to support evidence-based practice; (d) generating new frameworks and theories; and (e) identifying topics or questions requiring more investigation ( Paré, Trudel, Jaana, & Kitsiou, 2015 ).
Literature reviews can take two major forms. The most prevalent one is the “literature review” or “background” section within a journal paper or a chapter in a graduate thesis. This section synthesizes the extant literature and usually identifies the gaps in knowledge that the empirical study addresses ( Sylvester, Tate, & Johnstone, 2013 ). It may also provide a theoretical foundation for the proposed study, substantiate the presence of the research problem, justify the research as one that contributes something new to the cumulated knowledge, or validate the methods and approaches for the proposed study ( Hart, 1998 ; Levy & Ellis, 2006 ).
The second form of literature review, which is the focus of this chapter, constitutes an original and valuable work of research in and of itself ( Paré et al., 2015 ). Rather than providing a base for a researcher’s own work, it creates a solid starting point for all members of the community interested in a particular area or topic ( Mulrow, 1987 ). The so-called “review article” is a journal-length paper which has an overarching purpose to synthesize the literature in a field, without collecting or analyzing any primary data ( Green, Johnson, & Adams, 2006 ).
When appropriately conducted, review articles represent powerful information sources for practitioners looking for state-of-the-art evidence to guide their decision-making and work practices ( Paré et al., 2015 ). Further, high-quality reviews become frequently cited pieces of work which researchers seek out as a first clear outline of the literature when undertaking empirical studies ( Cooper, 1988 ; Rowe, 2014 ). Scholars who track and gauge the impact of articles have found that review papers are cited and downloaded more often than any other type of published article ( Cronin, Ryan, & Coughlan, 2008 ; Montori, Wilczynski, Morgan, Haynes, & Hedges, 2003 ; Patsopoulos, Analatos, & Ioannidis, 2005 ). The reason for their popularity may be the fact that reading the review enables one to have an overview, if not a detailed knowledge of the area in question, as well as references to the most useful primary sources ( Cronin et al., 2008 ). Although they are not easy to conduct, the commitment to complete a review article provides a tremendous service to one’s academic community ( Paré et al., 2015 ; Petticrew & Roberts, 2006 ). Most, if not all, peer-reviewed journals in the field of medical informatics publish review articles of some type.
The main objectives of this chapter are fourfold: (a) to provide an overview of the major steps and activities involved in conducting a stand-alone literature review; (b) to describe and contrast the different types of review articles that can contribute to the eHealth knowledge base; (c) to illustrate each review type with one or two examples from the eHealth literature; and (d) to provide a series of recommendations for prospective authors of review articles in this domain.
9.2. Overview of the Literature Review Process and Steps
As explained in Templier and Paré (2015) , there are six generic steps involved in conducting a review article:
- formulating the research question(s) and objective(s),
- searching the extant literature,
- screening for inclusion,
- assessing the quality of primary studies,
- extracting data, and
- analyzing data.
Although these steps are presented here in sequential order, one must keep in mind that the review process can be iterative and that many activities can be initiated during the planning stage and later refined during subsequent phases ( Finfgeld-Connett & Johnson, 2013 ; Kitchenham & Charters, 2007 ).
Formulating the research question(s) and objective(s): As a first step, members of the review team must appropriately justify the need for the review itself ( Petticrew & Roberts, 2006 ), identify the review’s main objective(s) ( Okoli & Schabram, 2010 ), and define the concepts or variables at the heart of their synthesis ( Cooper & Hedges, 2009 ; Webster & Watson, 2002 ). Importantly, they also need to articulate the research question(s) they propose to investigate ( Kitchenham & Charters, 2007 ). In this regard, we concur with Jesson, Matheson, and Lacey (2011) that clearly articulated research questions are key ingredients that guide the entire review methodology; they underscore the type of information that is needed, inform the search for and selection of relevant literature, and guide or orient the subsequent analysis.

Searching the extant literature: The next step consists of searching the literature and making decisions about the suitability of material to be considered in the review ( Cooper, 1988 ). There exist three main coverage strategies. First, exhaustive coverage means an effort is made to be as comprehensive as possible in order to ensure that all relevant studies, published and unpublished, are included in the review and, thus, conclusions are based on this all-inclusive knowledge base. The second type of coverage consists of presenting materials that are representative of most other works in a given field or area. Often authors who adopt this strategy will search for relevant articles in a small number of top-tier journals in a field ( Paré et al., 2015 ). In the third strategy, the review team concentrates on prior works that have been central or pivotal to a particular topic. This may include empirical studies or conceptual papers that initiated a line of investigation, changed how problems or questions were framed, introduced new methods or concepts, or engendered important debate ( Cooper, 1988 ).
Screening for inclusion: The following step consists of evaluating the applicability of the material identified in the preceding step ( Levy & Ellis, 2006 ; vom Brocke et al., 2009 ). Once a group of potential studies has been identified, members of the review team must screen them to determine their relevance ( Petticrew & Roberts, 2006 ). A set of predetermined rules provides a basis for including or excluding certain studies. This exercise requires a significant investment on the part of researchers, who must ensure enhanced objectivity and avoid biases or mistakes. As discussed later in this chapter, for certain types of reviews there must be at least two independent reviewers involved in the screening process and a procedure to resolve disagreements must also be in place ( Liberati et al., 2009 ; Shea et al., 2009 ).

Assessing the quality of primary studies: In addition to screening material for inclusion, members of the review team may need to assess the scientific quality of the selected studies, that is, appraise the rigour of the research design and methods. Such formal assessment, which is usually conducted independently by at least two coders, helps members of the review team refine which studies to include in the final sample, determine whether or not the differences in quality may affect their conclusions, or guide how they analyze the data and interpret the findings ( Petticrew & Roberts, 2006 ). Ascribing quality scores to each primary study or considering through domain-based evaluations which study components have or have not been designed and executed appropriately makes it possible to reflect on the extent to which the selected study addresses possible biases and maximizes validity ( Shea et al., 2009 ).

Extracting data: The following step involves gathering or extracting applicable information from each primary study included in the sample and deciding what is relevant to the problem of interest ( Cooper & Hedges, 2009 ).
Indeed, the type of data that should be recorded mainly depends on the initial research questions ( Okoli & Schabram, 2010 ). However, important information may also be gathered about how, when, where and by whom the primary study was conducted, the research design and methods, or qualitative/quantitative results ( Cooper & Hedges, 2009 ).

Analyzing and synthesizing data: As a final step, members of the review team must collate, summarize, aggregate, organize, and compare the evidence extracted from the included studies. The extracted data must be presented in a meaningful way that suggests a new contribution to the extant literature ( Jesson et al., 2011 ). Webster and Watson (2002) warn researchers that literature reviews should be much more than lists of papers and should provide a coherent lens to make sense of extant knowledge on a given topic. There exist several methods and techniques for synthesizing quantitative (e.g., frequency analysis, meta-analysis) and qualitative (e.g., grounded theory, narrative analysis, meta-ethnography) evidence ( Dixon-Woods, Agarwal, Jones, Young, & Sutton, 2005 ; Thomas & Harden, 2008 ).
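The dual, independent screening described above is commonly checked with an inter-rater agreement statistic such as Cohen's kappa before disagreements are resolved. A minimal Python sketch, where the six studies and the two screeners' include/exclude decisions are entirely hypothetical:

```python
def cohens_kappa(screener_a, screener_b):
    """Cohen's kappa: agreement between two screeners, corrected for chance."""
    n = len(screener_a)
    # Observed proportion of identical decisions
    observed = sum(a == b for a, b in zip(screener_a, screener_b)) / n
    # Agreement expected by chance, from each screener's marginal rates
    labels = set(screener_a) | set(screener_b)
    expected = sum(
        (screener_a.count(label) / n) * (screener_b.count(label) / n)
        for label in labels
    )
    return (observed - expected) / (1 - expected)

# Hypothetical decisions for six candidate studies
a = ["include", "exclude", "exclude", "include", "exclude", "exclude"]
b = ["include", "exclude", "include", "include", "exclude", "exclude"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # prints kappa = 0.67
```

A kappa well below 1 would prompt the team to revisit the inclusion criteria and resolve the conflicting screens, as the chapter recommends.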
9.3. Types of Review Articles and Brief Illustrations
eHealth researchers have at their disposal a number of approaches and methods for making sense out of existing literature, all with the purpose of casting current research findings into historical contexts or explaining contradictions that might exist among a set of primary research studies conducted on a particular topic. Our classification scheme is largely inspired by Paré and colleagues’ (2015) typology. Below we present and illustrate those review types that we feel are central to the growth and development of the eHealth domain.
9.3.1. Narrative Reviews
The narrative review is the “traditional” way of reviewing the extant literature and is skewed towards a qualitative interpretation of prior knowledge ( Sylvester et al., 2013 ). Put simply, a narrative review attempts to summarize or synthesize what has been written on a particular topic but does not seek generalization or cumulative knowledge from what is reviewed ( Davies, 2000 ; Green et al., 2006 ). Instead, the review team often undertakes the task of accumulating and synthesizing the literature to demonstrate the value of a particular point of view ( Baumeister & Leary, 1997 ). As such, reviewers may selectively ignore or limit the attention paid to certain studies in order to make a point. In this rather unsystematic approach, the selection of information from primary articles is subjective, lacks explicit criteria for inclusion, and can lead to biased interpretations or inferences ( Green et al., 2006 ). There are several narrative reviews in the eHealth domain, as in all fields, that follow such an unstructured approach ( Silva et al., 2015 ; Paul et al., 2015 ).
Despite these criticisms, this type of review can be very useful in gathering together a volume of literature in a specific subject area and synthesizing it. As mentioned above, its primary purpose is to provide the reader with a comprehensive background for understanding current knowledge and highlighting the significance of new research ( Cronin et al., 2008 ). Faculty like to use narrative reviews in the classroom because they are often more up to date than textbooks, provide a single source for students to reference, and expose students to peer-reviewed literature ( Green et al., 2006 ). For researchers, narrative reviews can inspire research ideas by identifying gaps or inconsistencies in a body of knowledge, thus helping researchers to determine research questions or formulate hypotheses. Importantly, narrative reviews can also be used as educational articles to bring practitioners up to date with certain topics or issues ( Green et al., 2006 ).
Recently, there have been several efforts to introduce more rigour in narrative reviews that will elucidate common pitfalls and bring changes into their publication standards. Information systems researchers, among others, have contributed to advancing knowledge on how to structure a “traditional” review. For instance, Levy and Ellis (2006) proposed a generic framework for conducting such reviews. Their model follows the systematic data processing approach comprised of three steps, namely: (a) literature search and screening; (b) data extraction and analysis; and (c) writing the literature review. They provide detailed and very helpful instructions on how to conduct each step of the review process. As another methodological contribution, vom Brocke et al. (2009) offered a series of guidelines for conducting literature reviews, with a particular focus on how to search and extract the relevant body of knowledge. Last, Bandara, Miskon, and Fielt (2011) proposed a structured, predefined and tool-supported method to identify primary studies within a feasible scope, extract relevant content from identified articles, synthesize and analyze the findings, and effectively write and present the results of the literature review. We highly recommend that prospective authors of narrative reviews consult these useful sources before embarking on their work.
Darlow and Wen (2015) provide a good example of a highly structured narrative review in the eHealth field. These authors synthesized published articles that describe the development process of mobile health ( m-health ) interventions for patients’ cancer care self-management. As in most narrative reviews, the scope of the research questions being investigated is broad: (a) how development of these systems is carried out; (b) which methods are used to investigate these systems; and (c) what conclusions can be drawn as a result of the development of these systems. To provide clear answers to these questions, a literature search was conducted on six electronic databases and Google Scholar. The search was performed using several terms and free-text words, combining them in an appropriate manner. Four inclusion and three exclusion criteria were utilized during the screening process. Both authors independently reviewed each of the identified articles to determine eligibility and extract study information. A flow diagram shows the number of studies identified, screened, and included or excluded at each stage of study selection. In terms of contributions, this review provides a series of practical recommendations for m-health intervention development.
9.3.2. Descriptive or Mapping Reviews
The primary goal of a descriptive review is to determine the extent to which a body of knowledge in a particular research topic reveals any interpretable pattern or trend with respect to pre-existing propositions, theories, methodologies or findings ( King & He, 2005 ; Paré et al., 2015 ). In contrast with narrative reviews, descriptive reviews follow a systematic and transparent procedure, including searching, screening and classifying studies ( Petersen, Vakkalanka, & Kuzniarz, 2015 ). Indeed, structured search methods are used to form a representative sample of a larger group of published works ( Paré et al., 2015 ). Further, authors of descriptive reviews extract from each study certain characteristics of interest, such as publication year, research methods, data collection techniques, and direction or strength of research outcomes (e.g., positive, negative, or non-significant) in the form of frequency analysis to produce quantitative results ( Sylvester et al., 2013 ). In essence, each study included in a descriptive review is treated as the unit of analysis and the published literature as a whole provides a database from which the authors attempt to identify any interpretable trends or draw overall conclusions about the merits of existing conceptualizations, propositions, methods or findings ( Paré et al., 2015 ). In doing so, a descriptive review may claim that its findings represent the state of the art in a particular domain ( King & He, 2005 ).
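The frequency analysis at the heart of a descriptive review can be sketched in a few lines of Python; the coded study characteristics below are purely hypothetical stand-ins for the attributes (publication year, method, outcome direction) that reviewers would actually extract:

```python
from collections import Counter

# Hypothetical records: one dict per study included in the review
studies = [
    {"year": 2014, "method": "survey", "outcome": "positive"},
    {"year": 2015, "method": "RCT", "outcome": "non-significant"},
    {"year": 2015, "method": "survey", "outcome": "positive"},
    {"year": 2016, "method": "case study", "outcome": "negative"},
]

# Tally each coded characteristic across the sample of studies
for field in ("year", "method", "outcome"):
    print(field, dict(Counter(s[field] for s in studies)))
```

Each study is the unit of analysis, and the resulting frequency tables are what a descriptive review reports as publication patterns or trends.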
In the fields of health sciences and medical informatics, reviews that focus on examining the range, nature and evolution of a topic area are described by Anderson, Allen, Peckham, and Goodwin (2008) as mapping reviews . Like descriptive reviews, the research questions are generic and usually relate to publication patterns and trends. There is no preconceived plan to systematically review all of the literature although this can be done. Instead, researchers often present studies that are representative of most works published in a particular area and they consider a specific time frame to be mapped.
An example of this approach in the eHealth domain is offered by DeShazo, Lavallie, and Wolf (2009). The purpose of this descriptive or mapping review was to characterize publication trends in the medical informatics literature over a 20-year period (1987 to 2006). To achieve this ambitious objective, the authors performed a bibliometric analysis of medical informatics citations indexed in MEDLINE using publication trends, journal frequencies, impact factors, Medical Subject Headings (MeSH) term frequencies, and characteristics of citations. Findings revealed that there were over 77,000 medical informatics articles published during the covered period in numerous journals and that the average annual growth rate was 12%. The MeSH term analysis also suggested a strong interdisciplinary trend. Finally, average impact scores increased over time with two notable growth periods. Overall, patterns in research outputs that seem to characterize the historic trends and current components of the field of medical informatics suggest it may be a maturing discipline (DeShazo et al., 2009).
9.3.3. Scoping Reviews
Scoping reviews attempt to provide an initial indication of the potential size and nature of the extant literature on an emergent topic (Arksey & O’Malley, 2005; Daudt, van Mossel, & Scott, 2013 ; Levac, Colquhoun, & O’Brien, 2010). A scoping review may be conducted to examine the extent, range and nature of research activities in a particular area, determine the value of undertaking a full systematic review (discussed next), or identify research gaps in the extant literature ( Paré et al., 2015 ). In line with their main objective, scoping reviews usually conclude with the presentation of a detailed research agenda for future works along with potential implications for both practice and research.
Unlike narrative and descriptive reviews, the whole point of scoping the field is to be as comprehensive as possible, including grey literature (Arksey & O’Malley, 2005). Inclusion and exclusion criteria must be established to help researchers eliminate studies that are not aligned with the research questions. It is also recommended that at least two independent coders review abstracts yielded from the search strategy and then the full articles for study selection ( Daudt et al., 2013 ). The synthesized evidence from content or thematic analysis is relatively easy to present in tabular form (Arksey & O’Malley, 2005; Thomas & Harden, 2008 ).
One of the most highly cited scoping reviews in the eHealth domain was published by Archer, Fevrier-Thomas, Lokker, McKibbon, and Straus (2011). These authors reviewed the existing literature on personal health record (PHR) systems including design, functionality, implementation, applications, outcomes, and benefits. Seven databases were searched from 1985 to March 2010. Several search terms relating to PHRs were used during this process. Two authors independently screened titles and abstracts to determine inclusion status. A second screen of full-text articles, again by two independent members of the research team, ensured that the studies described PHRs. All in all, 130 articles met the criteria and their data were extracted manually into a database. The authors concluded that although there is a large amount of survey, observational, cohort/panel, and anecdotal evidence of PHR benefits and satisfaction for patients, more research is needed to evaluate the results of PHR implementations. Their in-depth analysis of the literature signalled that there is little solid evidence from randomized controlled trials or other studies of the outcomes achieved through the use of PHRs. Hence, they suggested that more research is needed that addresses the current lack of understanding of optimal functionality and usability of these systems, and how they can play a beneficial role in supporting patient self-management ( Archer et al., 2011 ).
9.3.4. Forms of Aggregative Reviews
Healthcare providers, practitioners, and policy-makers are nowadays overwhelmed with large volumes of information, including research-based evidence from numerous clinical trials and evaluation studies, assessing the effectiveness of health information technologies and interventions ( Ammenwerth & de Keizer, 2004 ; DeShazo et al., 2009 ). It is unrealistic to expect that all these disparate actors will have the time, skills, and necessary resources to identify the available evidence in the area of their expertise and consider it when making decisions. Systematic reviews that involve the rigorous application of scientific strategies aimed at limiting subjectivity and bias (i.e., systematic and random errors) can respond to this challenge.
Systematic reviews attempt to aggregate, appraise, and synthesize in a single source all empirical evidence that meet a set of previously specified eligibility criteria in order to answer a clearly formulated and often narrow research question on a particular topic of interest to support evidence-based practice ( Liberati et al., 2009 ). They adhere closely to explicit scientific principles ( Liberati et al., 2009 ) and rigorous methodological guidelines (Higgins & Green, 2008) aimed at reducing random and systematic errors that can lead to deviations from the truth in results or inferences. The use of explicit methods allows systematic reviews to aggregate a large body of research evidence, assess whether effects or relationships are in the same direction and of the same general magnitude, explain possible inconsistencies between study results, and determine the strength of the overall evidence for every outcome of interest based on the quality of included studies and the general consistency among them ( Cook, Mulrow, & Haynes, 1997 ). The main procedures of a systematic review involve:
- Formulating a review question and developing a search strategy based on explicit inclusion criteria for the identification of eligible studies (usually described in the context of a detailed review protocol).
- Searching for eligible studies using multiple databases and information sources, including grey literature sources, without any language restrictions.
- Selecting studies, extracting data, and assessing risk of bias in a duplicate manner using two independent reviewers to avoid random or systematic errors in the process.
- Analyzing data using quantitative or qualitative methods.
- Presenting results in summary of findings tables.
- Interpreting results and drawing conclusions.
Many systematic reviews, but not all, use statistical methods to combine the results of independent studies into a single quantitative estimate or summary effect size. Known as meta-analyses , these reviews use specific data extraction and statistical techniques (e.g., network, frequentist, or Bayesian meta-analyses) to calculate from each study by outcome of interest an effect size along with a confidence interval that reflects the degree of uncertainty behind the point estimate of effect ( Borenstein, Hedges, Higgins, & Rothstein, 2009 ; Deeks, Higgins, & Altman, 2008 ). Subsequently, they use fixed or random-effects analysis models to combine the results of the included studies, assess statistical heterogeneity, and calculate a weighted average of the effect estimates from the different studies, taking into account their sample sizes. The summary effect size is a value that reflects the average magnitude of the intervention effect for a particular outcome of interest or, more generally, the strength of a relationship between two variables across all studies included in the systematic review. By statistically combining data from multiple studies, meta-analyses can create more precise and reliable estimates of intervention effects than those derived from individual studies alone, when these are examined independently as discrete sources of information.
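The inverse-variance weighted average described above can be made concrete with a short sketch. This is a minimal fixed-effect pooling, assuming each study reports an effect size with its standard error; the three studies and their numbers are hypothetical, and a real meta-analysis would typically use a dedicated package and also consider random-effects models and heterogeneity statistics:

```python
import math

def fixed_effect_meta(effects, std_errors, z=1.96):
    """Fixed-effect (inverse-variance) pooling of study effect sizes.

    Each study is weighted by 1/SE^2, so more precise studies
    (smaller standard errors) contribute more to the summary effect.
    """
    weights = [1.0 / se**2 for se in std_errors]
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    ci = (pooled - z * pooled_se, pooled + z * pooled_se)
    return pooled, pooled_se, ci

# Three hypothetical studies: effect sizes with their standard errors
effects = [0.30, 0.10, 0.25]
ses = [0.10, 0.15, 0.20]
pooled, se, (lo, hi) = fixed_effect_meta(effects, ses)
print(f"summary effect = {pooled:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

Because the pooled standard error shrinks as studies are added, the summary estimate is more precise than any single study's, which is exactly the gain in reliability the chapter attributes to meta-analysis.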
The review by Gurol-Urganci, de Jongh, Vodopivec-Jamsek, Atun, and Car (2013) on the effects of mobile phone messaging reminders for attendance at healthcare appointments is an illustrative example of a high-quality systematic review with meta-analysis. Missed appointments are a major cause of inefficiency in healthcare delivery with substantial monetary costs to health systems. These authors sought to assess whether mobile phone-based appointment reminders delivered through Short Message Service (SMS) or Multimedia Messaging Service (MMS) are effective in improving rates of patient attendance and reducing overall costs. To this end, they conducted a comprehensive search on multiple databases using highly sensitive search strategies without language or publication-type restrictions to identify all RCTs that are eligible for inclusion. In order to minimize the risk of omitting eligible studies not captured by the original search, they supplemented all electronic searches with manual screening of trial registers and references contained in the included studies. Study selection, data extraction, and risk of bias assessments were performed independently by two coders using standardized methods to ensure consistency and to eliminate potential errors. Findings from eight RCTs involving 6,615 participants were pooled into meta-analyses to calculate the magnitude of effects that mobile text message reminders have on the rate of attendance at healthcare appointments compared to no reminders and phone call reminders.
Meta-analyses are regarded as powerful tools for deriving meaningful conclusions. However, there are situations in which it is neither reasonable nor appropriate to pool studies together using meta-analytic methods, for instance because of extensive clinical heterogeneity between the included studies or variation in measurement tools, comparisons, or outcomes of interest. In these cases, systematic reviews can use qualitative synthesis methods, such as vote counting, content analysis, classification schemes, and tabulations, as an alternative approach to narratively synthesize the results of the independent studies included in the review. This form of review is known as a qualitative systematic review.
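Of these tactics, vote counting is the simplest: included studies are tallied by the direction of their reported effect rather than pooled statistically. The sketch below is purely illustrative; the study labels and directions are invented, and a real qualitative systematic review would pair such a tally with a narrative account of study quality and context.

```python
from collections import Counter

# Hypothetical study-level findings (illustrative only): each included
# study is classified by the direction of its reported effect.
findings = [
    {"study": "A", "direction": "positive"},
    {"study": "B", "direction": "positive"},
    {"study": "C", "direction": "no effect"},
    {"study": "D", "direction": "negative"},
    {"study": "E", "direction": "positive"},
]

def vote_count(findings):
    """Tally studies by direction of effect, returning counts and shares."""
    tally = Counter(f["direction"] for f in findings)
    total = len(findings)
    return {direction: (n, n / total) for direction, n in tally.items()}

for direction, (n, share) in vote_count(findings).items():
    print(f"{direction}: {n}/{len(findings)} studies ({share:.0%})")
```

The resulting tabulation is a descriptive aid, not an effect estimate: unlike the weighted averages of a meta-analysis, vote counting ignores study size and precision, which is why it is reserved for cases where statistical pooling is inappropriate.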
A rigorous example of one such review in the eHealth domain is presented by Mickan, Atherton, Roberts, Heneghan, and Tilson (2014) on the use of handheld computers by healthcare professionals and their impact on access to information and clinical decision-making. In line with the methodological guidelines for systematic reviews, these authors: (a) developed and registered with PROSPERO (www.crd.york.ac.uk/prospero/) an a priori review protocol; (b) conducted comprehensive searches for eligible studies using multiple databases and other supplementary strategies (e.g., forward searches); and (c) subsequently carried out study selection, data extraction, and risk of bias assessments in duplicate to eliminate potential errors in the review process. Heterogeneity between the included studies in terms of reported outcomes and measures precluded the use of meta-analytic methods. Instead, the authors used narrative analysis and synthesis to describe the effectiveness of handheld computers for accessing information for clinical knowledge, adherence to safety and clinical quality guidelines, and diagnostic decision-making.
In recent years, the number of systematic reviews in the field of health informatics has increased considerably. Systematic reviews with discordant findings can cause great confusion and make it difficult for decision-makers to interpret the review-level evidence (Moher, 2013). Therefore, there is a growing need for appraisal and synthesis of prior systematic reviews to ensure that decision-making is constantly informed by the best available accumulated evidence. Umbrella reviews, also known as overviews of systematic reviews, are tertiary types of evidence synthesis that aim to accomplish this; that is, they aim to compare and contrast findings from multiple systematic reviews and meta-analyses (Becker & Oxman, 2008). Umbrella reviews generally adhere to the same principles and rigorous methodological guidelines used in systematic reviews. However, the unit of analysis in umbrella reviews is the systematic review rather than the primary study (Becker & Oxman, 2008). Unlike systematic reviews that have a narrow focus of inquiry, umbrella reviews focus on broader research topics for which there are several potential interventions (Smith, Devane, Begley, & Clarke, 2011). A recent umbrella review on the effects of home telemonitoring interventions for patients with heart failure critically appraised, compared, and synthesized evidence from 15 systematic reviews to investigate which types of home telemonitoring technologies and forms of interventions are more effective in reducing mortality and hospital admissions (Kitsiou, Paré, & Jaana, 2015).
9.3.5. Realist Reviews
Realist reviews are theory-driven interpretative reviews developed to inform, enhance, or supplement conventional systematic reviews by making sense of heterogeneous evidence about complex interventions applied in diverse contexts in a way that informs policy decision-making (Greenhalgh, Wong, Westhorp, & Pawson, 2011). They originated from criticisms of positivist systematic reviews, in particular of their "simplistic" underlying assumptions (Oates, 2011). As explained above, systematic reviews seek to identify causation. Such logic is appropriate for fields like medicine and education, where findings of randomized controlled trials can be aggregated to see whether a new treatment or intervention does improve outcomes. However, many argue that it is not possible to establish such direct causal links between interventions and outcomes in fields such as social policy, management, and information systems, where for any intervention there is unlikely to be a regular or consistent outcome (Oates, 2011; Pawson, 2006; Rousseau, Manning, & Denyer, 2008).
To circumvent these limitations, Pawson, Greenhalgh, Harvey, and Walshe (2005) have proposed a new approach for synthesizing knowledge that seeks to unpack the mechanism of how "complex interventions" work in particular contexts. The basic research question — what works? — which is usually associated with systematic reviews changes to: what is it about this intervention that works, for whom, in what circumstances, in what respects and why? Realist reviews have no particular preference for either quantitative or qualitative evidence. As a theory-building approach, a realist review usually starts by articulating likely underlying mechanisms and then scrutinizes available evidence to find out whether and where these mechanisms are applicable (Shepperd et al., 2009). Primary studies found in the extant literature are viewed as case studies which can test and modify the initial theories (Rousseau et al., 2008).
The main objective of the realist review conducted by Otte-Trojel, de Bont, Rundall, and van de Klundert (2014) was to examine how patient portals contribute to health service delivery and patient outcomes. The specific goals were to investigate how outcomes are produced and, most importantly, how variations in outcomes can be explained. The research team started with an exploratory review of background documents and research studies to identify ways in which patient portals may contribute to health service delivery and patient outcomes. The authors identified six main ways, which represent "educated guesses" to be tested against the data in the evaluation studies. These studies were identified through a formal and systematic search of four databases covering the period 2003 to 2013. Two members of the research team selected the articles using a pre-established list of inclusion and exclusion criteria and following a two-step procedure. The authors then extracted data from the selected articles and created several tables, one for each outcome category. They organized the information to bring forward the mechanisms by which patient portals contribute to outcomes and the variation in outcomes across different contexts.
9.3.6. Critical Reviews
Lastly, critical reviews aim to provide a critical evaluation and interpretive analysis of existing literature on a particular topic of interest to reveal strengths, weaknesses, contradictions, controversies, inconsistencies, and/or other important issues with respect to theories, hypotheses, research methods or results (Baumeister & Leary, 1997; Kirkevold, 1997). Unlike other review types, critical reviews attempt to take a reflective account of the research that has been done in a particular area of interest, and assess its credibility by using appraisal instruments or critical interpretive methods. In this way, critical reviews attempt to constructively inform other scholars about the weaknesses of prior research and strengthen knowledge development by giving focus and direction to studies for further improvement (Kirkevold, 1997).
Kitsiou, Paré, and Jaana (2013) provide an example of a critical review that assessed the methodological quality of prior systematic reviews of home telemonitoring studies for chronic patients. The authors conducted a comprehensive search of multiple databases to identify eligible reviews and subsequently used a validated instrument to conduct an in-depth quality appraisal. Results indicate that the majority of systematic reviews in this particular area suffer from important methodological flaws and biases that impair their internal validity and limit their usefulness for clinical and decision-making purposes. Accordingly, the authors provide a number of recommendations to strengthen knowledge development towards improving the design and execution of future reviews on home telemonitoring.
Table 9.1 outlines the main types of literature reviews that were described in the previous sub-sections and summarizes the main characteristics that distinguish one review type from another. It also includes key references to methodological guidelines and useful sources that can be used by eHealth scholars and researchers for planning and developing reviews.
Table 9.1. Typology of Literature Reviews (adapted from Paré et al., 2015).
As shown in Table 9.1, each review type addresses different kinds of research questions or objectives, which subsequently define and dictate the methods and approaches that need to be used to achieve the overarching goal(s) of the review. For example, in the case of narrative reviews, there is greater flexibility in searching and synthesizing articles (Green et al., 2006). Researchers are often relatively free to use a diversity of approaches to search, identify, and select relevant scientific articles, describe their operational characteristics, present how the individual studies fit together, and formulate conclusions. On the other hand, systematic reviews are characterized by their high level of systematicity, rigour, and use of explicit methods, based on an "a priori" review plan that aims to minimize bias in the analysis and synthesis process (Higgins & Green, 2008). Some reviews are exploratory in nature (e.g., scoping/mapping reviews), whereas others may be conducted to discover patterns (e.g., descriptive reviews) or involve a synthesis approach that may include the critical analysis of prior research (Paré et al., 2015). Hence, in order to select the most appropriate type of review, it is critical to know, before embarking on a review project, why the research synthesis is being conducted and what types of methods are best aligned with the pursued goals.
9.5. Concluding Remarks
In light of the increased use of evidence-based practice and research generating stronger evidence (Grady et al., 2011; Lyden et al., 2013), review articles have become essential tools for summarizing, synthesizing, integrating or critically appraising prior knowledge in the eHealth field. As mentioned earlier, when rigorously conducted, review articles represent powerful information sources for eHealth scholars and practitioners looking for state-of-the-art evidence. The typology of literature reviews we used herein will allow eHealth researchers, graduate students and practitioners to gain a better understanding of the similarities and differences between review types.
We must stress that this classification scheme does not privilege any specific type of review as being of higher quality than another (Paré et al., 2015). As explained above, each type of review has its own strengths and limitations. Having said that, we realize that the methodological rigour of any review, be it qualitative, quantitative or mixed, is a critical aspect that should be considered seriously by prospective authors. In the present context, the notion of rigour refers to the reliability and validity of the review process described in section 9.2. Reliability relates to the reproducibility of the review process and steps, which is facilitated by comprehensive documentation of the literature search, extraction, coding and analysis performed in the review. Whether or not the search is comprehensive, and whether or not it involves a methodical approach for data extraction and synthesis, it is important that the review documents in an explicit and transparent manner the steps and approach that were used in the process of its development. Validity, in turn, characterizes the degree to which the review process was conducted appropriately. It goes beyond documentation and reflects decisions related to the selection of the sources, the search terms used, the period of time covered, the articles selected in the search, and the application of backward and forward searches (vom Brocke et al., 2009). In short, the rigour of any review article is reflected in the explicitness of its methods (i.e., transparency) and the soundness of the approach used. We refer those interested in the concepts of rigour and quality to the work of Templier and Paré (2015), which offers a detailed set of methodological guidelines for conducting and evaluating various types of review articles.
To conclude, our main objective in this chapter was to demystify the various types of literature reviews that are central to the continuous development of the eHealth field. It is our hope that our descriptive account will serve as a valuable source for those conducting, evaluating or using reviews in this important and growing domain.
- Ammenwerth E., de Keizer N. An inventory of evaluation studies of information technology in health care. Trends in evaluation research, 1982-2002. International Journal of Medical Informatics. 2004;44(1):44–56.
- Anderson S., Allen P., Peckham S., Goodwin N. Asking the right questions: scoping studies in the commissioning of research on the organisation and delivery of health services. Health Research Policy and Systems. 2008;6(7):1–12.
- Archer N., Fevrier-Thomas U., Lokker C., McKibbon K. A., Straus S. E. Personal health records: a scoping review. Journal of American Medical Informatics Association. 2011;18(4):515–522.
- Arksey H., O’Malley L. Scoping studies: towards a methodological framework. International Journal of Social Research Methodology. 2005;8(1):19–32.
- A systematic, tool-supported method for conducting literature reviews in information systems. Paper presented at the 19th European Conference on Information Systems (ECIS 2011); June 9 to 11; Helsinki, Finland. 2011.
- Baumeister R. F., Leary M. R. Writing narrative literature reviews. Review of General Psychology. 1997;1(3):311–320.
- Becker L. A., Oxman A. D. Overviews of reviews. In: Higgins J. P. T., Green S., editors. Cochrane handbook for systematic reviews of interventions. Hoboken, NJ: John Wiley & Sons, Ltd; 2008. pp. 607–631.
- Borenstein M., Hedges L., Higgins J., Rothstein H. Introduction to meta-analysis. Hoboken, NJ: John Wiley & Sons Inc; 2009.
- Cook D. J., Mulrow C. D., Haynes B. Systematic reviews: synthesis of best evidence for clinical decisions. Annals of Internal Medicine. 1997;126(5):376–380.
- Cooper H., Hedges L. V. Research synthesis as a scientific process. In: Cooper H., Hedges L. V., Valentine J. C., editors. The handbook of research synthesis and meta-analysis. 2nd ed. New York: Russell Sage Foundation; 2009. pp. 3–17.
- Cooper H. M. Organizing knowledge syntheses: a taxonomy of literature reviews. Knowledge in Society. 1988;1(1):104–126.
- Cronin P., Ryan F., Coughlan M. Undertaking a literature review: a step-by-step approach. British Journal of Nursing. 2008;17(1):38–43.
- Darlow S., Wen K. Y. Development testing of mobile health interventions for cancer patient self-management: a review. Health Informatics Journal. 2015 (online before print).
- Daudt H. M., van Mossel C., Scott S. J. Enhancing the scoping study methodology: a large, inter-professional team’s experience with Arksey and O’Malley’s framework. BMC Medical Research Methodology. 2013;13:48.
- Davies P. The relevance of systematic reviews to educational policy and practice. Oxford Review of Education. 2000;26(3-4):365–378.
- Deeks J. J., Higgins J. P. T., Altman D. G. Analysing data and undertaking meta-analyses. In: Higgins J. P. T., Green S., editors. Cochrane handbook for systematic reviews of interventions. Hoboken, NJ: John Wiley & Sons, Ltd; 2008. pp. 243–296.
- Deshazo J. P., Lavallie D. L., Wolf F. M. Publication trends in the medical informatics literature: 20 years of “Medical Informatics” in MeSH. BMC Medical Informatics and Decision Making. 2009;9:7.
- Dixon-Woods M., Agarwal S., Jones D., Young B., Sutton A. Synthesising qualitative and quantitative evidence: a review of possible methods. Journal of Health Services Research and Policy. 2005;10(1):45–53.
- Finfgeld-Connett D., Johnson E. D. Literature search strategies for conducting knowledge-building and theory-generating qualitative systematic reviews. Journal of Advanced Nursing. 2013;69(1):194–204.
- Grady B., Myers K. M., Nelson E. L., Belz N., Bennett L., Carnahan L., … Guidelines Working Group. Evidence-based practice for telemental health. Telemedicine Journal and E Health. 2011;17(2):131–148.
- Green B. N., Johnson C. D., Adams A. Writing narrative literature reviews for peer-reviewed journals: secrets of the trade. Journal of Chiropractic Medicine. 2006;5(3):101–117.
- Greenhalgh T., Wong G., Westhorp G., Pawson R. Protocol–realist and meta-narrative evidence synthesis: evolving standards (RAMESES). BMC Medical Research Methodology. 2011;11:115.
- Gurol-Urganci I., de Jongh T., Vodopivec-Jamsek V., Atun R., Car J. Mobile phone messaging reminders for attendance at healthcare appointments. Cochrane Database of Systematic Reviews. 2013;12:CD007458.
- Hart C. Doing a literature review: releasing the social science research imagination. London: SAGE Publications; 1998.
- Higgins J. P. T., Green S., editors. Cochrane handbook for systematic reviews of interventions: Cochrane book series. Hoboken, NJ: Wiley-Blackwell; 2008.
- Jesson J., Matheson L., Lacey F. M. Doing your literature review: traditional and systematic techniques. Los Angeles & London: SAGE Publications; 2011.
- King W. R., He J. Understanding the role and methods of meta-analysis in IS research. Communications of the Association for Information Systems. 2005;16:1.
- Kirkevold M. Integrative nursing research — an important strategy to further the development of nursing science and nursing practice. Journal of Advanced Nursing. 1997;25(5):977–984.
- Kitchenham B., Charters S. Guidelines for performing systematic literature reviews in software engineering. EBSE Technical Report, Version 2.3. Keele & Durham, UK: Keele University & University of Durham; 2007.
- Kitsiou S., Paré G., Jaana M. Systematic reviews and meta-analyses of home telemonitoring interventions for patients with chronic diseases: a critical assessment of their methodological quality. Journal of Medical Internet Research. 2013;15(7):e150.
- Kitsiou S., Paré G., Jaana M. Effects of home telemonitoring interventions on patients with chronic heart failure: an overview of systematic reviews. Journal of Medical Internet Research. 2015;17(3):e63.
- Levac D., Colquhoun H., O’Brien K. K. Scoping studies: advancing the methodology. Implementation Science. 2010;5(1):69.
- Levy Y., Ellis T. J. A systems approach to conduct an effective literature review in support of information systems research. Informing Science. 2006;9:181–211.
- Liberati A., Altman D. G., Tetzlaff J., Mulrow C., Gøtzsche P. C., Ioannidis J. P. A., … Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. Annals of Internal Medicine. 2009;151(4):W-65.
- Lyden J. R., Zickmund S. L., Bhargava T. D., Bryce C. L., Conroy M. B., Fischer G. S., … McTigue K. M. Implementing health information technology in a patient-centered manner: patient experiences with an online evidence-based lifestyle intervention. Journal for Healthcare Quality. 2013;35(5):47–57.
- Mickan S., Atherton H., Roberts N. W., Heneghan C., Tilson J. K. Use of handheld computers in clinical practice: a systematic review. BMC Medical Informatics and Decision Making. 2014;14:56.
- Moher D. The problem of duplicate systematic reviews. British Medical Journal. 2013;347(5040).
- Montori V. M., Wilczynski N. L., Morgan D., Haynes R. B., Hedges T. Systematic reviews: a cross-sectional study of location and citation counts. BMC Medicine. 2003;1:2.
- Mulrow C. D. The medical review article: state of the science. Annals of Internal Medicine. 1987;106(3):485–488.
- Oates B. J. Evidence-based information systems: a decade later. In: Proceedings of the European Conference on Information Systems; 2011. Retrieved from http://aisel.aisnet.org/cgi/viewcontent.cgi?article=1221&context=ecis2011
- Okoli C., Schabram K. A guide to conducting a systematic literature review of information systems research. SSRN Electronic Journal. 2010.
- Otte-Trojel T., de Bont A., Rundall T. G., van de Klundert J. How outcomes are achieved through patient portals: a realist review. Journal of American Medical Informatics Association. 2014;21(4):751–757.
- Paré G., Trudel M.-C., Jaana M., Kitsiou S. Synthesizing information systems knowledge: a typology of literature reviews. Information & Management. 2015;52(2):183–199.
- Patsopoulos N. A., Analatos A. A., Ioannidis J. P. A. Relative citation impact of various study designs in the health sciences. Journal of the American Medical Association. 2005;293(19):2362–2366.
- Paul M. M., Greene C. M., Newton-Dame R., Thorpe L. E., Perlman S. E., McVeigh K. H., Gourevitch M. N. The state of population health surveillance using electronic health records: a narrative review. Population Health Management. 2015;18(3):209–216.
- Pawson R. Evidence-based policy: a realist perspective. London: SAGE Publications; 2006.
- Pawson R., Greenhalgh T., Harvey G., Walshe K. Realist review—a new method of systematic review designed for complex policy interventions. Journal of Health Services Research & Policy. 2005;10(Suppl 1):21–34.
- Petersen K., Vakkalanka S., Kuzniarz L. Guidelines for conducting systematic mapping studies in software engineering: an update. Information and Software Technology. 2015;64:1–18.
- Petticrew M., Roberts H. Systematic reviews in the social sciences: a practical guide. Malden, MA: Blackwell Publishing Co; 2006.
- Rousseau D. M., Manning J., Denyer D. Evidence in management and organizational science: assembling the field’s full weight of scientific knowledge through syntheses. The Academy of Management Annals. 2008;2(1):475–515.
- Rowe F. What literature review is not: diversity, boundaries and recommendations. European Journal of Information Systems. 2014;23(3):241–255.
- Shea B. J., Hamel C., Wells G. A., Bouter L. M., Kristjansson E., Grimshaw J., … Boers M. AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. Journal of Clinical Epidemiology. 2009;62(10):1013–1020.
- Shepperd S., Lewin S., Straus S., Clarke M., Eccles M. P., Fitzpatrick R., … Sheikh A. Can we systematically review studies that evaluate complex interventions? PLoS Medicine. 2009;6(8):e1000086.
- Silva B. M., Rodrigues J. J., de la Torre Díez I., López-Coronado M., Saleem K. Mobile-health: a review of current state in 2015. Journal of Biomedical Informatics. 2015;56:265–272.
- Smith V., Devane D., Begley C., Clarke M. Methodology in conducting a systematic review of systematic reviews of healthcare interventions. BMC Medical Research Methodology. 2011;11(1):15.
- Sylvester A., Tate M., Johnstone D. Beyond synthesis: re-presenting heterogeneous research literature. Behaviour & Information Technology. 2013;32(12):1199–1215.
- Templier M., Paré G. A framework for guiding and evaluating literature reviews. Communications of the Association for Information Systems. 2015;37(6):112–137.
- Thomas J., Harden A. Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Medical Research Methodology. 2008;8(1):45.
- vom Brocke J., Simons A., Niehaves B., Riemer K., Plattfaut R., Cleven A. Reconstructing the giant: on the importance of rigour in documenting the literature search process. Paper presented at the 17th European Conference on Information Systems (ECIS 2009); Verona, Italy. 2009.
- Webster J., Watson R. T. Analyzing the past to prepare for the future: writing a literature review. Management Information Systems Quarterly. 2002;26(2):xiii–xxiii.
- Whitlock E. P., Lin J. S., Chou R., Shekelle P., Robinson K. A. Using existing systematic reviews in complex systematic reviews. Annals of Internal Medicine. 2008;148(10):776–782.
This publication is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0): see https://creativecommons.org/licenses/by-nc/4.0/
Suggested citation: Paré G., Kitsiou S. Chapter 9: Methods for Literature Reviews. In: Lau F., Kuziemsky C., editors. Handbook of eHealth Evaluation: An Evidence-based Approach. Victoria (BC): University of Victoria; 2017.
Organizing Your Social Sciences Research Paper
5. The Literature Review
A literature review surveys books, scholarly articles, and any other sources relevant to a particular issue, area of research, or theory, and by so doing, provides a description, summary, and critical evaluation of these works in relation to the research problem being investigated. Literature reviews are designed to provide an overview of sources you have explored while researching a particular topic and to demonstrate to your readers how your research fits within a larger field of study.
Fink, Arlene. Conducting Research Literature Reviews: From the Internet to Paper. Fourth edition. Thousand Oaks, CA: SAGE, 2014.
Importance of a Good Literature Review
A literature review may consist of simply a summary of key sources, but in the social sciences, a literature review usually has an organizational pattern and combines both summary and synthesis, often within specific conceptual categories. A summary is a recap of the important information of the source, but a synthesis is a re-organization, or a reshuffling, of that information in a way that informs how you are planning to investigate a research problem. The analytical features of a literature review might:
- Give a new interpretation of old material or combine new with old interpretations,
- Trace the intellectual progression of the field, including major debates,
- Depending on the situation, evaluate the sources and advise the reader on the most pertinent or relevant research, or
- Usually in the conclusion of a literature review, identify where gaps exist in how a problem has been researched to date.
Given this, the purpose of a literature review is to:
- Place each work in the context of its contribution to understanding the research problem being studied.
- Describe the relationship of each work to the others under consideration.
- Identify new ways to interpret prior research.
- Reveal any gaps that exist in the literature.
- Resolve conflicts amongst seemingly contradictory previous studies.
- Identify areas of prior scholarship to prevent duplication of effort.
- Point the way in fulfilling a need for additional research.
- Locate your own research within the context of existing literature [very important].
Fink, Arlene. Conducting Research Literature Reviews: From the Internet to Paper. 2nd ed. Thousand Oaks, CA: Sage, 2005; Hart, Chris. Doing a Literature Review: Releasing the Social Science Research Imagination. Thousand Oaks, CA: Sage Publications, 1998; Jesson, Jill. Doing Your Literature Review: Traditional and Systematic Techniques. Los Angeles, CA: SAGE, 2011; Knopf, Jeffrey W. "Doing a Literature Review." PS: Political Science and Politics 39 (January 2006): 127-132; Ridley, Diana. The Literature Review: A Step-by-Step Guide for Students. 2nd ed. Los Angeles, CA: SAGE, 2012.
Types of Literature Reviews
It is important to think of knowledge in a given field as consisting of three layers. First, there are the primary studies that researchers conduct and publish. Second are the reviews of those studies that summarize and offer new interpretations built from and often extending beyond the primary studies. Third, there are the perceptions, conclusions, opinions, and interpretations that are shared informally among scholars and that become part of the body of epistemological traditions within the field.
In composing a literature review, it is important to note that it is often this third layer of knowledge that is cited as "true" even though it often has only a loose relationship to the primary studies and secondary literature reviews. Given this, while literature reviews are designed to provide an overview and synthesis of pertinent sources you have explored, there are a number of approaches you could adopt depending upon the type of analysis underpinning your study.
Argumentative Review This form examines literature selectively in order to support or refute an argument, deeply embedded assumption, or philosophical problem already established in the literature. The purpose is to develop a body of literature that establishes a contrarian viewpoint. Given the value-laden nature of some social science research [e.g., educational reform; immigration control], argumentative approaches to analyzing the literature can be a legitimate and important form of discourse. However, note that they can also introduce problems of bias when they are used to make summary claims of the sort found in systematic reviews [see below].
Integrative Review Considered a form of research that reviews, critiques, and synthesizes representative literature on a topic in an integrated way such that new frameworks and perspectives on the topic are generated. The body of literature includes all studies that address related or identical hypotheses or research problems. A well-done integrative review meets the same standards as primary research in regard to clarity, rigor, and replication. This is the most common form of review in the social sciences.
Historical Review Few things rest in isolation from historical precedent. Historical literature reviews focus on examining research throughout a period of time, often starting with the first time an issue, concept, theory, or phenomenon emerged in the literature, then tracing its evolution within the scholarship of a discipline. The purpose is to place research in a historical context to show familiarity with state-of-the-art developments and to identify the likely directions for future research.
Methodological Review A review does not always focus on what someone said [findings], but on how they came to say it [method of analysis]. Reviewing methods of analysis provides a framework of understanding at different levels [i.e., those of theory, substantive fields, research approaches, and data collection and analysis techniques], showing how researchers draw upon a wide variety of knowledge ranging from the conceptual level to practical documents for use in fieldwork in the areas of ontological and epistemological consideration, quantitative and qualitative integration, sampling, interviewing, data collection, and data analysis. This approach also helps highlight ethical issues you should be aware of and consider as you go through your own study.
Systematic Review This form consists of an overview of existing evidence pertinent to a clearly formulated research question, which uses pre-specified and standardized methods to identify and critically appraise relevant research, and to collect, report, and analyze data from the studies that are included in the review. The goal is to deliberately document, critically evaluate, and summarize scientifically all of the research about a clearly defined research problem . Typically it focuses on a very specific empirical question, often posed in a cause-and-effect form, such as "To what extent does A contribute to B?" This type of literature review is primarily applied to examining prior research studies in clinical medicine and allied health fields, but it is increasingly being used in the social sciences.
Theoretical Review The purpose of this form is to examine the corpus of theory that has accumulated in regard to an issue, concept, theory, or phenomenon. The theoretical literature review helps to establish what theories already exist, the relationships between them, to what degree the existing theories have been investigated, and to develop new hypotheses to be tested. Often this form is used to help establish a lack of appropriate theories or reveal that current theories are inadequate for explaining new or emerging research problems. The unit of analysis can focus on a theoretical concept or a whole theory or framework.
NOTE : Most often the literature review will incorporate some combination of types. For example, a review that examines literature supporting or refuting an argument, assumption, or philosophical problem related to the research problem will also need to include writing supported by sources that establish the history of these arguments in the literature.
Baumeister, Roy F. and Mark R. Leary. "Writing Narrative Literature Reviews." Review of General Psychology 1 (September 1997): 311-320; Fink, Arlene. Conducting Research Literature Reviews: From the Internet to Paper. 2nd ed. Thousand Oaks, CA: Sage, 2005; Hart, Chris. Doing a Literature Review: Releasing the Social Science Research Imagination. Thousand Oaks, CA: Sage Publications, 1998; Kennedy, Mary M. "Defining a Literature." Educational Researcher 36 (April 2007): 139-147; Petticrew, Mark and Helen Roberts. Systematic Reviews in the Social Sciences: A Practical Guide. Malden, MA: Blackwell Publishers, 2006; Rocco, Tonette S. and Maria S. Plakhotnik. "Literature Reviews, Conceptual Frameworks, and Theoretical Frameworks: Terms, Functions, and Distinctions." Human Resource Development Review 8 (March 2008): 120-130; Torraco, Richard. "Writing Integrative Literature Reviews: Guidelines and Examples." Human Resource Development Review 4 (September 2005): 356-367; Sutton, Anthea. Systematic Approaches to a Successful Literature Review. Los Angeles, CA: Sage Publications, 2016.
Structure and Writing Style
I. Thinking About Your Literature Review
The structure of a literature review should include the following in support of understanding the research problem :
- An overview of the subject, issue, or theory under consideration, along with the objectives of the literature review,
- Division of works under review into themes or categories [e.g. works that support a particular position, those against, and those offering alternative approaches entirely],
- An explanation of how each work is similar to and how it varies from the others,
- Conclusions as to which pieces make the best arguments, are most convincing, and make the greatest contribution to the understanding and development of their area of research.
The critical evaluation of each work should consider :
- Provenance -- what are the author's credentials? Are the author's arguments supported by evidence [e.g. primary historical material, case studies, narratives, statistics, recent scientific findings]?
- Methodology -- were the techniques used to identify, gather, and analyze the data appropriate to addressing the research problem? Was the sample size appropriate? Were the results effectively interpreted and reported?
- Objectivity -- is the author's perspective even-handed or prejudicial? Is contrary data considered or is certain pertinent information ignored to prove the author's point?
- Persuasiveness -- which of the author's theses are most convincing or least convincing?
- Validity -- are the author's arguments and conclusions convincing? Does the work ultimately contribute in any significant way to an understanding of the subject?
II. Development of the Literature Review
Four Basic Stages of Writing
1. Problem formulation -- which topic or field is being examined and what are its component issues?
2. Literature search -- finding materials relevant to the subject being explored.
3. Data evaluation -- determining which literature makes a significant contribution to the understanding of the topic.
4. Analysis and interpretation -- discussing the findings and conclusions of pertinent literature.
Consider the following issues before writing the literature review:

Clarify
If your assignment is not specific about what form your literature review should take, seek clarification from your professor by asking these questions:
1. Roughly how many sources would be appropriate to include?
2. What types of sources should I review (books, journal articles, websites; scholarly versus popular sources)?
3. Should I summarize, synthesize, or critique sources by discussing a common theme or issue?
4. Should I evaluate the sources in any way beyond how they relate to understanding the research problem?
5. Should I provide subheadings and other background information, such as definitions and/or a history?

Find Models
Use the exercise of reviewing the literature to examine how authors in your discipline or area of interest have composed their literature review sections. Read them to get a sense of the types of themes you might want to look for in your own research or to identify ways to organize your final review. The bibliography or reference section of sources you've already read, such as required readings in the course syllabus, is also an excellent entry point into your own research.

Narrow the Topic
The narrower your topic, the easier it will be to limit the number of sources you need to read in order to obtain a good survey of relevant resources. Your professor will probably not expect you to read everything available about the topic, but you'll make the act of reviewing easier if you first limit the scope of the research problem. A good strategy is to begin by searching the USC Libraries Catalog for recent books about the topic and review the table of contents for chapters that focus on specific issues. You can also review the indexes of books to find references to specific issues that can serve as the focus of your research.
For example, a book surveying the history of the Israeli-Palestinian conflict may include a chapter on the role Egypt has played in mediating the conflict, or you can look in the index for the pages where Egypt is mentioned in the text.

Consider Whether Your Sources are Current
Some disciplines require that you use information that is as current as possible. This is particularly true in medicine and the sciences, where research becomes obsolete very quickly as new discoveries are made. However, when writing a review in the social sciences, a survey of the history of the literature may be required. In other words, a complete understanding of the research problem requires you to deliberately examine how knowledge and perspectives have changed over time. Sort through other current bibliographies or literature reviews in the field to get a sense of what your discipline expects. You can also use this method to explore what is considered by scholars to be a "hot topic" and what is not.
III. Ways to Organize Your Literature Review
Chronology of Events
If your review follows the chronological method, you could write about the materials according to when they were published. This approach should only be followed if a clear path of research building on previous research can be identified and these trends follow a clear chronological order of development. An example would be a literature review that focuses on continuing research about the emergence of German economic power after the fall of the Soviet Union.

By Publication
Order your sources by publication chronology only if the order demonstrates a more important trend. For instance, you could order a review of literature on environmental studies of brownfields chronologically if the progression revealed, for example, a change in the soil collection practices of the researchers who wrote and/or conducted the studies.

Thematic [“conceptual categories”]
Thematic reviews of literature are organized around a topic or issue, rather than the progression of time. However, progression of time may still be an important factor in a thematic review. For example, a review of the Internet’s impact on American presidential politics could focus on the development of online political satire. While the study focuses on one topic, the Internet’s impact on American presidential politics, it will still be organized chronologically, reflecting technological developments in media. The difference between a "chronological" and a "thematic" approach lies in what is emphasized most: here, the role of the Internet in presidential politics. Note, however, that more authentic thematic reviews tend to break away from chronological order. A review organized in this manner would shift between time periods within each section according to the point being made. This is the most common approach in the social and behavioral sciences.

Methodological
A methodological approach focuses on the methods utilized by the researcher.
For the Internet in American presidential politics project, one methodological approach would be to look at cultural differences between the portrayal of American presidents on American, British, and French websites. Or the review might focus on the fundraising impact of the Internet on a particular political party. A methodological scope will influence either the types of documents in the review or the way in which these documents are discussed.
Other Sections of Your Literature Review Once you've decided on the organizational method for your literature review, the sections you need to include in the paper should be easy to figure out because they arise from your organizational strategy. In other words, a chronological review would have subsections for each vital time period; a thematic review would have subtopics based upon factors that relate to the theme or issue. However, sometimes you may need to add additional sections that are necessary for your study, but do not fit in the organizational strategy of the body. What other sections you include in the body is up to you. However, only include what is necessary for the reader to locate your study within the larger scholarship about the research problem.
Here are examples of other sections, usually in the form of a single paragraph, you may need to include depending on the type of review you write:
- Current Situation : Information necessary to understand the current topic or focus of the literature review.
- Sources Used : Describes the methods and resources [e.g., databases] you used to identify the literature you reviewed.
- History : The chronological progression of the field, the literature, or an idea that is necessary to understand the literature review, if the body of the literature review is not already a chronology.
- Selection Methods : Criteria you used to select (and perhaps exclude) sources in your literature review. For instance, you might explain that your review includes only peer-reviewed articles and journals.
- Standards : Description of the way in which you present your information.
- Questions for Further Research : What questions about the field has the review sparked? How will you further your research as a result of the review?
IV. Writing Your Literature Review
Once you've settled on how to organize your literature review, you're ready to write each section. When writing your review, keep in mind these issues.
Use Evidence
A literature review section is, in this sense, just like any other academic research paper. Your interpretation of the available sources must be backed up with evidence [citations] that demonstrates that what you are saying is valid.

Be Selective
Select only the most important points in each source to highlight in the review. The type of information you choose to mention should relate directly to the research problem, whether it is thematic, methodological, or chronological. Related items that provide additional information, but that are not key to understanding the research problem, can be included in a list of further readings.

Use Quotes Sparingly
Some short quotes are appropriate if you want to emphasize a point, or if what an author stated cannot be easily paraphrased. Sometimes you may need to quote certain terminology that was coined by the author, is not common knowledge, or is taken directly from the study. Do not use extensive quotes as a substitute for using your own words in reviewing the literature.

Summarize and Synthesize
Remember to summarize and synthesize your sources within each thematic paragraph as well as throughout the review. Recapitulate important features of a research study, but then synthesize it by rephrasing the study's significance and relating it to your own work and the work of others.

Keep Your Own Voice
While the literature review presents others' ideas, your voice [the writer's] should remain front and center. For example, weave references to other sources into what you are writing, but maintain your own voice by starting and ending the paragraph with your own ideas and wording.

Use Caution When Paraphrasing
When paraphrasing a source, be sure to represent the author's information or opinions accurately and in your own words. Even when paraphrasing an author’s work, you still must provide a citation to that work.
V. Common Mistakes to Avoid
These are the most common mistakes made in reviewing social science research literature.
- Sources in your literature review do not clearly relate to the research problem;
- You do not take sufficient time to define and identify the most relevant sources to use in the literature review related to the research problem;
- You rely exclusively on secondary analytical sources rather than including relevant primary research studies or data;
- You uncritically accept another researcher's findings and interpretations as valid, rather than critically examining all aspects of the research design and analysis;
- You do not describe the search procedures that were used to identify the literature to review;
- You report isolated statistical results rather than synthesizing them using chi-squared or meta-analytic methods; and,
- You only include research that validates your assumptions and do not consider contrary findings and alternative interpretations found in the literature.
Cook, Kathleen E. and Elise Murowchick. “Do Literature Review Skills Transfer from One Course to Another?” Psychology Learning and Teaching 13 (March 2014): 3-11; Fink, Arlene. Conducting Research Literature Reviews: From the Internet to Paper . 2nd ed. Thousand Oaks, CA: Sage, 2005; Hart, Chris. Doing a Literature Review: Releasing the Social Science Research Imagination . Thousand Oaks, CA: Sage Publications, 1998; Jesson, Jill. Doing Your Literature Review: Traditional and Systematic Techniques . London: SAGE, 2011; Literature Review Handout. Online Writing Center. Liberty University; Literature Reviews. The Writing Center. University of North Carolina; Onwuegbuzie, Anthony J. and Rebecca Frels. Seven Steps to a Comprehensive Literature Review: A Multimodal and Cultural Approach . Los Angeles, CA: SAGE, 2016; Ridley, Diana. The Literature Review: A Step-by-Step Guide for Students . 2nd ed. Los Angeles, CA: SAGE, 2012; Randolph, Justus J. “A Guide to Writing the Dissertation Literature Review." Practical Assessment, Research, and Evaluation. vol. 14, June 2009; Sutton, Anthea. Systematic Approaches to a Successful Literature Review . Los Angeles, CA: Sage Publications, 2016; Taylor, Dena. The Literature Review: A Few Tips On Conducting It. University College Writing Centre. University of Toronto; Writing a Literature Review. Academic Skills Centre. University of Canberra.
Break Out of Your Disciplinary Box!
Thinking interdisciplinarily about a research problem can be a rewarding exercise in applying new ideas, theories, or concepts to an old problem. For example, what might cultural anthropologists say about the continuing conflict in the Middle East? In what ways might geographers view the need for better distribution of social service agencies in large cities differently than social workers might? You don’t want to substitute studies conducted in other fields for a thorough review of core research literature in your own discipline. However, particularly in the social sciences, thinking about research problems from multiple vectors is a key strategy for finding new solutions to a problem or gaining a new perspective. Consult with a librarian about identifying research databases in other disciplines; almost every field of study has at least one comprehensive database devoted to indexing its research literature.
Frodeman, Robert. The Oxford Handbook of Interdisciplinarity . New York: Oxford University Press, 2010.
Another Writing Tip
Don't Just Review for Content!
While conducting a review of the literature, maximize the time you devote to writing this part of your paper by thinking broadly about what you should be looking for and evaluating. Review not just what scholars are saying, but how they are saying it. Some questions to ask:
- How are they organizing their ideas?
- What methods have they used to study the problem?
- What theories have been used to explain, predict, or understand their research problem?
- What sources have they cited to support their conclusions?
- How have they used non-textual elements [e.g., charts, graphs, figures, etc.] to illustrate key points?
When you begin to write your literature review section, you'll be glad you dug deeper into how the research was designed and constructed because it establishes a means for developing more substantial analysis and interpretation of the research problem.
Hart, Chris. Doing a Literature Review: Releasing the Social Science Research Imagination. Thousand Oaks, CA: Sage Publications, 1998.
Yet Another Writing Tip
When Do I Know I Can Stop Looking and Move On?
Here are several strategies you can utilize to assess whether you've thoroughly reviewed the literature:
- Look for repeating patterns in the research findings . If the same thing is being said, just by different people, then this likely demonstrates that the research problem has hit a conceptual dead end. At this point consider: Does your study extend current research? Does it forge a new path? Or does it merely add more of the same thing being said?
- Look at the sources the authors cite in their work . If you begin to see the same researchers cited again and again, then this is often an indication that no new ideas have been generated to address the research problem.
- Search Google Scholar to identify who has subsequently cited leading scholars already identified in your literature review. This is called citation tracking and there are a number of sources that can help you identify who has cited whom, particularly scholars from outside of your discipline. Here again, if the same authors are being cited again and again, this may indicate no new literature has been written on the topic.
Onwuegbuzie, Anthony J. and Rebecca Frels. Seven Steps to a Comprehensive Literature Review: A Multimodal and Cultural Approach . Los Angeles, CA: Sage, 2016; Sutton, Anthea. Systematic Approaches to a Successful Literature Review . Los Angeles, CA: Sage Publications, 2016.
- Open Access
- Published: 11 October 2016
Reviewing the research methods literature: principles and strategies illustrated by a systematic overview of sampling in qualitative research
- Stephen J. Gentles, Cathy Charles, David B. Nicholas, Jenny Ploeg & K. Ann McKibbon
Systematic Reviews volume 5, Article number: 172 (2016)
Overviews of methods are potentially useful means to increase clarity and enhance collective understanding of specific methods topics that may be characterized by ambiguity, inconsistency, or a lack of comprehensiveness. This type of review represents a distinct literature synthesis method, although to date, its methodology remains relatively undeveloped despite several aspects that demand unique review procedures. The purpose of this paper is to initiate discussion about what a rigorous systematic approach to reviews of methods, referred to here as systematic methods overviews, might look like by providing tentative suggestions for approaching specific challenges likely to be encountered. The guidance offered here was derived from experience conducting a systematic methods overview on the topic of sampling in qualitative research.
The guidance is organized into several principles that highlight specific objectives for this type of review given the common challenges that must be overcome to achieve them. Optional strategies for achieving each principle are also proposed, along with discussion of how they were successfully implemented in the overview on sampling. We describe seven paired principles and strategies that address the following aspects: delimiting the initial set of publications to consider, searching beyond standard bibliographic databases, searching without the availability of relevant metadata, selecting publications on purposeful conceptual grounds, defining concepts and other information to abstract iteratively, accounting for inconsistent terminology used to describe specific methods topics, and generating rigorous verifiable analytic interpretations. Since a broad aim in systematic methods overviews is to describe and interpret the relevant literature in qualitative terms, we suggest that iterative decision making at various stages of the review process, and a rigorous qualitative approach to analysis are necessary features of this review type.
We believe that the principles and strategies provided here will be useful to anyone choosing to undertake a systematic methods overview. This paper represents an initial effort to promote high quality critical evaluations of the literature regarding problematic methods topics, which have the potential to promote clearer, shared understandings, and accelerate advances in research methods. Further work is warranted to develop more definitive guidance.
While reviews of methods are not new, they represent a distinct review type whose methodology remains relatively under-addressed in the literature despite the clear implications for unique review procedures. One of few examples to describe it is a chapter containing reflections of two contributing authors in a book of 21 reviews on methodological topics compiled for the British National Health Service, Health Technology Assessment Program [ 1 ]. Notable is their observation of how the differences between the methods reviews and conventional quantitative systematic reviews, specifically attributable to their varying content and purpose, have implications for defining what qualifies as systematic. While the authors describe general aspects of “systematicity” (including rigorous application of a methodical search, abstraction, and analysis), they also describe a high degree of variation within the category of methods reviews itself and so offer little in the way of concrete guidance. In this paper, we present tentative concrete guidance, in the form of a preliminary set of proposed principles and optional strategies, for a rigorous systematic approach to reviewing and evaluating the literature on quantitative or qualitative methods topics. For purposes of this article, we have used the term systematic methods overview to emphasize the notion of a systematic approach to such reviews.
The conventional focus of rigorous literature reviews (i.e., review types for which systematic methods have been codified, including the various approaches to quantitative systematic reviews [ 2 – 4 ], and the numerous forms of qualitative and mixed methods literature synthesis [ 5 – 10 ]) is to synthesize empirical research findings from multiple studies. By contrast, the focus of overviews of methods, including the systematic approach we advocate, is to synthesize guidance on methods topics. The literature consulted for such reviews may include the methods literature, methods-relevant sections of empirical research reports, or both. Thus, this paper adds to previous work published in this journal—namely, recent preliminary guidance for conducting reviews of theory [ 11 ]—that has extended the application of systematic review methods to novel review types that are concerned with subject matter other than empirical research findings.
Published examples of methods overviews illustrate the varying objectives they can have. One objective is to establish methodological standards for appraisal purposes. For example, reviews of existing quality appraisal standards have been used to propose universal standards for appraising the quality of primary qualitative research [ 12 ] or evaluating qualitative research reports [ 13 ]. A second objective is to survey the methods-relevant sections of empirical research reports to establish current methods use and reporting practices, which Moher and colleagues [ 14 ] recommend as a means for establishing the needs to be addressed in reporting guidelines (see, for example [ 15 , 16 ]). A third objective for a methods review is to offer clarity and enhance collective understanding regarding a specific methods topic that may be characterized by ambiguity, inconsistency, or a lack of comprehensiveness within the available methods literature. An example of this is an overview whose objective was to review the inconsistent definitions of intention-to-treat analysis (the methodologically preferred approach to analyzing randomized controlled trial data) that have been offered in the methods literature and propose a solution for improving conceptual clarity [ 17 ]. Such reviews are warranted because students and researchers who must learn or apply research methods typically lack the time to systematically search, retrieve, review, and compare the available literature to develop a thorough and critical sense of the varied approaches regarding certain controversial or ambiguous methods topics.
While systematic methods overviews , as a review type, include both reviews of the methods literature and reviews of methods-relevant sections from empirical study reports, the guidance provided here is primarily applicable to reviews of the methods literature since it was derived from the experience of conducting such a review [ 18 ], described below. To our knowledge, there are no well-developed proposals on how to rigorously conduct such reviews. Such guidance would have the potential to improve the thoroughness and credibility of critical evaluations of the methods literature, which could increase their utility as a tool for generating understandings that advance research methods, both qualitative and quantitative. Our aim in this paper is thus to initiate discussion about what might constitute a rigorous approach to systematic methods overviews. While we hope to promote rigor in the conduct of systematic methods overviews wherever possible, we do not wish to suggest that all methods overviews need be conducted to the same standard. Rather, we believe that the level of rigor may need to be tailored pragmatically to the specific review objectives, which may not always justify the resource requirements of an intensive review process.
The example systematic methods overview on sampling in qualitative research
The principles and strategies we propose in this paper are derived from experience conducting a systematic methods overview on the topic of sampling in qualitative research [ 18 ]. The main objective of that methods overview was to bring clarity and deeper understanding to the prominent concepts related to sampling in qualitative research (purposeful sampling strategies, saturation, etc.). Specifically, we interpreted the available guidance, commenting on areas lacking clarity, consistency, or comprehensiveness (without proposing any recommendations on how to do sampling). This was achieved by a comparative and critical analysis of publications representing the most influential (i.e., highly cited) guidance across several methodological traditions in qualitative research.
The specific methods and procedures for the overview on sampling [ 18 ] from which our proposals are derived were developed both after soliciting initial input from local experts in qualitative research and an expert health librarian (KAM) and through ongoing careful deliberation throughout the review process. To summarize, in that review, we employed a transparent and rigorous approach to search the methods literature, selected publications for inclusion according to a purposeful and iterative process, abstracted textual data using structured abstraction forms, and analyzed (synthesized) the data using a systematic multi-step approach featuring abstraction of text, summary of information in matrices, and analytic comparisons.
For this article, we reflected on both the problems and challenges encountered at different stages of the review and our means for selecting justifiable procedures to deal with them. Several principles were then derived by considering the generic nature of these problems, while the generalizable aspects of the procedures used to address them formed the basis of optional strategies. Further details of the specific methods and procedures used in the overview on qualitative sampling are provided below to illustrate both the types of objectives and challenges that reviewers will likely need to consider and our approach to implementing each of the principles and strategies.
Organization of the guidance into principles and strategies
For the purposes of this article, principles are general statements outlining what we propose are important aims or considerations within a particular review process, given the unique objectives or challenges to be overcome with this type of review. These statements follow the general format, “considering the objective or challenge of X, we propose Y to be an important aim or consideration.” Strategies are optional, flexible approaches for implementing the principle that precedes them. Thus, generic challenges give rise to principles, which in turn give rise to strategies.
We organize the principles and strategies below into three sections corresponding to processes characteristic of most systematic literature synthesis approaches: literature identification and selection ; data abstraction from the publications selected for inclusion; and analysis , including critical appraisal and synthesis of the abstracted data. Within each section, we also describe the specific methodological decisions and procedures used in the overview on sampling in qualitative research [ 18 ] to illustrate how the principles and strategies for each review process were applied and implemented in a specific case. We expect this guidance and accompanying illustrations will be useful for anyone considering engaging in a methods overview, particularly those who may be familiar with conventional systematic review methods but may not yet appreciate some of the challenges specific to reviewing the methods literature.
Results and discussion
Literature identification and selection
The identification and selection process includes search and retrieval of publications and the development and application of inclusion and exclusion criteria to select the publications that will be abstracted and analyzed in the final review. Literature identification and selection for overviews of the methods literature is challenging and potentially more resource-intensive than for most reviews of empirical research. This is true for several reasons that we describe below, alongside discussion of the potential solutions. Additionally, we suggest in this section how the selection procedures can be chosen to match the specific analytic approach used in methods overviews.
Delimiting a manageable set of publications
One aspect of methods overviews that can make identification and selection challenging is the fact that the universe of literature containing potentially relevant information regarding most methods-related topics is expansive and often unmanageably so. Reviewers are faced with two large categories of literature: the methods literature , where the possible publication types include journal articles, books, and book chapters; and the methods-relevant sections of empirical study reports , where the possible publication types include journal articles, monographs, books, theses, and conference proceedings. In our systematic overview of sampling in qualitative research, exhaustively searching (including retrieval and first-pass screening) all publication types across both categories of literature for information on a single methods-related topic was too burdensome to be feasible. The following proposed principle follows from the need to delimit a manageable set of literature for the review.
Principle #1: Considering the broad universe of potentially relevant literature, we propose that an important objective early in the identification and selection stage is to delimit a manageable set of methods-relevant publications in accordance with the objectives of the methods overview.
Strategy #1: To limit the set of methods-relevant publications that must be managed in the selection process, reviewers have the option to initially review only the methods literature, and exclude the methods-relevant sections of empirical study reports, provided this aligns with the review’s particular objectives.
We propose that reviewers are justified in choosing to select only the methods literature when the objective is to map out the range of recognized concepts relevant to a methods topic, to summarize the most authoritative or influential definitions or meanings for methods-related concepts, or to demonstrate a problematic lack of clarity regarding a widely established methods-related concept and potentially make recommendations for a preferred approach to the methods topic in question. For example, in the case of the methods overview on sampling [ 18 ], the primary aim was to define areas lacking in clarity for multiple widely established sampling-related topics. In the review on intention-to-treat in the context of missing outcome data [ 17 ], the authors identified a lack of clarity based on multiple inconsistent definitions in the literature and went on to recommend separating the issue of how to handle missing outcome data from the issue of whether an intention-to-treat analysis can be claimed.
Strategy #2: In contrast to strategy #1, it may be appropriate to select the methods-relevant sections of empirical study reports when the objective is to illustrate how a methods concept is operationalized in research practice or reported by authors. For example, one could review all the publications in 2 years’ worth of issues of five high-impact field-related journals to answer questions about how researchers describe implementing a particular method or approach, or to quantify how consistently they define or report using it. Such reviews are often used to highlight gaps in the reporting practices regarding specific methods, which may be used to justify items to address in reporting guidelines (for example, [ 14 – 16 ]).
It is worth recognizing that other authors have advocated positions on the scope of literature to be considered in a review that are broader than ours. Suri [ 10 ] (who, like us, emphasizes how different sampling strategies are suitable for different literature synthesis objectives) has, for example, described a two-stage literature sampling procedure (pp. 96–97). First, reviewers use an initial approach to conduct a broad overview of the field—for reviews of methods topics, this would entail an initial review of the research methods literature. This is followed by a second, more focused stage in which practical examples are purposefully selected—for methods reviews, this would involve sampling the empirical literature to illustrate key themes and variations. While this approach is attractive for its capacity to generate more in-depth and interpretive analytic findings, some reviewers may consider the second step too resource-intensive to include, no matter how selective the purposeful sampling. In the overview on sampling, where we stopped after the first stage [ 18 ], we discussed our selective focus on the methods literature as a limitation that left opportunities for further analysis of the literature. We explicitly recommended, for example, that theoretical sampling was a topic for which a future review of the methods sections of empirical reports was justified to answer specific questions identified in the primary review.
Ultimately, reviewers must make pragmatic decisions that balance resource considerations, combined with informed predictions about the depth and complexity of literature available on their topic, with the stated objectives of their review. The remaining principles and strategies apply primarily to overviews that include the methods literature, although some aspects may be relevant to reviews that include empirical study reports.
Searching beyond standard bibliographic databases
An important reality affecting identification and selection in overviews of the methods literature is the increased likelihood for relevant publications to be located in sources other than journal articles (which is usually not the case for overviews of empirical research, where journal articles generally represent the primary publication type). In the overview on sampling [ 18 ], out of 41 full-text publications retrieved and reviewed, only 4 were journal articles, while 37 were books or book chapters. Since many books and book chapters did not exist electronically, their full text had to be physically retrieved in hardcopy, while 11 publications were retrievable only through interlibrary loan or purchase request. The tasks associated with such retrieval are substantially more time-consuming than electronic retrieval. Since a substantial proportion of methods-related guidance may be located in publication types that are less comprehensively indexed in standard bibliographic databases, identification and retrieval thus become complicated processes.
Principle #2: Considering that important sources of methods guidance can be located in non-journal publication types (e.g., books, book chapters) that tend to be poorly indexed in standard bibliographic databases, it is important to consider alternative search methods for identifying relevant publications to be further screened for inclusion.
Strategy #3: To identify books, book chapters, and other non-journal publication types not thoroughly indexed in standard bibliographic databases, reviewers may choose to consult one or more of the following less standard sources: Google Scholar, publisher web sites, or expert opinion.
In the case of the overview on sampling in qualitative research [ 18 ], Google Scholar had two advantages over standard bibliographic databases: it indexes and returns records of books and book chapters likely to contain guidance on qualitative research methods topics; and it has been validated as providing higher citation counts than ISI Web of Science (a producer of numerous bibliographic databases accessible through institutional subscription) for several non-biomedical disciplines, including the social sciences, where qualitative research methods are prominently used [ 19 – 21 ]. While we identified numerous useful publications by consulting experts, the author publication lists generated through Google Scholar searches were uniquely useful for identifying more recent editions of methods books named by those experts.
Searching without relevant metadata
Determining what publications to select for inclusion in the overview on sampling [ 18 ] could only rarely be accomplished by reviewing the publication’s metadata. This was because for the many books and other non-journal type publications we identified as possibly relevant, the potential content of interest would be located in only a subsection of the publication. In this common scenario for reviews of the methods literature (as opposed to methods overviews that include empirical study reports), reviewers will often be unable to employ standard title, abstract, and keyword database searching or screening as a means for selecting publications.
Principle #3: Considering that the presence of information about the topic of interest may not be indicated in the metadata for books and similar publication types, it is important to consider other means of identifying potentially useful publications for further screening.
One approach to identifying potentially useful books and similar publication types is to consider what classes of such publications (e.g., all methods manuals for a certain research approach) are likely to contain relevant content, then identify, retrieve, and review the full text of corresponding publications to determine whether they contain information on the topic of interest.
In the example of the overview on sampling in qualitative research [ 18 ], the topic of interest (sampling) was one of numerous topics covered in the general qualitative research methods manuals. Consequently, examples from this class of publications first had to be identified for retrieval according to non-keyword-dependent criteria. Thus, all methods manuals within the three research traditions reviewed (grounded theory, phenomenology, and case study) that might contain discussion of sampling were sought through Google Scholar and expert opinion, their full text obtained, and hand-searched for relevant content to determine eligibility. We used tables of contents and index sections of books to aid this hand searching.
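This class-based retrieval-and-screening sequence can be sketched in code. The sketch below is purely illustrative: the manual titles, traditions, and index entries are hypothetical, and the simple term lookup is a crude stand-in for the reviewer's hand-searching of tables of contents and indexes described above.

```python
# Sketch of eligibility screening for a class of publications (methods manuals),
# using tables of contents and index entries. All data here are hypothetical.

def contains_topic(manual, topic_terms):
    """A manual passes screening if any TOC or index entry mentions a topic term."""
    entries = manual["toc"] + manual["index"]
    return any(term in entry.lower() for entry in entries for term in topic_terms)

manuals = [
    {"title": "Grounded Theory Manual A",
     "toc": ["Coding", "Theoretical sampling"], "index": ["memo writing"]},
    {"title": "Case Study Manual B",
     "toc": ["Designing case studies"], "index": ["validity"]},
]

topic_terms = ["sampling", "saturation"]
eligible = [m["title"] for m in manuals if contains_topic(m, topic_terms)]
```

In practice this judgment cannot be fully automated (see the discussion of inconsistent terminology below), but the sketch captures the logic of screening a retrieved class of publications for topic-relevant content rather than relying on database metadata.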
Purposefully selecting literature on conceptual grounds
A final consideration in methods overviews relates to the type of analysis used to generate the review findings. Unlike quantitative systematic reviews where reviewers aim for accurate or unbiased quantitative estimates—something that requires identifying and selecting the literature exhaustively to obtain all relevant data available (i.e., a complete sample)—in methods overviews, reviewers must describe and interpret the relevant literature in qualitative terms to achieve review objectives. In other words, the aim in methods overviews is to seek coverage of the qualitative concepts relevant to the methods topic at hand. For example, in the overview of sampling in qualitative research [ 18 ], achieving review objectives entailed providing conceptual coverage of eight sampling-related topics that emerged as key domains. The following principle recognizes that literature sampling should therefore support generating qualitative conceptual data as the input to analysis.
Principle #4: Since the analytic findings of a systematic methods overview are generated through qualitative description and interpretation of the literature on a specified topic, selection of the literature should be guided by a purposeful strategy designed to achieve adequate conceptual coverage (i.e., representing an appropriate degree of variation in relevant ideas) of the topic according to objectives of the review.
Strategy #4: One strategy for choosing the purposeful approach to use in selecting the literature according to the review objectives is to consider whether those objectives imply exploring concepts either at a broad overview level, in which case combining maximum variation selection with a strategy that limits yield (e.g., critical case, politically important, or sampling for influence—described below) may be appropriate; or in depth, in which case purposeful approaches aimed at revealing innovative cases will likely be necessary.
In the methods overview on sampling, the implied scope was broad since we set out to review publications on sampling across three divergent qualitative research traditions—grounded theory, phenomenology, and case study—to facilitate making informative conceptual comparisons. Such an approach would be analogous to maximum variation sampling.
At the same time, the purpose of that review was to critically interrogate the clarity, consistency, and comprehensiveness of literature from these traditions that was “most likely to have widely influenced students’ and researchers’ ideas about sampling” (p. 1774) [ 18 ]. In other words, we explicitly set out to review and critique the most established and influential (and therefore dominant) literature, since this represents a common basis of knowledge among students and researchers seeking understanding or practical guidance on sampling in qualitative research. To achieve this objective, we purposefully sampled publications according to the criterion of influence , which we operationalized as how often an author or publication has been referenced in print or informal discourse. This second sampling approach also limited the literature we needed to consider within our broad scope review to a manageable amount.
To operationalize this strategy of sampling for influence , we sought to identify both the most influential authors within a qualitative research tradition (all of whose citations were subsequently screened) and the most influential publications on the topic of interest by non-influential authors. This involved a flexible approach that combined multiple indicators of influence to avoid the dilemma that any single indicator might provide inadequate coverage. These indicators included bibliometric data (h-index for author influence [ 22 ]; number of cites for publication influence), expert opinion, and cross-references in the literature (i.e., snowball sampling). As a final selection criterion, a publication was included only if it made an original contribution in terms of novel guidance regarding sampling or a related concept; thus, purely secondary sources were excluded. Publish or Perish software (Anne-Wil Harzing; available at http://www.harzing.com/resources/publish-or-perish ) was used to generate bibliometric data via the Google Scholar database. Figure 1 illustrates how identification and selection in the methods overview on sampling was a multi-faceted and iterative process. The authors selected as influential, and the publications selected for inclusion or exclusion are listed in Additional file 1 (Matrices 1, 2a, 2b).
Fig. 1 Literature identification and selection process used in the methods overview on sampling [ 18 ]
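The multi-indicator logic of sampling for influence can be sketched as follows. This is an illustrative sketch only, not a tool used in the review: the author names, indicator values, and thresholds are all hypothetical, and in the actual overview the bibliometric figures came from Publish or Perish queries against Google Scholar.

```python
# Illustrative sketch of "sampling for influence" using multiple indicators.
# Any single indicator passing its (hypothetical) threshold suffices, which
# avoids the dilemma of relying on one potentially inadequate measure.

from dataclasses import dataclass

@dataclass
class AuthorRecord:
    name: str
    h_index: int = 0             # bibliometric indicator of author influence
    expert_nominations: int = 0  # times named by consulted experts
    cross_references: int = 0    # times cited in already-included publications

def is_influential(a: AuthorRecord,
                   h_min: int = 20, nom_min: int = 1, xref_min: int = 3) -> bool:
    """Union of indicators: any one passing its threshold flags the author."""
    return (a.h_index >= h_min
            or a.expert_nominations >= nom_min
            or a.cross_references >= xref_min)

authors = [
    AuthorRecord("Author A", h_index=45),
    AuthorRecord("Author B", expert_nominations=2),
    AuthorRecord("Author C", h_index=8, cross_references=1),
]
influential = [a.name for a in authors if is_influential(a)]
```

The union (rather than intersection) of indicators reflects the flexible approach described above, in which each indicator compensates for the others' blind spots.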
In summary, the strategies of seeking maximum variation and sampling for influence were employed in the sampling overview to meet the specific review objectives described. Reviewers will need to consider the full range of purposeful literature sampling approaches at their disposal in deciding what best matches the specific aims of their own reviews. Suri [ 10 ] has recently retooled Patton’s well-known typology of purposeful sampling strategies (originally intended for primary research) for application to literature synthesis, providing a useful resource in this respect.
Data abstraction
The purpose of data abstraction in rigorous literature reviews is to locate and record all data relevant to the topic of interest from the full text of included publications, making them available for subsequent analysis. Conventionally, a data abstraction form—consisting of numerous distinct conceptually defined fields to which corresponding information from the source publication is recorded—is developed and employed. There are several challenges, however, to the processes of developing the abstraction form and abstracting the data itself when conducting methods overviews, which we address here. Some of these problems and their solutions may be familiar to those who have conducted qualitative literature syntheses, which are similarly conceptual.
Iteratively defining conceptual information to abstract
In the overview on sampling [ 18 ], while we surveyed multiple sources beforehand to develop a list of concepts relevant for abstraction (e.g., purposeful sampling strategies, saturation, sample size), there was no way for us to anticipate some concepts prior to encountering them in the review process. Indeed, in many cases, reviewers are unable to determine the complete set of methods-related concepts that will be the focus of the final review a priori without having systematically reviewed the publications to be included. Thus, defining what information to abstract beforehand may not be feasible.
Principle #5: Considering the potential impracticality of defining a complete set of relevant methods-related concepts from a body of literature one has not yet systematically read, selecting and defining fields for data abstraction must often be undertaken iteratively. Thus, concepts to be abstracted can be expected to grow and change as data abstraction proceeds.
Strategy #5: Reviewers can develop an initial form or set of concepts for abstraction purposes according to standard methods (e.g., incorporating expert feedback, pilot testing) and remain attentive to the need to iteratively revise it as concepts are added or modified during the review. Reviewers should document revisions and return to re-abstract data from previously abstracted publications as the new data requirements are determined.
In the sampling overview [ 18 ], we developed and maintained the abstraction form in Microsoft Word. We derived the initial set of abstraction fields from our own knowledge of relevant sampling-related concepts, consultation with local experts, and reviewing a pilot sample of publications. Since the publications in this review included a large proportion of books, the abstraction process often began by flagging the broad sections within a publication containing topic-relevant information for detailed review to identify text to abstract. When reviewing flagged text, the reviewer occasionally encountered an unanticipated concept significant enough to warrant being added as a new field to the abstraction form. For example, a field was added to capture how authors described the timing of sampling decisions, whether before (a priori) or after (ongoing) starting data collection, or whether this was unclear. In these cases, we systematically documented the modification to the form and returned to previously abstracted publications to abstract any information that might be relevant to the new field.
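The revise-and-re-abstract loop just described can be sketched as a small data structure. The field and publication names below are hypothetical; the overview itself maintained the abstraction form in Microsoft Word, not in code.

```python
# Sketch of iterative abstraction-form management. Adding a field after some
# publications have been abstracted flags those publications for re-abstraction,
# and each modification is logged for transparency.

class AbstractionForm:
    def __init__(self, fields):
        self.fields = list(fields)
        self.revision_log = []   # documents each modification to the form
        self.abstracted = {}     # publication -> {field: abstracted text}

    def abstract(self, publication, data):
        """Record text for each current field (empty if none found)."""
        self.abstracted[publication] = {f: data.get(f, "") for f in self.fields}

    def add_field(self, new_field, rationale):
        """Add a field mid-review; return publications needing re-abstraction."""
        self.fields.append(new_field)
        self.revision_log.append((new_field, rationale))
        return [pub for pub, rec in self.abstracted.items()
                if new_field not in rec]

form = AbstractionForm(["purposeful_sampling", "saturation"])
form.abstract("Methods Manual X", {"saturation": "…no new codes emerging…"})
to_revisit = form.add_field("timing_of_sampling_decisions",
                            "encountered a priori vs. ongoing distinction")
```

The returned list makes the principle concrete: every publication abstracted before the form changed must be revisited for the new field.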
The logic of this strategy is analogous to the logic used in a form of research synthesis called best fit framework synthesis (BFFS) [ 23 – 25 ]. In that method, reviewers initially code evidence using an a priori framework they have selected. When evidence cannot be accommodated by the selected framework, reviewers then develop new themes or concepts from which they construct a new expanded framework. Both the strategy proposed and the BFFS approach to research synthesis are notable for their rigorous and transparent means to adapt a final set of concepts to the content under review.
Accounting for inconsistent terminology
An important complication affecting the abstraction process in methods overviews is that the language used by authors to describe methods-related concepts can easily vary across publications. For example, authors from different qualitative research traditions often use different terms for similar methods-related concepts. Furthermore, as we found in the sampling overview [ 18 ], there may be cases where no identifiable term, phrase, or label for a methods-related concept is used at all, and a description of it is given instead. This can make searching the text for relevant concepts based on keywords unreliable.
Principle #6: Since accepted terms may not be used consistently to refer to methods concepts, it is necessary to rely on the definitions for concepts, rather than keywords, to identify relevant information in the publication to abstract.
Strategy #6: An effective means to systematically identify relevant information is to develop and iteratively adjust written definitions for key concepts (corresponding to abstraction fields) that are consistent with, and as inclusive as possible of, the literature reviewed. Reviewers then seek information that matches these definitions (rather than keywords) when scanning a publication for relevant data to abstract.
In the abstraction process for the sampling overview [ 18 ], we noted several concepts of interest to the review for which abstraction by keyword was particularly problematic due to inconsistent terminology across publications: sampling , purposeful sampling , sampling strategy , and saturation (for examples, see Additional file 1 , Matrices 3a, 3b, 4). We iteratively developed definitions for these concepts by abstracting text from publications that either provided an explicit definition or from which an implicit definition could be derived, recording it in fields dedicated to the concept’s definition. Using a method of constant comparison, we used text from definition fields to inform and modify a centrally maintained definition of the corresponding concept to optimize its fit and inclusiveness with the literature reviewed. Table 1 shows, as an example, the final definition constructed in this way for one of the central concepts of the review, qualitative sampling .
We applied iteratively developed definitions when making decisions about what specific text to abstract for an existing field, which allowed us to abstract concept-relevant data even if no recognized keyword was used. For example, this was the case for the sampling-related concept, saturation , where the relevant text available for abstraction in one publication [ 26 ]—“to continue to collect data until nothing new was being observed or recorded, no matter how long that takes”—was not accompanied by any term or label whatsoever.
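The distinction between keyword-based and definition-based identification can be illustrated with a minimal sketch. The saturation excerpt is the one quoted above from [ 26 ]; the definition text is a hypothetical working definition, and the substring check merely records whether the concept's label appears, standing in for the reviewer's judgment that a passage matches the working definition.

```python
# Sketch of recording abstracted text together with the basis on which it was
# identified: "keyword" when the concept's label appears in the excerpt,
# "definition" when only the working definition links the text to the concept.

concept_definitions = {
    # Hypothetical working definition, iteratively refined during the review.
    "saturation": ("the point at which continued data collection yields "
                   "no new information"),
}

def record_abstraction(log, concept, publication, excerpt):
    """Append an abstraction record; return how the excerpt was identified."""
    basis = "keyword" if concept in excerpt.lower() else "definition"
    log.append({"concept": concept, "publication": publication,
                "excerpt": excerpt, "basis": basis,
                "definition": concept_definitions[concept]})
    return basis

log = []
basis = record_abstraction(
    log, "saturation", "Publication [26]",
    "to continue to collect data until nothing new was being observed "
    "or recorded, no matter how long that takes")
```

Logging the basis of identification alongside each excerpt preserves an audit trail showing where keyword searching alone would have missed concept-relevant text.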
This comparative analytic strategy (and our approach to analysis more broadly as described in strategy #7, below) is analogous to the process of reciprocal translation —a technique first introduced for meta-ethnography by Noblit and Hare [ 27 ] that has since been recognized as a common element in a variety of qualitative metasynthesis approaches [ 28 ]. Reciprocal translation, taken broadly, involves making sense of a study’s findings in terms of the findings of the other studies included in the review. In practice, it has been operationalized in different ways. Melendez-Torres and colleagues developed a typology from their review of the metasynthesis literature, describing four overlapping categories of specific operations undertaken in reciprocal translation: visual representation, key paper integration, data reduction and thematic extraction, and line-by-line coding [ 28 ]. The approaches suggested in both strategies #6 and #7, with their emphasis on constant comparison, appear to fall within the line-by-line coding category.
Analysis
Generating credible and verifiable analytic interpretations
The analysis in a systematic methods overview must support its more general objective, which we suggested above is often to offer clarity and enhance collective understanding regarding a chosen methods topic. In our experience, this involves describing and interpreting the relevant literature in qualitative terms. Furthermore, any interpretative analysis required may entail reaching different levels of abstraction, depending on the more specific objectives of the review. For example, in the overview on sampling [ 18 ], we aimed to produce a comparative analysis of how multiple sampling-related topics were treated differently within and among different qualitative research traditions. To promote credibility of the review, however, not only should one seek a qualitative analytic approach that facilitates reaching varying levels of abstraction but that approach must also ensure that abstract interpretations are supported and justified by the source data and not solely the product of the analyst’s speculative thinking.
Principle #7: Considering the qualitative nature of the analysis required in systematic methods overviews, it is important to select an analytic method whose interpretations can be verified as being consistent with the literature selected, regardless of the level of abstraction reached.
Strategy #7: We suggest employing the constant comparative method of analysis [ 29 ] because it supports developing and verifying analytic links to the source data throughout progressively interpretive or abstract levels. In applying this approach, we advise rigorously documenting how supportive quotes or references to the original texts are carried forward in the successive steps of analysis to allow for easy verification.
The analytic approach used in the methods overview on sampling [ 18 ] comprised four explicit steps, progressing in level of abstraction—data abstraction, matrices, narrative summaries, and final analytic conclusions (Fig. 2 ). While we have positioned data abstraction as the second stage of the generic review process (prior to Analysis), above, we also considered it as an initial step of analysis in the sampling overview for several reasons. First, it involved a process of constant comparisons and iterative decision-making about the fields to add or define during development and modification of the abstraction form, through which we established the range of concepts to be addressed in the review. At the same time, abstraction involved continuous analytic decisions about what textual quotes (ranging in size from short phrases to numerous paragraphs) to record in the fields thus created. This constant comparative process was analogous to open coding in which textual data from publications was compared to conceptual fields (equivalent to codes) or to other instances of data previously abstracted when constructing definitions to optimize their fit with the overall literature as described in strategy #6. Finally, in the data abstraction step, we also recorded our first interpretive thoughts in dedicated fields, providing initial material for the more abstract analytic steps.
Fig. 2 Summary of progressive steps of analysis used in the methods overview on sampling [ 18 ]
In the second step of the analysis, we constructed topic-specific matrices , or tables, by copying relevant quotes from abstraction forms into the appropriate cells of matrices (for the complete set of analytic matrices developed in the sampling review, see Additional file 1 (matrices 3 to 10)). Each matrix ranged from one to five pages; row headings, nested three-deep, identified the methodological tradition, author, and publication, respectively; and column headings identified the concepts, which corresponded to abstraction fields. Matrices thus allowed us to make further comparisons across methodological traditions, and between authors within a tradition. In the third step of analysis, we recorded our comparative observations as narrative summaries , in which we used illustrative quotes more sparingly. In the final step, we developed analytic conclusions based on the narrative summaries about the sampling-related concepts within each methodological tradition for which clarity, consistency, or comprehensiveness of the available guidance appeared to be lacking. Higher levels of analysis thus built logically from the lower levels, enabling us to verify analytic conclusions easily by tracing the support for claims back to the original text of the publications reviewed.
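The matrix step can be sketched as a simple keyed table. The traditions, authors, publications, and quotes below are invented for illustration; keying rows by (tradition, author, publication) mirrors the nested row headings, and keeping the full key with each quote preserves traceability back to the source publication.

```python
# Sketch of topic-specific matrices: rows keyed by
# (tradition, author, publication), columns keyed by concept.
# All names and quotes here are hypothetical.

from collections import defaultdict

matrix = defaultdict(dict)  # (tradition, author, publication) -> {concept: quote}

def add_cell(tradition, author, publication, concept, quote):
    matrix[(tradition, author, publication)][concept] = quote

add_cell("grounded theory", "Author A", "Manual, 3rd ed.",
         "saturation", "categories are saturated when no new properties emerge")
add_cell("phenomenology", "Author B", "Textbook, 2nd ed.",
         "saturation", "description is adequate when participant accounts converge")

def column_view(concept):
    """Gather one concept's cells by tradition to support cross-tradition comparison."""
    return {row[0]: cells[concept]
            for row, cells in matrix.items() if concept in cells}

views = column_view("saturation")
```

Reading down a column in this way is what enabled the comparative observations recorded as narrative summaries, while the row keys let any claim be traced back to a specific publication.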
Integrative versus interpretive methods overviews
The analytic product of systematic methods overviews is comparable to qualitative evidence syntheses, since both involve describing and interpreting the relevant literature in qualitative terms. Most qualitative synthesis approaches strive to produce new conceptual understandings that vary in level of interpretation. Dixon-Woods and colleagues [ 30 ] elaborate on a useful distinction, originating from Noblit and Hare [ 27 ], between integrative and interpretive reviews. Integrative reviews focus on summarizing available primary data and involve using largely secure and well-defined concepts to do so; definitions are used from an early stage to specify categories for abstraction (or coding) of data, which in turn supports their aggregation; they do not seek as their primary focus to develop or specify new concepts, although they may achieve some theoretical or interpretive functions. For interpretive reviews, meanwhile, the main focus is to develop new concepts and theories that integrate them, with the implication that the concepts developed become fully defined towards the end of the analysis. These two forms are not completely distinct, and “every integrative synthesis will include elements of interpretation, and every interpretive synthesis will include elements of aggregation of data” [ 30 ].
The example methods overview on sampling [ 18 ] could be classified as predominantly integrative because its primary goal was to aggregate influential authors’ ideas on sampling-related concepts; there were also, however, elements of interpretive synthesis since it aimed to develop new ideas about where clarity in guidance on certain sampling-related topics is lacking, and definitions for some concepts were flexible and not fixed until late in the review. We suggest that most systematic methods overviews will be classifiable as predominantly integrative (aggregative). Nevertheless, more highly interpretive methods overviews are also quite possible—for example, when the review objective is to provide a highly critical analysis for the purpose of generating new methodological guidance. In such cases, reviewers may need to sample more deeply (see strategy #4), specifically by selecting empirical research reports (i.e., to go beyond dominant or influential ideas in the methods literature) that are likely to feature innovations or instructive lessons in employing a given method.
In this paper, we have outlined tentative guidance in the form of seven principles and strategies on how to conduct systematic methods overviews, a review type in which methods-relevant literature is systematically analyzed with the aim of offering clarity and enhancing collective understanding regarding a specific methods topic. Our proposals include strategies for delimiting the set of publications to consider, searching beyond standard bibliographic databases, searching without the availability of relevant metadata, selecting publications on purposeful conceptual grounds, defining concepts and other information to abstract iteratively, accounting for inconsistent terminology, and generating credible and verifiable analytic interpretations. We hope the suggestions proposed will be useful to others undertaking reviews on methods topics in future.
As far as we are aware, this is the first published source of concrete guidance for conducting this type of review. It is important to note that our primary objective was to initiate methodological discussion by stimulating reflection on what rigorous methods for this type of review should look like, leaving the development of more complete guidance to future work. While derived from the experience of reviewing a single qualitative methods topic, we believe the principles and strategies provided are generalizable to overviews of qualitative and quantitative methods topics alike. However, it is expected that additional challenges and insights for conducting such reviews have yet to be identified. Thus, we propose that next steps for developing more definitive guidance should involve an attempt to collect and integrate other reviewers’ perspectives and experiences in conducting systematic methods overviews on a broad range of qualitative and quantitative methods topics. Formalized guidance and standards would improve the quality of future methods overviews, something we believe has important implications for advancing qualitative and quantitative methodology. When undertaken to a high standard, rigorous critical evaluations of the available methods guidance have significant potential to make implicit controversies explicit, and improve the clarity and precision of our understandings of problematic qualitative or quantitative methods issues.
A review process central to most types of rigorous reviews of empirical studies, which we did not explicitly address in a separate review step above, is quality appraisal. The reason we have not treated this as a separate step stems from the different objectives of the primary publications included in overviews of the methods literature (i.e., providing methodological guidance) compared to the primary publications included in the other established review types (i.e., reporting findings from single empirical studies). This is not to say that appraising quality of the methods literature is not an important concern for systematic methods overviews. Rather, appraisal is much more integral to (and difficult to separate from) the analysis step, in which we advocate appraising clarity, consistency, and comprehensiveness—the quality appraisal criteria that we suggest are appropriate for the methods literature. As a second important difference regarding appraisal, we currently advocate appraising the aforementioned aspects at the level of the literature in aggregate rather than at the level of individual publications. One reason for this is that methods guidance from individual publications generally builds on previous literature, and thus we feel that ahistorical judgments about comprehensiveness of single publications lack relevance and utility. Additionally, while different methods authors may express themselves less clearly than others, their guidance can nonetheless be highly influential and useful, and should therefore not be downgraded or ignored based on considerations of clarity—which raises questions about the alternative uses that quality appraisals of individual publications might have.
Finally, legitimate variability in the perspectives that methods authors wish to emphasize, and the levels of generality at which they write about methods, makes critiquing individual publications based on the criterion of clarity a complex and potentially problematic endeavor that is beyond the scope of this paper to address. By appraising the current state of the literature at a holistic level, reviewers stand to identify important gaps in understanding that represent valuable opportunities for further methodological development.
To summarize, the principles and strategies provided here may be useful to those seeking to undertake their own systematic methods overview. Additional work is needed, however, to establish guidance that is comprehensive by comparing the experiences from conducting a variety of methods overviews on a range of methods topics. Efforts that further advance standards for systematic methods overviews have the potential to promote high-quality critical evaluations that produce conceptually clear and unified understandings of problematic methods topics, thereby accelerating the advance of research methodology.
Hutton JL, Ashcroft R. What does “systematic” mean for reviews of methods? In: Black N, Brazier J, Fitzpatrick R, Reeves B, editors. Health services research methods: a guide to best practice. London: BMJ Publishing Group; 1998. p. 249–54.
Higgins JPT, Green S, editors. Cochrane handbook for systematic reviews of interventions. Version 5.1.0. The Cochrane Collaboration; 2011.
Centre for Reviews and Dissemination: Systematic reviews: CRD’s guidance for undertaking reviews in health care . York: Centre for Reviews and Dissemination; 2009.
Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JPA, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ. 2009;339:b2700.
Barnett-Page E, Thomas J. Methods for the synthesis of qualitative research: a critical review. BMC Med Res Methodol. 2009;9(1):59.
Kastner M, Tricco AC, Soobiah C, Lillie E, Perrier L, Horsley T, Welch V, Cogo E, Antony J, Straus SE. What is the most appropriate knowledge synthesis method to conduct a review? Protocol for a scoping review. BMC Med Res Methodol. 2012;12(1):1–1.
Booth A, Noyes J, Flemming K, Gerhardus A. Guidance on choosing qualitative evidence synthesis methods for use in health technology assessments of complex interventions. In: Integrate-HTA. 2016.
Booth A, Sutton A, Papaioannou D. Systematic approaches to successful literature review. 2nd ed. London: Sage; 2016.
Hannes K, Lockwood C. Synthesizing qualitative research: choosing the right approach. Chichester: Wiley-Blackwell; 2012.
Suri H. Towards methodologically inclusive research syntheses: expanding possibilities. New York: Routledge; 2014.
Campbell M, Egan M, Lorenc T, Bond L, Popham F, Fenton C, Benzeval M. Considering methodological options for reviews of theory: illustrated by a review of theories linking income and health. Syst Rev. 2014;3(1):1–11.
Cohen DJ, Crabtree BF. Evaluative criteria for qualitative research in health care: controversies and recommendations. Ann Fam Med. 2008;6(4):331–9.
Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57.
Moher D, Schulz KF, Simera I, Altman DG. Guidance for developers of health research reporting guidelines. PLoS Med. 2010;7(2):e1000217.
Moher D, Tetzlaff J, Tricco AC, Sampson M, Altman DG. Epidemiology and reporting characteristics of systematic reviews. PLoS Med. 2007;4(3):e78.
Chan AW, Altman DG. Epidemiology and reporting of randomised trials published in PubMed journals. Lancet. 2005;365(9465):1159–62.
Alshurafa M, Briel M, Akl EA, Haines T, Moayyedi P, Gentles SJ, Rios L, Tran C, Bhatnagar N, Lamontagne F, et al. Inconsistent definitions for intention-to-treat in relation to missing outcome data: systematic review of the methods literature. PLoS One. 2012;7(11):e49163.
Gentles SJ, Charles C, Ploeg J, McKibbon KA. Sampling in qualitative research: insights from an overview of the methods literature. Qual Rep. 2015;20(11):1772–89.
Harzing A-W, Alakangas S. Google Scholar, Scopus and the Web of Science: a longitudinal and cross-disciplinary comparison. Scientometrics. 2016;106(2):787–804.
Harzing A-WK, van der Wal R. Google Scholar as a new source for citation analysis. Ethics Sci Environ Polit. 2008;8(1):61–73.
Kousha K, Thelwall M. Google Scholar citations and Google Web/URL citations: a multi‐discipline exploratory analysis. J Assoc Inf Sci Technol. 2007;58(7):1055–65.
Hirsch JE. An index to quantify an individual’s scientific research output. Proc Natl Acad Sci U S A. 2005;102(46):16569–72.
Booth A, Carroll C. How to build up the actionable knowledge base: the role of ‘best fit’ framework synthesis for studies of improvement in healthcare. BMJ Quality Safety. 2015;24(11):700–8.
Carroll C, Booth A, Leaviss J, Rick J. “Best fit” framework synthesis: refining the method. BMC Med Res Methodol. 2013;13(1):37.
Carroll C, Booth A, Cooper K. A worked example of “best fit” framework synthesis: a systematic review of views concerning the taking of some potential chemopreventive agents. BMC Med Res Methodol. 2011;11(1):29.
Cohen MZ, Kahn DL, Steeves DL. Hermeneutic phenomenological research: a practical guide for nurse researchers. Thousand Oaks: Sage; 2000.
Noblit GW, Hare RD. Meta-ethnography: synthesizing qualitative studies. Newbury Park: Sage; 1988.
Melendez-Torres GJ, Grant S, Bonell C. A systematic review and critical appraisal of qualitative metasynthetic practice in public health to develop a taxonomy of operations of reciprocal translation. Res Synthesis Methods. 2015;6(4):357–71.
Glaser BG, Strauss A. The discovery of grounded theory. Chicago: Aldine; 1967.
Dixon-Woods M, Agarwal S, Young B, Jones D, Sutton A. Integrative approaches to qualitative and quantitative evidence. In: UK National Health Service. 2004. p. 1–44.
There was no funding for this work.
Availability of data and materials
The systematic methods overview used as a worked example in this article (Gentles SJ, Charles C, Ploeg J, McKibbon KA: Sampling in qualitative research: insights from an overview of the methods literature. The Qual Rep 2015, 20(11):1772-1789) is available from http://nsuworks.nova.edu/tqr/vol20/iss11/5 .
SJG wrote the first draft of this article, with CC contributing to drafting. All authors contributed to revising the manuscript. All authors except CC (deceased) approved the final draft. SJG, CC, KAB, and JP were involved in developing methods for the systematic methods overview on sampling.
The authors declare that they have no competing interests.
Consent for publication
Ethics approval and consent to participate
Author information
Authors and affiliations
Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada
Stephen J. Gentles, Cathy Charles & K. Ann McKibbon
Faculty of Social Work, University of Calgary, Alberta, Canada
David B. Nicholas
School of Nursing, McMaster University, Hamilton, Ontario, Canada
CanChild Centre for Childhood Disability Research, McMaster University, 1400 Main Street West, IAHS 408, Hamilton, ON, L8S 1C7, Canada
Stephen J. Gentles
Correspondence to Stephen J. Gentles .
Cathy Charles is deceased
Additional file 1: Analysis_matrices. (DOC 330 kb)
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.
About this article
Cite this article.
Gentles, S.J., Charles, C., Nicholas, D.B. et al. Reviewing the research methods literature: principles and strategies illustrated by a systematic overview of sampling in qualitative research. Syst Rev 5, 172 (2016). https://doi.org/10.1186/s13643-016-0343-0
Received : 06 June 2016
Accepted : 14 September 2016
Published : 11 October 2016
DOI : https://doi.org/10.1186/s13643-016-0343-0
- Systematic review
- Literature selection
- Research methods
- Research methodology
- Overview of methods
- Systematic methods overview
- Review methods
- Research article
- Open Access
- Published: 10 July 2008
Methods for the thematic synthesis of qualitative research in systematic reviews
- James Thomas 1 &
- Angela Harden 1
BMC Medical Research Methodology volume 8 , Article number: 45 ( 2008 ) Cite this article
There is a growing recognition of the value of synthesising qualitative research in the evidence base in order to facilitate effective and appropriate health care. In response to this, methods for undertaking these syntheses are currently being developed. Thematic analysis is a method that is often used to analyse data in primary qualitative research. This paper reports on the use of this type of analysis in systematic reviews to bring together and integrate the findings of multiple qualitative studies.
We describe thematic synthesis, outline several steps for its conduct and illustrate the process and outcome of this approach using a completed review of health promotion research. Thematic synthesis has three stages: the coding of text 'line-by-line'; the development of 'descriptive themes'; and the generation of 'analytical themes'. While the development of descriptive themes remains 'close' to the primary studies, the analytical themes represent a stage of interpretation whereby the reviewers 'go beyond' the primary studies and generate new interpretive constructs, explanations or hypotheses. The use of computer software can facilitate this method of synthesis; detailed guidance is given on how this can be achieved.
We used thematic synthesis to combine the studies of children's views and identified key themes to explore in the intervention studies. Most interventions were based in school and often combined learning about health benefits with 'hands-on' experience. The studies of children's views suggested that fruit and vegetables should be treated in different ways, and that messages should not focus on health warnings. Interventions that were in line with these suggestions tended to be more effective. Thematic synthesis enabled us to stay 'close' to the results of the primary studies, synthesising them in a transparent way, and facilitating the explicit production of new concepts and hypotheses.
We compare thematic synthesis to other methods for the synthesis of qualitative research, discussing issues of context and rigour. Thematic synthesis is presented as a tried and tested method that preserves an explicit and transparent link between conclusions and the text of primary studies; as such it preserves principles that have traditionally been important to systematic reviewing.
Peer Review reports
The systematic review is an important technology for the evidence-informed policy and practice movement, which aims to bring research closer to decision-making [ 1 , 2 ]. This type of review uses rigorous and explicit methods to bring together the results of primary research in order to provide reliable answers to particular questions [ 3 – 6 ]. The picture that is presented aims to be distorted neither by biases in the review process nor by biases in the primary research which the review contains [ 7 – 10 ]. Systematic review methods are well-developed for certain types of research, such as randomised controlled trials (RCTs). Methods for reviewing qualitative research in a systematic way are still emerging, and there is much ongoing development and debate [ 11 – 14 ].
In this paper we present one approach to the synthesis of findings of qualitative research, which we have called 'thematic synthesis'. We have developed and applied these methods within several systematic reviews that address questions about people's perspectives and experiences [ 15 – 18 ]. The context for this methodological development is a programme of work in health promotion and public health (HP & PH), mostly funded by the English Department of Health, at the EPPI-Centre, in the Social Science Research Unit at the Institute of Education, University of London in the UK. Early systematic reviews at the EPPI-Centre addressed the question 'what works?' and contained research testing the effects of interventions. However, policy makers and other review users also posed questions about intervention need, appropriateness and acceptability, and factors influencing intervention implementation. To address these questions, our reviews began to include a wider range of research, including research often described as 'qualitative'. We began to focus, in particular, on research that aimed to understand the health issue in question from the experiences and point of view of the groups of people targeted by HP&PH interventions (We use the term 'qualitative' research cautiously because it encompasses a multitude of research methods at the same time as an assumed range of epistemological positions. In practice it is often difficult to classify research as being either 'qualitative' or 'quantitative' as much research contains aspects of both [ 19 – 22 ]. Because the term is in common use, however, we will employ it in this paper).
When we started the work for our first series of reviews which included qualitative research in 1999 [ 23 – 26 ], there was very little published material that described methods for synthesising this type of research. We therefore experimented with a variety of techniques borrowed from standard systematic review methods and methods for analysing primary qualitative research [ 15 ]. In later reviews, we were able to refine these methods and began to apply thematic analysis in a more explicit way. The methods for thematic synthesis described in this paper have so far been used explicitly in three systematic reviews [ 16 – 18 ].
The review used as an example in this paper
To illustrate the steps involved in a thematic synthesis we draw on a review of the barriers to, and facilitators of, healthy eating amongst children aged four to 10 years old [ 17 ]. The review was commissioned by the Department of Health, England to inform policy about how to encourage children to eat healthily in the light of recent surveys highlighting that British children are eating less than half the recommended five portions of fruit and vegetables per day. While we focus on the aspects of the review that relate to qualitative studies, the review was broader than this and combined answering traditional questions of effectiveness, through reviewing controlled trials, with questions relating to children's views of healthy eating, which were answered using qualitative studies. The qualitative studies were synthesised using 'thematic synthesis' – the subject of this paper. We compared the effectiveness of interventions which appeared to be in line with recommendations from the thematic synthesis with those that did not. This enabled us to see whether the understandings we had gained from the children's views helped us to explain differences in the effectiveness of different interventions: the thematic synthesis had enabled us to generate hypotheses which could be tested against the findings of the quantitative studies – hypotheses that we could not have generated without the thematic synthesis. The methods of this part of the review are published in Thomas et al . [ 27 ] and are discussed further in Harden and Thomas [ 21 ].
Qualitative research and systematic reviews
The act of seeking to synthesise qualitative research means stepping into more complex and contested territory than is the case when only RCTs are included in a review. First, methods are much less developed in this area, with fewer completed reviews available from which to learn, and second, the whole enterprise of synthesising qualitative research is itself hotly debated. Qualitative research, it is often proposed, is not generalisable and is specific to a particular context, time and group of participants. Thus, in bringing such research together, reviewers are open to the charge that they de-contextualise findings and wrongly assume that these are commensurable [ 11 , 13 ]. These are serious concerns which it is not the purpose of this paper to contest. We note, however, that a strong case has been made for qualitative research to be valued for the potential it has to inform policy and practice [ 11 , 28 – 30 ]. In our experience, users of reviews are interested in the answers that only qualitative research can provide, but are not able to handle the deluge of data that would result if they tried to locate, read and interpret all the relevant research themselves. Thus, if we acknowledge the unique importance of qualitative research, we need also to recognise that methods are required to bring its findings together for a wide audience – at the same time as preserving and respecting its essential context and complexity.
The earliest published work that we know of that deals with methods for synthesising qualitative research was written in 1988 by Noblit and Hare [ 31 ]. This book describes the way that ethnographic research might be synthesised, but the method has been shown to be applicable to qualitative research beyond ethnography [ 32 , 11 ]. As well as meta-ethnography, other methods have been developed more recently, including 'meta-study' [ 33 ], 'critical interpretive synthesis' [ 34 ] and 'metasynthesis' [ 13 ].
Many of the newer methods being developed have much in common with meta-ethnography, as originally described by Noblit and Hare, and often state explicitly that they are drawing on this work. In essence, this method involves identifying key concepts from studies and translating them into one another. The term 'translating' in this context refers to the process of taking concepts from one study and recognising the same concepts in another study, though they may not be expressed using identical words. Explanations or theories associated with these concepts are also extracted and a 'line of argument' may be developed, pulling corroborating concepts together and, crucially, going beyond the content of the original studies (though 'refutational' concepts might not be amenable to this process). Some have claimed that this notion of 'going beyond' the primary studies is a critical component of synthesis, and is what distinguishes it from the types of summaries of findings that typify traditional literature reviews [e.g. [ 32 ], p209]. In the words of Margarete Sandelowski, "metasyntheses are integrations that are more than the sum of parts, in that they offer novel interpretations of findings. These interpretations will not be found in any one research report but, rather, are inferences derived from taking all of the reports in a sample as a whole" [[ 14 ], p1358].
Thematic analysis has been identified as one of a range of potential methods for research synthesis alongside meta-ethnography and 'metasynthesis', though precisely what the method involves is unclear, and there are few examples of it being used for synthesising research [ 35 ]. We have adopted the term 'thematic synthesis', as we translated methods for the analysis of primary research – often termed 'thematic' – for use in systematic reviews [ 36 – 38 ]. As Boyatzis [[ 36 ], p4] has observed, thematic analysis is "not another qualitative method but a process that can be used with most, if not all, qualitative methods..." . Our approach concurs with this conceptualisation of thematic analysis, since the method we employed draws on other established methods but uses techniques commonly described as 'thematic analysis' in order to formalise the identification and development of themes.
We now move to a description of the methods we used in our example systematic review. While this paper has the traditional structure for reporting the results of a research project, the detailed methods (e.g. precise terms we used for searching) and results are available online. This paper identifies the particular issues that relate especially to reviewing qualitative research systematically and then to describing the activity of thematic synthesis in detail.
When searching for studies for inclusion in a 'traditional' statistical meta-analysis, the aim of searching is to locate all relevant studies. Failing to do this can undermine the statistical models that underpin the analysis and bias the results. However, Doyle [[ 39 ], p326] states that, "like meta-analysis, meta-ethnography utilizes multiple empirical studies but, unlike meta-analysis, the sample is purposive rather than exhaustive because the purpose is interpretive explanation and not prediction" . This suggests that it may not be necessary to locate every available study because, for example, the results of a conceptual synthesis will not change if ten rather than five studies contain the same concept, but will depend on the range of concepts found in the studies, their context, and whether they are in agreement or not. Thus, principles such as aiming for 'conceptual saturation' might be more appropriate when planning a search strategy for qualitative research, although it is not yet clear how these principles can be applied in practice. Similarly, other principles from primary qualitative research methods may also be 'borrowed' such as deliberately seeking studies which might act as negative cases, aiming for maximum variability and, in essence, designing the resulting set of studies to be heterogeneous, in some ways, instead of achieving the homogeneity that is often the aim in statistical meta-analyses.
However you look, qualitative research is difficult to find [ 40 – 42 ]. In our review, it was not possible to rely on simple electronic searches of databases. We needed to search extensively in 'grey' literature, ask authors of relevant papers if they knew of more studies, and look especially for book chapters, and we spent a lot of effort screening titles and abstracts by hand and looking through journals manually. In this sense, while we were not driven by the statistical imperative of locating every relevant study, when it actually came down to searching, we found that there was very little difference in the methods we had to use to find qualitative studies compared to the methods we use when searching for studies for inclusion in a meta-analysis.
Assessing the quality of qualitative research has attracted much debate and there is little consensus regarding how quality should be assessed, who should assess quality, and, indeed, whether quality can or should be assessed in relation to 'qualitative' research at all [ 43 , 22 , 44 , 45 ]. We take the view that the quality of qualitative research should be assessed to avoid drawing unreliable conclusions. However, since there is little empirical evidence on which to base decisions for excluding studies based on quality assessment, we took the approach in this review to use 'sensitivity analyses' (described below) to assess the possible impact of study quality on the review's findings.
In our example review we assessed our studies according to 12 criteria, which were derived from existing sets of criteria proposed for assessing the quality of qualitative research [ 46 – 49 ], principles of good practice for conducting social research with children [ 50 ], and whether studies employed appropriate methods for addressing our review questions. The 12 criteria covered three main quality issues. Five related to the quality of the reporting of a study's aims, context, rationale, methods and findings (e.g. was there an adequate description of the sample used and the methods for how the sample was selected and recruited?). A further four criteria related to the sufficiency of the strategies employed to establish the reliability and validity of data collection tools and methods of analysis, and hence the validity of the findings. The final three criteria related to the assessment of the appropriateness of the study methods for ensuring that findings about the barriers to, and facilitators of, healthy eating were rooted in children's own perspectives (e.g. were data collection methods appropriate for helping children to express their views?).
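As a rough sketch of how such an appraisal can feed the sensitivity analyses mentioned above, the following assumes a hypothetical scoring scheme (simply counting criteria met) and invented study data; only the 5/4/3 grouping of the 12 criteria comes from the review itself:

```python
# 12 criteria in three groups, per the review: 5 on reporting quality,
# 4 on reliability/validity strategies, 3 on rootedness in children's
# own perspectives. The scoring scheme below is hypothetical.
CRITERIA = (["reporting"] * 5 + ["reliability_validity"] * 4
            + ["child_perspective"] * 3)

def quality_score(met):
    """Count how many of the 12 criteria a study meets.
    `met` is a list of 12 booleans, one per criterion."""
    assert len(met) == len(CRITERIA)
    return sum(met)

def sensitivity_subset(studies, threshold):
    """Keep only studies meeting at least `threshold` criteria, so the
    synthesis can be re-run on this subset and compared with the full set."""
    return [name for name, met in studies if quality_score(met) >= threshold]

# Two invented studies: one meets all criteria, one only half
studies = [
    ("study_A", [True] * 12),
    ("study_B", [True] * 6 + [False] * 6),
]
```

Comparing the themes produced with and without the lower-scoring studies then shows whether study quality changes the review's findings, without excluding any study outright.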
Extracting data from studies
One issue which is difficult to deal with when synthesising 'qualitative' studies is 'what counts as data' or 'findings'? This problem is easily addressed when a statistical meta-analysis is being conducted: the numeric results of RCTs – for example, the mean difference in outcome between the intervention and control – are taken from published reports and are entered into the software package being used to calculate the pooled effect size [ 3 , 51 ].
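Since the pooling computation is the one fully mechanical step in that contrast, a minimal sketch may be useful; it assumes a simple fixed-effect, inverse-variance model, and the three studies and their numbers are invented for illustration rather than taken from any review:

```python
import math

def pooled_mean_difference(effects):
    """Fixed-effect inverse-variance pooling of per-study mean differences.
    `effects` is a list of (mean_difference, standard_error) pairs taken
    from each trial's published report."""
    weights = [1.0 / se ** 2 for _, se in effects]   # w_i = 1 / SE_i^2
    pooled = sum(w * d for (d, _), w in zip(effects, weights)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))        # SE of the pooled estimate
    return pooled, pooled_se

# Three hypothetical trials: (mean difference, standard error)
studies = [(0.30, 0.10), (0.10, 0.20), (0.50, 0.25)]
est, se = pooled_mean_difference(studies)
```

The point of the sketch is the contrast drawn in the text: once the numbers are abstracted, the synthesis of trial results is arithmetic, whereas no comparably mechanical operation exists for qualitative findings.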
Deciding what to abstract from the published report of a 'qualitative' study is much more difficult. Campbell et al . [ 11 ] extracted what they called the 'key concepts' from the qualitative studies they found about patients' experiences of diabetes and diabetes care. However, finding the key concepts in 'qualitative' research is not always straightforward either. As Sandelowski and Barroso [ 52 ] discovered, identifying the findings in qualitative research can be complicated by varied reporting styles or the misrepresentation of data as findings (as for example when data are used to 'let participants speak for themselves'). Sandelowski and Barroso [ 53 ] have argued that the findings of qualitative (and, indeed, all empirical) research are distinct from the data upon which they are based, the methods used to derive them, externally sourced data, and researchers' conclusions and implications.
In our example review, while it was relatively easy to identify 'data' in the studies – usually in the form of quotations from the children themselves – it was often difficult to identify key concepts or succinct summaries of findings, especially for studies that had undertaken relatively simple analyses and had not gone much further than describing and summarising what the children had said. To resolve this problem we took study findings to be all of the text labelled as 'results' or 'findings' in study reports – though we also found 'findings' in the abstracts which were not always reported in the same way in the text. Study reports ranged in size from a few pages to full final project reports. We entered all the results of the studies verbatim into QSR's NVivo software for qualitative data analysis. Where we had the documents in electronic form this process was straightforward even for large amounts of text. When electronic versions were not available, the results sections were either re-typed or scanned in using a flat-bed or pen scanner. (We have since adapted our own reviewing system, 'EPPI-Reviewer' [ 54 ], to handle this type of synthesis and the screenshots below show this software.)
Detailed methods for thematic synthesis
The synthesis took the form of three stages which overlapped to some degree: the free line-by-line coding of the findings of primary studies; the organisation of these 'free codes' into related areas to construct 'descriptive' themes; and the development of 'analytical' themes.
Stages one and two: coding text and developing descriptive themes
In our children and healthy eating review, we originally planned to extract and synthesise study findings according to our review questions regarding the barriers to, and facilitators of, healthy eating amongst children. It soon became apparent, however, that few study findings addressed these questions directly and it appeared that we were in danger of ending up with an empty synthesis. We were also concerned about imposing the a priori framework implied by our review questions onto study findings without allowing for the possibility that a different or modified framework may be a better fit. We therefore temporarily put our review questions to one side and started from the study findings themselves to conduct a thematic analysis.
There were eight relevant qualitative studies examining children's views of healthy eating. We entered the verbatim findings of these studies into our database. Three reviewers then independently coded each line of text according to its meaning and content. Figure 1 illustrates this line-by-line coding using our specialist reviewing software, EPPI-Reviewer, which includes a component designed to support thematic synthesis. The text which was taken from the report of the primary study is on the left and codes were created inductively to capture the meaning and content of each sentence. Codes could be structured, either in a tree form (as shown in the figure) or as 'free' codes – without a hierarchical structure.
Figure 1: Line-by-line coding in EPPI-Reviewer.
The use of line-by-line coding enabled us to undertake what has been described as one of the key tasks in the synthesis of qualitative research: the translation of concepts from one study to another [ 32 , 55 ]. However, this process should not be regarded as a simple one of translation. As we coded each new study we added to our 'bank' of codes and developed new ones when necessary. As well as translating concepts between studies, we had therefore already begun the process of synthesis (for another account of this process, see Doyle [[ 39 ], p331]). Every sentence had at least one code applied, and most were categorised using several codes (e.g. 'children prefer fruit to vegetables' or 'why eat healthily?'). Before completing this stage of the synthesis, we also examined all the text to which a given code had been applied, to check consistency of interpretation and to see whether additional levels of coding were needed. (In grounded theory this is termed 'axial' coding; see Fisher [ 55 ] for further discussion of the application of axial coding in research synthesis.) This process created a total of 36 initial codes. For example, some of the text we coded as "bad food = nice, good food = awful" from one study [ 56 ] was:
'All the things that are bad for you are nice and all the things that are good for you are awful.' (Boys, year 6) [[ 56 ], p74]
'All adverts for healthy stuff go on about healthy things. The adverts for unhealthy things tell you how nice they taste.' [[ 56 ], p75]
Some children reported throwing away foods they knew had been put in because they were 'good for you' and only ate the crisps and chocolate . [[ 56 ], p75]
Reviewers looked for similarities and differences between the codes in order to start grouping them into a hierarchical tree structure. New codes were created to capture the meaning of groups of initial codes. This process resulted in a tree structure with several layers to organise a total of 12 descriptive themes (Figure 2 ). For example, the first layer divided the 12 themes according to whether they were concerned with children's understandings of healthy eating or with influences on children's food choice. The above example, about children's preferences for food, was placed in both areas, since the findings related both to children's reactions to the foods they were given and to how they behaved when given the choice over what foods they might eat. A draft summary of the findings across the studies, organised by the 12 descriptive themes, was then written by one of the review authors. Two other review authors commented on this draft and a final version was agreed.
Figure 2: Relationships between descriptive themes.
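The grouping of free codes into a layered tree of descriptive themes might be represented as a nested mapping. The top-level split mirrors the one described in the text; the lower-level groupings and the helper function are illustrative assumptions, not the review's actual tree.

```python
# Hypothetical sketch of the hierarchical tree of descriptive themes.
# A single initial code may sit under more than one top-level theme,
# as with the example about children's food preferences.

theme_tree = {
    "understandings of healthy eating": {
        "meanings of food": ["bad food = nice, good food = awful"],
    },
    "influences on children's food choice": {
        "food preferences": ["bad food = nice, good food = awful",
                             "children prefer fruit to vegetables"],
        "roles and responsibilities": ["why eat healthily?"],
    },
}

def codes_under(tree: dict, top_theme: str) -> set[str]:
    """Collect all initial codes grouped anywhere under a top-level theme."""
    return {code
            for group in tree[top_theme].values()
            for code in group}
```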
Stage three: generating analytical themes
Up until this point, we had produced a synthesis which kept very close to the original findings of the included studies. The findings of each study had been combined into a whole via a listing of themes which described children's perspectives on healthy eating. However, we did not yet have a synthesis product that addressed directly the concerns of our review – regarding how to promote healthy eating, in particular fruit and vegetable intake, amongst children. Neither had we 'gone beyond' the findings of the primary studies and generated additional concepts, understandings or hypotheses. As noted earlier, the idea or step of 'going beyond' the content of the original studies has been identified by some as the defining characteristic of synthesis [ 32 , 14 ].
This stage of a qualitative synthesis is the most difficult to describe and is, potentially, the most controversial, since it is dependent on the judgement and insights of the reviewers. The equivalent stage in meta-ethnography is the development of 'third order interpretations' which go beyond the content of original studies [ 32 , 11 ]. In our example, the step of 'going beyond' the content of the original studies was achieved by using the descriptive themes that emerged from our inductive analysis of study findings to answer the review questions we had temporarily put to one side. Reviewers inferred barriers and facilitators from the views children were expressing about healthy eating or food in general, captured by the descriptive themes, and then considered the implications of children's views for intervention development. Each reviewer first did this independently and then as a group. Through this discussion more abstract or analytical themes began to emerge. The barriers and facilitators and implications for intervention development were examined again in light of these themes and changes made as necessary. This cyclical process was repeated until the new themes were sufficiently abstract to describe and/or explain all of our initial descriptive themes, our inferred barriers and facilitators and implications for intervention development.
For example, five of the 12 descriptive themes concerned the influences on children's choice of foods (food preferences, perceptions of health benefits, knowledge behaviour gap, roles and responsibilities, non-influencing factors). From these, reviewers inferred several barriers and implications for intervention development. Children identified readily that taste was the major concern for them when selecting food and that health was either a secondary factor or, in some cases, a reason for rejecting food. Children also felt that buying healthy food was not a legitimate use of their pocket money, which they would use to buy sweets that could be enjoyed with friends. These perspectives indicated to us that branding fruit and vegetables as 'tasty' rather than 'healthy' might be more effective in increasing consumption. As one child noted astutely, 'All adverts for healthy stuff go on about healthy things. The adverts for unhealthy things tell you how nice they taste.' [[ 56 ], p75]. We captured this line of argument in the analytical theme entitled 'Children do not see it as their role to be interested in health'. Altogether, this process resulted in the generation of six analytical themes which were associated with ten recommendations for interventions.
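One way to keep this judgement-dependent stage transparent is to record, for each analytical theme, the descriptive themes it explains alongside the inferred barriers and intervention implications. The structure below is an illustrative sketch (content paraphrased from the worked example), not a prescribed format.

```python
# Hypothetical record linking one analytical theme back to the
# descriptive themes, inferred barriers and intervention implications
# from which it was developed. Content is paraphrased; the structure
# itself is illustrative only.

analytical_themes = {
    "children do not see it as their role to be interested in health": {
        "descriptive_themes": ["food preferences",
                               "perceptions of health benefits",
                               "roles and responsibilities"],
        "inferred_barriers": ["health is secondary to taste, or a reason to reject food",
                              "buying healthy food is not a legitimate use of pocket money"],
        "implications": ["brand fruit and vegetables as 'tasty' rather than 'healthy'"],
    },
}

def themes_explaining(descriptive_theme: str) -> list[str]:
    """Which analytical themes account for a given descriptive theme?"""
    return [name for name, detail in analytical_themes.items()
            if descriptive_theme in detail["descriptive_themes"]]
```

Keeping these links explicit supports the cyclical checking the authors describe: each new analytical theme can be tested against the descriptive themes it claims to explain.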
Six main issues emerged from the studies of children's views: (1) children do not see it as their role to be interested in health; (2) children do not see messages about future health as personally relevant or credible; (3) fruit, vegetables and confectionery have very different meanings for children; (4) children actively seek ways to exercise their own choices with regard to food; (5) children value eating as a social occasion; and (6) children see the contradiction between what is promoted in theory and what adults provide in practice. The review found that most interventions were based in school (though frequently with parental involvement) and often combined learning about the health benefits of fruit and vegetables with 'hands-on' experience in the form of food preparation and taste-testing. Interventions targeted at people with particular risk factors worked better than others, and multi-component interventions that combined the promotion of physical activity with healthy eating did not work as well as those that only concentrated on healthy eating. The studies of children's views suggested that fruit and vegetables should be treated in different ways in interventions, and that messages should not focus on health warnings. Interventions that were in line with these suggestions tended to be more effective than those which were not.
Context and rigour in thematic synthesis
The process of translation, through the development of descriptive and analytical themes, can be carried out in a rigorous way that facilitates transparency of reporting. Since we aim to produce a synthesis that both generates 'abstract and formal theories' that are nevertheless 'empirically faithful to the cases from which they were developed' [[ 53 ], p1371], we see the explicit recording of the development of themes as being central to the method. The use of software as described can facilitate this by allowing reviewers to examine the contribution made to their findings by individual studies, groups of studies, or sub-populations within studies.
Some may argue against the synthesis of qualitative research on the grounds that the findings of individual studies are de-contextualised and that concepts identified in one setting are not applicable to others [ 32 ]. However, the act of synthesis could be viewed as similar to the role of a research user when reading a piece of qualitative research and deciding how useful it is to their own situation. In the case of synthesis, reviewers translate themes and concepts from one situation to another and should always check that each transfer is valid and consider whether there are any reasons that understandings gained in one context might not be transferred to another. We attempted to preserve context by providing structured summaries of each study detailing aims, methods and methodological quality, and setting and sample. This meant that readers of our review were able to judge for themselves whether or not the contexts of the studies the review contained were similar to their own. In the synthesis we also checked whether the emerging findings really were transferable across different study contexts. For example, we tried throughout the synthesis to distinguish between participants (e.g. boys and girls) where the primary research had made an appropriate distinction. We then looked to see whether some of our synthesis findings could be attributed to a particular group of children or setting. In the event, we did not find any themes that belonged to a specific group, but another outcome of this process was a realisation that the contextual information given in the reports of studies was very restricted indeed. It was therefore difficult to make the best use of context in our synthesis.
In checking that we were not translating concepts into situations where they did not belong, we were following a principle that others have followed when using synthesis methods to build grounded formal theory: that of grounding a text in the context in which it was constructed. As Margaret Kearney has noted "the conditions under which data were collected, analysis was done, findings were found, and products were written for each contributing report should be taken into consideration in developing a more generalized and abstract model" [[ 14 ], p1353]. Britten et al . [ 32 ] suggest that it may be important to make a deliberate attempt to include studies conducted across diverse settings to achieve the higher level of abstraction that is aimed for in a meta-ethnography.
Study quality and sensitivity analyses
We assessed the 'quality' of our studies with regard to the degree to which they represented the views of their participants. In doing this, we were locating the concept of 'quality' within the context of the purpose of our review – children's views – and not necessarily the context of the primary studies themselves. Our 'hierarchy of evidence', therefore, did not prioritise the research design of studies but emphasised the ability of the studies to answer our review question. A traditional systematic review of controlled trials would contain a quality assessment stage, the purpose of which is to exclude studies that do not provide a reliable answer to the review question. However, given that there were no accepted – or empirically tested – methods for excluding qualitative studies from syntheses on the basis of their quality [ 57 , 12 , 58 ], we included all studies regardless of their quality.
Nevertheless, our studies did differ according to the quality criteria they were assessed against and it was important that we considered this in some way. In systematic reviews of trials, 'sensitivity analyses' – analyses which test the effect on the synthesis of including and excluding findings from studies of differing quality – are often carried out. Dixon-Woods et al . [ 12 ] suggest that assessing the feasibility and worth of conducting sensitivity analyses within syntheses of qualitative research should be an important focus of synthesis methods work. After our thematic synthesis was complete, we examined the relative contributions of studies to our final analytic themes and recommendations for interventions. We found that the poorer quality studies contributed comparatively little to the synthesis and did not contain many unique themes; the better studies, on the other hand, appeared to have more developed analyses and contributed most to the synthesis.
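A simple version of this kind of sensitivity check can be computed by tabulating, per quality rating, how many theme contributions come from studies with that rating. The study identifiers, ratings and theme memberships below are invented for illustration and do not reproduce the review's actual data.

```python
# Hypothetical sensitivity check: count theme contributions by the
# assessed quality of the contributing studies, to see whether
# lower-quality studies drive the synthesis. All data are invented.
from collections import defaultdict

study_quality = {"study_A": "higher", "study_B": "higher", "study_C": "lower"}
theme_sources = {  # analytical theme -> studies contributing to it
    "role in health": {"study_A", "study_B"},
    "food meanings": {"study_A", "study_B", "study_C"},
    "exercising choice": {"study_A"},
}

contributions: dict[str, int] = defaultdict(int)
for studies in theme_sources.values():
    for study in studies:
        contributions[study_quality[study]] += 1
```

In this invented dataset the lower-quality study contributes to only one theme, mirroring the authors' finding that poorer studies contributed comparatively little to the synthesis.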
Conclusions

This paper has discussed the rationale for reviewing and synthesising qualitative research in a systematic way and has outlined one specific approach for doing this: thematic synthesis. While it is not the only method which might be used – and we have discussed some of the other options available – we present it here as a tested technique that has worked in the systematic reviews in which it has been employed.
We have observed that one of the key tasks in the synthesis of qualitative research is the translation of concepts between studies. While this translation is undertaken in most of the few syntheses of qualitative research that exist, there are few examples that specify in detail how it is actually carried out. The example above shows how we achieved the translation of concepts across studies through the use of line-by-line coding, the organisation of these codes into descriptive themes, and the generation of analytical themes through the application of a higher-level theoretical framework. This paper therefore also demonstrates how the methods and process of a thematic synthesis can be written up in a transparent way.
This paper goes some way to addressing concerns regarding the use of thematic analysis in research synthesis raised by Dixon-Woods and colleagues, who argue that the approach can lack transparency due to a failure to distinguish between 'data-driven' and 'theory-driven' approaches. Moreover they suggest that, "if thematic analysis is limited to summarising themes reported in primary studies, it offers little by way of theoretical structure within which to develop higher order thematic categories..." [[ 35 ], p47]. Part of the problem, they observe, is that the precise methods of thematic synthesis are unclear. Our approach contains a clear separation between the 'data-driven' descriptive themes and the 'theory-driven' analytical themes and demonstrates how the review questions provided a theoretical structure within which it became possible to develop higher order thematic categories.
The theme of 'going beyond' the content of the primary studies was discussed earlier. Citing Strike and Posner [ 59 ], Campbell et al . [[ 11 ], p672] also suggest that synthesis "involves some degree of conceptual innovation, or employment of concepts not found in the characterisation of the parts and a means of creating the whole" . This was certainly true of the example given in this paper. We used a series of questions, derived from the main topic of our review, to focus an examination of our descriptive themes, and our recommendations for interventions were not contained in the findings of the primary studies: these were new propositions generated by the reviewers in the light of the synthesis. The method also demonstrates that it is possible to synthesise without conceptual innovation. The initial synthesis, involving the translation of concepts between studies, was necessary in order for conceptual innovation to begin. One could argue that the conceptual innovation, in this case, was only necessary because the primary studies did not address our review question directly. In situations in which the primary studies are concerned directly with the review question, it may not be necessary to go beyond the contents of the original studies in order to produce a satisfactory synthesis (see, for example, Marston and King [ 60 ]). Conceptually, our analytical themes are similar to the ultimate product of meta-ethnographies: third order interpretations [ 11 ], since both are explicit mechanisms for going beyond the content of the primary studies and presenting this in a transparent way. The main difference between them lies in their purposes. Third order interpretations bring together the implications of translating studies into one another in their own terms, whereas analytical themes are the result of interrogating a descriptive synthesis by placing it within an external theoretical framework (our review question and sub-questions).
It may be, therefore, that analytical themes are more appropriate when a specific review question is being addressed (as often occurs when informing policy and practice), and third order interpretations should be used when a body of literature is being explored in and of itself, with broader, or emergent, review questions.
This paper is a contribution to the current developmental work taking place in understanding how best to bring together the findings of qualitative research to inform policy and practice. It is by no means the only method on offer but, by drawing on methods and principles from qualitative primary research, it benefits from the years of methodological development that underpins the research it seeks to synthesise.
Chalmers I: Trying to do more good than harm in policy and practice: the role of rigorous, transparent and up-to-date evaluations. Ann Am Acad Pol Soc Sci. 2003, 589: 22-40. 10.1177/0002716203254762.
Oakley A: Social science and evidence-based everything: the case of education. Educ Rev. 2002, 54: 277-286. 10.1080/0013191022000016329.
Cooper H, Hedges L: The Handbook of Research Synthesis. 1994, New York: Russell Sage Foundation
EPPI-Centre: EPPI-Centre Methods for Conducting Systematic Reviews. 2006, London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London, [ http://eppi.ioe.ac.uk/cms/Default.aspx?tabid=89 ]
Higgins J, Green S, (Eds): Cochrane Handbook for Systematic Reviews of Interventions 4.2.6. 2006, Updated September 2006. Accessed 24th January 2007, [ http://www.cochrane.org/resources/handbook/ ]
Petticrew M, Roberts H: Systematic Reviews in the Social Sciences: A practical guide. 2006, Oxford: Blackwell Publishing
Chalmers I, Hedges L, Cooper H: A brief history of research synthesis. Eval Health Prof. 2002, 25: 12-37. 10.1177/0163278702025001003.
Juni P, Altman D, Egger M: Assessing the quality of controlled clinical trials. BMJ. 2001, 323: 42-46. 10.1136/bmj.323.7303.42.
Mulrow C: Systematic reviews: rationale for systematic reviews. BMJ. 1994, 309: 597-599.
White H: Scientific communication and literature retrieval. The Handbook of Research Synthesis. Edited by: Cooper H, Hedges L. 1994, New York: Russell Sage Foundation
Campbell R, Pound P, Pope C, Britten N, Pill R, Morgan M, Donovan J: Evaluating meta-ethnography: a synthesis of qualitative research on lay experiences of diabetes and diabetes care. Soc Sci Med. 2003, 56: 671-684. 10.1016/S0277-9536(02)00064-3.
Dixon-Woods M, Bonas S, Booth A, Jones DR, Miller T, Sutton AJ, Shaw RL, Smith JA, Young B: How can systematic reviews incorporate qualitative research? A critical perspective. Qual Res. 2006, 6: 27-44. 10.1177/1468794106058867.
Sandelowski M, Barroso J: Handbook for Synthesising Qualitative Research. 2007, New York: Springer
Thorne S, Jensen L, Kearney MH, Noblit G, Sandelowski M: Qualitative meta-synthesis: reflections on methodological orientation and ideological agenda. Qual Health Res. 2004, 14: 1342-1365. 10.1177/1049732304269888.
Harden A, Garcia J, Oliver S, Rees R, Shepherd J, Brunton G, Oakley A: Applying systematic review methods to studies of people's views: an example from public health. J Epidemiol Community Health. 2004, 58: 794-800. 10.1136/jech.2003.014829.
Harden A, Brunton G, Fletcher A, Oakley A: Young People, Pregnancy and Social Exclusion: A systematic synthesis of research evidence to identify effective, appropriate and promising approaches for prevention and support. 2006, London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London, [ http://eppi.ioe.ac.uk/cms/Default.aspx?tabid=674 ]
Thomas J, Sutcliffe K, Harden A, Oakley A, Oliver S, Rees R, Brunton G, Kavanagh J: Children and Healthy Eating: A systematic review of barriers and facilitators. 2003, London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London, accessed 4 th July 2008, [ http://eppi.ioe.ac.uk/cms/Default.aspx?tabid=246 ]
Thomas J, Kavanagh J, Tucker H, Burchett H, Tripney J, Oakley A: Accidental Injury, Risk-Taking Behaviour and the Social Circumstances in which Young People Live: A systematic review. 2007, London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London, [ http://eppi.ioe.ac.uk/cms/Default.aspx?tabid=1910 ]
Bryman A: Quantity and Quality in Social Research. 1998, London: Unwin
Hammersley M: What's Wrong with Ethnography?. 1992, London: Routledge
Harden A, Thomas J: Methodological issues in combining diverse study types in systematic reviews. Int J Soc Res Meth. 2005, 8: 257-271. 10.1080/13645570500155078.
Oakley A: Experiments in Knowing: Gender and methods in the social sciences. 2000, Cambridge: Polity Press
Harden A, Oakley A, Oliver S: Peer-delivered health promotion for young people: a systematic review of different study designs. Health Educ J. 2001, 60: 339-353. 10.1177/001789690106000406.
Harden A, Rees R, Shepherd J, Brunton G, Oliver S, Oakley A: Young People and Mental Health: A systematic review of barriers and facilitators. 2001, London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London, [ http://eppi.ioe.ac.uk/cms/Default.aspx?tabid=256 ]
Rees R, Harden A, Shepherd J, Brunton G, Oliver S, Oakley A: Young People and Physical Activity: A systematic review of barriers and facilitators. 2001, London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London, [ http://eppi.ioe.ac.uk/cms/Default.aspx?tabid=260 ]
Shepherd J, Harden A, Rees R, Brunton G, Oliver S, Oakley A: Young People and Healthy Eating: A systematic review of barriers and facilitators. 2001, London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London, [ http://eppi.ioe.ac.uk/cms/Default.aspx?tabid=258 ]
Thomas J, Harden A, Oakley A, Oliver S, Sutcliffe K, Rees R, Brunton G, Kavanagh J: Integrating qualitative research with trials in systematic reviews: an example from public health. BMJ. 2004, 328: 1010-1012. 10.1136/bmj.328.7446.1010.
Davies P: What is evidence-based education?. Br J Educ Stud. 1999, 47: 108-121. 10.1111/1467-8527.00106.
Newman M, Thompson C, Roberts AP: Helping practitioners understand the contribution of qualitative research to evidence-based practice. Evid Based Nurs. 2006, 9: 4-7. 10.1136/ebn.9.1.4.
Popay J: Moving Beyond Effectiveness in Evidence Synthesis. 2006, London: National Institute for Health and Clinical Excellence
Noblit GW, Hare RD: Meta-Ethnography: Synthesizing qualitative studies. 1988, Newbury Park: Sage
Britten N, Campbell R, Pope C, Donovan J, Morgan M, Pill R: Using meta-ethnography to synthesise qualitative research: a worked example. J Health Serv Res Policy. 2002, 7: 209-215. 10.1258/135581902320432732.
Paterson B, Thorne S, Canam C, Jillings C: Meta-Study of Qualitative Health Research. 2001, Thousand Oaks, California: Sage
Dixon-Woods M, Cavers D, Agarwal S, Annandale E, Arthur A, Harvey J, Katbamna S, Olsen R, Smith L, Riley R, Sutton AJ: Conducting a critical interpretative synthesis of the literature on access to healthcare by vulnerable groups. BMC Med Res Methodol. 2006, 6: 35-10.1186/1471-2288-6-35.
Dixon-Woods M, Agarwal S, Jones D, Young B, Sutton A: Synthesising qualitative and quantitative evidence: a review of possible methods. J Health Serv Res Policy. 2005, 10: 45-53. 10.1258/1355819052801804.
Boyatzis RE: Transforming Qualitative Information. 1998, Sage: Cleveland
Braun V, Clarke V: Using thematic analysis in psychology. Qual Res Psychol. 2006, 3: 77-101. 10.1191/1478088706qp063oa. [ http://science.uwe.ac.uk/psychology/drvictoriaclarke_files/thematicanalysis%20.pdf ]
Silverman D, Ed: Qualitative Research: Theory, method and practice. 1997, London: Sage
Doyle LH: Synthesis through meta-ethnography: paradoxes, enhancements, and possibilities. Qual Res. 2003, 3: 321-344. 10.1177/1468794103033003.
Barroso J, Gollop C, Sandelowski M, Meynell J, Pearce PF, Collins LJ: The challenges of searching for and retrieving qualitative studies. Western J Nurs Res. 2003, 25: 153-178. 10.1177/0193945902250034.
Walters LA, Wilczynski NL, Haynes RB, Hedges Team: Developing optimal search strategies for retrieving clinically relevant qualitative studies in EMBASE. Qual Health Res. 2006, 16: 162-8. 10.1177/1049732305284027.
Wong SSL, Wilczynski NL, Haynes RB: Developing optimal search strategies for detecting clinically relevant qualitative studies in Medline. Medinfo. 2004, 11: 311-314.
Murphy E, Dingwall R, Greatbatch D, Parker S, Watson P: Qualitative research methods in health technology assessment: a review of the literature. Health Technol Assess. 1998, 2 (16):
Seale C: Quality in qualitative research. Qual Inq. 1999, 5: 465-478.
Spencer L, Ritchie J, Lewis J, Dillon L: Quality in Qualitative Evaluation: A framework for assessing research evidence. 2003, London: Cabinet Office
Boulton M, Fitzpatrick R, Swinburn C: Qualitative research in healthcare II: a structured review and evaluation of studies. J Eval Clin Pract. 1996, 2: 171-179. 10.1111/j.1365-2753.1996.tb00041.x.
Cobb A, Hagemaster J: Ten criteria for evaluating qualitative research proposals. J Nurs Educ. 1987, 26: 138-143.
Mays N, Pope C: Rigour and qualitative research. BMJ. 1995, 311: 109-12.
Medical Sociology Group: Criteria for the evaluation of qualitative research papers. Med Sociol News. 1996, 22: 68-71.
Alderson P: Listening to Children. 1995, London: Barnardo's
Egger M, Davey-Smith G, Altman D: Systematic Reviews in Health Care: Meta-analysis in context. 2001, London: BMJ Publishing
Sandelowski M, Barroso J: Finding the findings in qualitative studies. J Nurs Scholarsh. 2002, 34: 213-219. 10.1111/j.1547-5069.2002.00213.x.
Sandelowski M: Using qualitative research. Qual Health Res. 2004, 14: 1366-1386. 10.1177/1049732304269672.
Thomas J, Brunton J: EPPI-Reviewer 3.0: Analysis and management of data for research synthesis. EPPI-Centre software. 2006, London: EPPI-Centre, Social Science Research Unit, Institute of Education
Fisher M, Qureshi H, Hardyman W, Homewood J: Using Qualitative Research in Systematic Reviews: Older people's views of hospital discharge. 2006, London: Social Care Institute for Excellence
Dixey R, Sahota P, Atwal S, Turner A: Children talking about healthy eating: data from focus groups with 300 9–11-year-olds. Nutr Bull. 2001, 26: 71-79. 10.1046/j.1467-3010.2001.00078.x.
Daly A, Willis K, Small R, Green J, Welch N, Kealy M, Hughes E: Hierarchy of evidence for assessing qualitative health research. J Clin Epidemiol. 2007, 60: 43-49. 10.1016/j.jclinepi.2006.03.014.
Popay J: Moving beyond floccinaucinihilipilification: enhancing the utility of systematic reviews. J Clin Epidemiol. 2005, 58: 1079-80. 10.1016/j.jclinepi.2005.08.004.
Strike K, Posner G: Types of synthesis and their criteria. Knowledge Structure and Use: Implications for synthesis and interpretation. Edited by: Ward S, Reed L. 1983, Philadelphia: Temple University Press
Marston C, King E: Factors that shape young people's sexual behaviour: a systematic review. The Lancet. 2006, 368: 1581-86. 10.1016/S0140-6736(06)69662-1.
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2288/8/45/prepub
The authors would like to thank Elaine Barnett-Page for her assistance in producing the draft paper, and David Gough, Ann Oakley and Sandy Oliver for their helpful comments. The review used as an example in this paper was funded by the Department of Health (England). The methodological development was supported by the Department of Health (England) and the ESRC through the Methods for Research Synthesis Node of the National Centre for Research Methods. In addition, Angela Harden held a senior research fellowship funded by the Department of Health (England), December 2003 – November 2007. The views expressed in this paper are those of the authors and are not necessarily those of the funding bodies.
Authors and affiliations.
EPPI-Centre, Social Science Research Unit, Institute of Education, University of London, UK
James Thomas & Angela Harden
Correspondence to James Thomas .
The authors declare that they have no competing interests.
Both authors contributed equally to the paper and read and approved the final manuscript.
James Thomas and Angela Harden contributed equally to this work.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( http://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Thomas, J., Harden, A. Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Med Res Methodol 8 , 45 (2008). https://doi.org/10.1186/1471-2288-8-45
Received : 17 April 2008
Accepted : 10 July 2008
Published : 10 July 2008
DOI : https://doi.org/10.1186/1471-2288-8-45
- Qualitative Research
- Primary Study
- Analytical Theme
- Healthy Eating
- Review Question